Rationality Quotes: June 2011

post by Oscar_Cunningham · 2011-06-01T08:17:07.695Z · LW · GW · Legacy · 470 comments

Y'all know the rules:

Comments sorted by top scores.

comment by Oscar_Cunningham · 2011-06-01T08:20:21.264Z · LW(p) · GW(p)

Just because you two are arguing, doesn't mean one of you is right.

Maurog: http://forums.xkcd.com/viewtopic.php?f=9&t=14222

Replies from: chatquitevoit
comment by chatquitevoit · 2011-06-24T17:21:12.138Z · LW(p) · GW(p)

...or that both of you are wrong. Most times people argue, neither party actually has a fundamental grasp of their own position. If both did, it would either change the argument to an ENTIRELY different and more essential one, or dissolve it. And either of those options is of absolute gain for the participants.

Not that I can do anything about this aside from in my own actions, but it's annoying as hell sometimes.

comment by Miller · 2011-06-01T23:52:54.132Z · LW(p) · GW(p)

The megalomania of the genes does not mean that benevolence and cooperation cannot evolve, any more than the law of gravity proves that flight cannot evolve. It means only that benevolence, like flight, is a special state of affairs in need of an explanation, not something that just happens.

  • Pinker, The Blank Slate
Replies from: Antisuji
comment by Antisuji · 2011-06-02T17:41:47.614Z · LW(p) · GW(p)

The great thing about this quote for me is that when I read it I can hear Pinker's voice saying it in my mind.

comment by NihilCredo · 2011-06-01T15:06:46.921Z · LW(p) · GW(p)

A little long, but I don't see the possibility of a good cut:

“Other men were stronger, faster, younger, why was Syrio Forel the best? I will tell you now.” He touched the tip of his little finger lightly to his eyelid. “The seeing, the true seeing, that is the heart of it.

“Hear me. The ships of Braavos sail as far as the winds blow, to lands strange and wonderful, and when they return their captains fetch queer animals to the Sealord’s menagerie. Such animals as you have never seen, striped horses, great spotted things with necks as long as stilts, hairy mouse-pigs as big as cows, stinging manticores, tigers that carry their cubs in a pouch, terrible walking lizards with scythes for claws. Syrio Forel has seen these things.

“On the day I am speaking of, the first sword was newly dead, and the Sealord sent for me. Many bravos had come to him, and as many had been sent away, none could say why. When I came into his presence, he was seated, and in his lap was a fat yellow cat. He told me that one of his captains had brought the beast to him, from an island beyond the sunrise. ‘Have you ever seen her like?’ he asked of me.

“And to him I said, ‘Each night in the alleys of Braavos I see a thousand like him,’ and the Sealord laughed, and that day I was named the first sword.”

Arya screwed up her face. “I don’t understand.”

Syrio clicked his teeth together. “The cat was an ordinary cat, no more. The others expected a fabulous beast, so that is what they saw. How large it was, they said. It was no larger than any other cat, only fat from indolence, for the Sealord fed it from his own table. What curious small ears, they said. Its ears had been chewed away in kitten fights. And it was plainly a tomcat, yet the Sealord said ‘her,’ and that is what the others saw. Are you hearing?”

Arya thought about it. “You saw what was there.”

“Just so. Opening your eyes is all that is needing. The heart lies and the head plays tricks with us, but the eyes see true. Look with your eyes. Hear with your ears. Taste with your mouth. Smell with your nose. Feel with your skin. Then comes the thinking, afterward, and in that way knowing the truth.”

- George R.R. Martin, "A Game of Thrones"

Replies from: MixedNuts, MixedNuts, gwern
comment by MixedNuts · 2011-06-01T15:49:29.292Z · LW(p) · GW(p)

This is beautiful, and inspiring. In fact, I predict LW will do better if we have an introductory post consisting of this quote and "That's our goal. Come on in and let's work on that." (It would probably cause copyright trouble, though.)

It's not a pure illustration, though. Maybe the others thought "Huh, that's just a regular cat. But if I say that, the king might have me killed in the kind of way people die in Martin books. Better kiss some ass."

Replies from: Dorikka
comment by Dorikka · 2011-06-02T03:57:01.270Z · LW(p) · GW(p)

It's not a pure illustration, though. Maybe the others thought "Huh, that's just a regular cat. But if I say that, the king might have me killed in the kind of way people die in Martin books. Better kiss some ass."

I agree that the existence of this factor makes whether someone announces that it's a normal cat a poor indication of whether they actually realized such. However, I think it's reasonable to hypothesize that Syrio was looking for someone who both recognized that he was holding a normal cat and was willing to tell him such.

comment by MixedNuts · 2011-06-01T21:08:56.496Z · LW(p) · GW(p)

What are the tigers with a pouch for their young? There seem to be no large carnivorous marsupials. A candidate is the marsupial lion (which is also striped), but it's been extinct for a while.

Edit: Ah, the thylacine ("Tasmanian wolf") was also known as the Tasmanian tiger. Yay for learning!

Replies from: Alicorn, brazzy
comment by Alicorn · 2011-06-01T21:10:05.483Z · LW(p) · GW(p)

Thylacines, maybe.

comment by brazzy · 2011-06-03T09:18:14.947Z · LW(p) · GW(p)

The quote is from a fantasy book. There are dragons in it...

Replies from: Alicorn, MixedNuts, Nornagest
comment by Alicorn · 2011-06-03T17:36:26.361Z · LW(p) · GW(p)

Yes, but "striped horses" have an obvious Earthly referent, and so it was not unreasonable to suppose that marsupial tigers might too (as indeed they have).

comment by MixedNuts · 2011-06-06T06:37:17.677Z · LW(p) · GW(p)

Yup. I don't know if that's what the terrible walking lizards are, or if they are that other kind of dragon or something in the same family.

comment by Nornagest · 2011-06-03T18:01:38.751Z · LW(p) · GW(p)

Dragons aren't much less physically possible than FTL travel, and no one complains about quoting sources that use FTL as a plot device.

Of course, I imagine this is really about the romanticism vs. enlightenment divide in literature, but dismissing a relevant and well-written quote on genre grounds nonetheless seems a little biased.

comment by gwern · 2011-06-01T20:34:20.715Z · LW(p) · GW(p)

Hrm. How would one tell it was not female? Was it sitting on the king's lap in a rather unlikely fashion?

Replies from: None, NihilCredo
comment by [deleted] · 2011-06-01T20:39:59.659Z · LW(p) · GW(p)

Tomcats are usually stouter and more muscular, and have a more robust head shape? Also, they have pretty large and conspicuous balls.

Replies from: gwern
comment by gwern · 2011-06-02T00:06:02.295Z · LW(p) · GW(p)

It's a large cat by stipulation.

Also, they have pretty large and conspicuous balls.

What, even when sitting nicely on someone's lap?

Replies from: taryneast
comment by taryneast · 2011-06-02T12:33:11.273Z · LW(p) · GW(p)

A large indolent cat is unlikely to actually sit on somebody's lap. In my experience they sprawl.

comment by NihilCredo · 2011-06-04T18:10:54.395Z · LW(p) · GW(p)

I think the "plainly" meant that his jewels were in plain sight.

comment by Richard_Kennaway · 2011-06-01T11:00:49.353Z · LW(p) · GW(p)

If the fossil record shows more dinosaur footprints in one period than another, it does not necessarily mean that there were more dinosaurs -- it may be that there was more mud.

Elise E. Morse-Gagné

comment by Patrick · 2011-06-03T14:13:04.401Z · LW(p) · GW(p)

If a process is potentially good, but 90+% of the time smart and well-intentioned people screw it up, then it's a bad process. So they can only say it's the team's fault so many times before it's not really the team's fault.

comment by jaimeastorga2000 · 2011-06-01T15:58:08.089Z · LW(p) · GW(p)

The bulk of political discourse today is purposefully playing telephone with facts in ways that couldn't be done in the Information Age if people just had the know-how to check for themselves. Comprehending complex sentences is something that can be done by first grade, and comprehending complex concepts and issues is without a doubt something better learned in math than in English, where one learns to obfuscate concepts and issues, and to play to baser emotions. Granted, one also learns to recognize and to defend against these tactics, but it still can't hold a candle to the "mental gymnastics" referenced above. Do you realize what the world looks like if you've got a background in math? Imagine signs reading DANGER: KEEP OUT are planted everywhere, but people purposefully and proudly ignore them, treating it as laughably eccentric to have learned more than half the alphabet, approaching en masse and dragging you with them.

~From the Math It Just Bugs Me page, TV Tropes

comment by MichaelGR · 2011-06-02T18:37:39.043Z · LW(p) · GW(p)

At one of our dinners, Milton recalled traveling to an Asian country in the 1960s and visiting a worksite where a new canal was being built. He was shocked to see that, instead of modern tractors and earth movers, the workers had shovels. He asked why there were so few machines. The government bureaucrat explained: “You don’t understand. This is a jobs program.” To which Milton replied: “Oh, I thought you were trying to build a canal. If it’s jobs you want, then you should give these workers spoons, not shovels.”

-Milton Friedman story

Replies from: brazzy, James_K, Document, grendelkhan
comment by brazzy · 2011-06-03T09:09:01.455Z · LW(p) · GW(p)

A few points come to mind:

  • Presumably they also wanted a canal and there may well be an optimum point where you maximize some sort of combined utility
  • Jobs programs, even those that create nothing particularly useful, are about giving people a sense of worth and accomplishment, otherwise you could just hand out money. Obviously futile make-work activities like the one suggested achieve the opposite of that and are, indeed, often deliberately used to punish and humiliate people.
Replies from: Mercy
comment by Mercy · 2011-06-06T23:51:22.568Z · LW(p) · GW(p)

"They" is the tricky bit there. Presumably some people wanted a canal, some other people wanted jobs, and for that matter presumably some people wanted money to go to the construction company, who've got an opening for a government liaison consultant coming up in five years' time. There's little reason to think the equilibrium is welfare maximising.

Replies from: MarkusRamikin
comment by MarkusRamikin · 2011-06-26T07:31:03.056Z · LW(p) · GW(p)

Probably, but Brazzy's explanation, without adding all those other variables, fits well enough to show why Milton's statement might have been missing something important. The point of a jobs program is that society pays some cost (of not using the optimal method, i.e. more machines and fewer workers) in order to keep its members out of the unemployment trap. To propose, even as a deliberate reductio ad absurdum, that this would go just as well with spoons rather than shovels is not rationality, it's Spock-logic.

Now I'm quite willing to suppose that he understood the usefulness of such programs as an economist and overall had good reasons to see them as not worth it, or that some other measure would do better, but that particular quote fails to show it.

comment by James_K · 2011-06-02T19:07:42.503Z · LW(p) · GW(p)

For the record, I'm pretty sure this story is apocryphal, though that doesn't take away from its value as a rationality quote.

comment by Document · 2011-06-08T00:53:19.054Z · LW(p) · GW(p)

Seems like more of a libertarianism quote to me.

Replies from: MichaelGR
comment by MichaelGR · 2011-06-08T17:29:12.486Z · LW(p) · GW(p)

It can be that, but I think it also illustrates the importance of understanding people's real goals and intentions, and not assuming that they are what they appear to be at first glance.

comment by grendelkhan · 2012-05-31T16:54:10.114Z · LW(p) · GW(p)

The earliest known citation of the anecdote is from 1935, quoting Canadian William Aberhart. Milton Friedman certainly told the story, and may have invented the somewhat snappier form quoted here. (Interestingly, William Aberhart was speaking for the Social Credit Party, which was hardly libertarian.)

comment by jasonmcdowell · 2011-06-01T21:23:48.701Z · LW(p) · GW(p)

I wish there was no illness, I don't care if an old doctor starves.

Loā Hô, a Taiwanese physician and poet.

Replies from: MixedNuts, gwern, Document, Document
comment by MixedNuts · 2011-06-01T21:37:14.768Z · LW(p) · GW(p)

I care. If illness is abolished and a doctor of any age is starving, they can stay at my place and I'll feed them. Alternately, we could raise taxes slightly to finance government-mandated programs for training and reconversion of young doctors and early retirement for old doctors.

In other words: beware of tough-mindedly accepting bad consequences of overall good policies. Look for a superior alternative first.

Replies from: SilasBarta, Normal_Anomaly, endoself
comment by SilasBarta · 2011-06-01T22:01:56.543Z · LW(p) · GW(p)

I agree. Unfortunately, the way it actually works is, "No, we can't allow your universal cure -- the AMA/[your country's MD association] is upset."

"No, we can't accept your free widgets -- that would cost our widgetmakers major sales."

"No, I don't want you to work for me for free -- that would put domestic servants out of jobs."

"No, I don't want to marry you -- that would hurt the income of local prostitutes."

"No, I don't want your solar radiation -- that would put our light and heat industries out of business."

Edit: Even better: "No, I don't want you to be my friend -- what about my therapist's loss of revenue?"

Replies from: wedrifid, MixedNuts
comment by wedrifid · 2011-06-01T22:55:38.853Z · LW(p) · GW(p)

"No, I don't want to marry you -- that would hurt the income of local prostitutes."

That is a brilliant line. Now I'm trying to work out how to create a circumstance in which to use it.

Replies from: NihilCredo
comment by NihilCredo · 2011-06-04T18:18:47.203Z · LW(p) · GW(p)

The worst thing about frequenting prostitutes no longer being socially acceptable, even for males, is that there are so many quips and jokes that just don't work any more.

Replies from: TeMPOraL
comment by TeMPOraL · 2013-07-07T20:39:14.601Z · LW(p) · GW(p)

Was it ever socially acceptable?

comment by MixedNuts · 2011-06-01T22:10:59.367Z · LW(p) · GW(p)

IRL it's the pharmaceutical labs that block it, not the docs.

That's one of the reasons why you try to mitigate bad side effects: so that people who'll suffer on net from the effects will STFU.

Replies from: SilasBarta
comment by SilasBarta · 2011-06-01T23:09:03.085Z · LW(p) · GW(p)

That's one of the reasons why you try to mitigate bad side effects: so that people who'll suffer on net from the effects will STFU.

In theory, yes. And I'd much prefer a one-time ("extortion") payment to a domestic industry to allow cheaper imports, than allow the global economy to remain in a perpetual rut just so a few people don't have to change jobs.

But the fact that this alternative is Pareto-efficient doesn't mean the potential sufferers will STFU -- rather, it costs the alternative its public support, probably because the average person, sympathetic to the domestic industry, still sees it as extortion. And the people in the domestic industry don't want to see themselves as extortioners either! (Relevant Landsburg post.)

comment by Normal_Anomaly · 2011-06-03T11:12:51.312Z · LW(p) · GW(p)

IAWYC. One quibble:

we could raise taxes slightly to finance government-mandated programs for...early retirement for old doctors.

If illness is abolished, what's the point of retirement?

Replies from: NihilCredo
comment by NihilCredo · 2011-06-04T18:20:08.985Z · LW(p) · GW(p)

To keep dusky sports pubs in business, of course.

comment by endoself · 2011-06-01T21:42:29.676Z · LW(p) · GW(p)

That can be a danger, but I think starvation is an obvious enough problem that people won't take this literally.

comment by gwern · 2011-06-07T15:59:54.038Z · LW(p) · GW(p)

What I really like about this quote is that I'm fairly sure the 'old doctor' is himself.

comment by Document · 2011-06-01T21:44:04.167Z · LW(p) · GW(p)

Starvation is an illness. (Or food dependency if you prefer.)

Replies from: Alicorn
comment by Document · 2011-07-13T17:31:58.671Z · LW(p) · GW(p)

SMBC #2305 is another, more cynical instance of the false dichotomy.

comment by Dreaded_Anomaly · 2011-06-02T21:27:21.986Z · LW(p) · GW(p)

If you want to know the way nature works, we looked at it, carefully... that's the way it looks! You don't like it... go somewhere else! To another universe! Where the rules are simpler, philosophically more pleasing, more psychologically easy. I can't help it! OK! If I'm going to tell you honestly what the world looks like to the human beings who have struggled as hard as they can to understand it, I can only tell you what it looks like. And I cannot make it any simpler, I'm not going to do this, I'm not going to simplify it, and I'm not going to fake it. I'm not going to tell you it's something like a ball bearing inside a spring, it isn't. So I'm going to tell you what it really is like, and if you don't like it, that's too bad.

— Richard Feynman, the QED Lectures at the University of Auckland

Replies from: gwern
comment by gwern · 2011-06-07T16:01:26.030Z · LW(p) · GW(p)

Reminds me of a Schneier quote that I like:

'Every time I write about the impossibility of effectively protecting digital files on a general-purpose computer, I get responses from people decrying the death of copyright.

"How will authors and artists get paid for their work?" they ask me.

Truth be told, I don't know. I feel rather like the physicist who just explained relativity to a group of would-be interstellar travelers, only to be asked: "How do you expect us to get to the stars, then?"

I'm sorry, but I don't know that, either.'

"Protecting Copyright in the Digital World", Bruce Schneier http://www.schneier.com/crypto-gram-0108.html#7

comment by Patrick · 2011-06-01T13:47:26.862Z · LW(p) · GW(p)

I didn't do the engineering, and I didn't do the math, because I thought I understood what was going on and I thought I made a good rig. But I was wrong. I should have done it.

Jamie Hyneman

Replies from: sketerpot
comment by sketerpot · 2011-06-03T19:28:52.676Z · LW(p) · GW(p)

Of course, that depends on how costly failure is, compared to the up-front analysis that would make failure less likely. I don't know who said "Fail fast, fail cheap," but it's a good counterpoint quote.

comment by wedrifid · 2011-06-01T09:58:59.953Z · LW(p) · GW(p)

If you want to beat the market, you have to do something different from what everyone else is doing, and you have to be right.

David Bennett

comment by [deleted] · 2011-06-01T13:25:20.483Z · LW(p) · GW(p)

.

Replies from: roland
comment by roland · 2011-06-04T17:28:05.444Z · LW(p) · GW(p)

This quote is from a passage where Darwin is talking about religion:

At present the most usual argument for the existence of an intelligent God is drawn from deep inward conviction and feelings which are experienced by most persons. But it cannot be doubted that Hindoos, Mahomadans and others might argue in the same manner and with equal force in favour of the existence of one God, or of many Gods, or as with the Buddhists of no God....

Formerly I was led by feelings such as those just referred to, (although I do not think that the religious sentiment was ever strongly developed in me), to the firm conviction of the existence of God, and of the immortality of the soul... ...This argument would be a valid one, if all men of all races had the same inward conviction of the existence of one God; but we know this is very far from being the case. Therefore I cannot see that such inward convictions and feelings are of any weight as evidence of what really exists....

Source: http://www.age-of-the-sage.org/faith_vs_reason_debate.html

comment by Unnamed · 2011-06-01T19:11:09.871Z · LW(p) · GW(p)

Violence is not a way of getting where you want to go, only more quickly. Its existence changes your destination. If you use it, you had better be prepared to find yourself in the kind of place it takes you to.

-hilzoy

Replies from: wedrifid, Dorikka, AdeleneDawner
comment by wedrifid · 2011-06-01T23:01:42.914Z · LW(p) · GW(p)

If you use it, you had better be prepared to find yourself in the kind of place it takes you to.

Including such destinations as "Not being the unwilling sex toy of the big bald guy while in prison". Although if you also don't use 'fraud' you may find yourself not in jail in the first place - but it's not always so simple. It also leads you to the destination "still having your food, possessions, dignity and social status in your schoolyard despite having no control of whether you wish to be subject to that environment".

Replies from: Unnamed, Will_Sawin
comment by Unnamed · 2011-06-02T04:15:37.740Z · LW(p) · GW(p)

I didn't read the quote as a blanket opposition to violence. It's a warning about one thing to consider before you choose violence.

I also didn't read the quote as only being about violence. It also makes a more general point about means and ends. When you're considering an action in pursuit of a goal, you should consider the action in its own right and try to predict where it is likely to lead. Don't settle on an action just because it seems to fit with the goal. This is especially relevant when you consider using violence, coercion, manipulation, or dishonesty for a noble purpose, but it also applies more generally.

comment by Will_Sawin · 2011-06-02T03:02:19.160Z · LW(p) · GW(p)

Of course, sometimes one is prepared to find yourself in the kind of place it takes you to. The quote seems to already acknowledge this possibility.

Replies from: wedrifid
comment by wedrifid · 2011-06-02T05:10:07.033Z · LW(p) · GW(p)

The quote seems to already acknowledge this possibility.

It does, hence allowing for me to phrase the counterpoint within the quote's own framework.

comment by Dorikka · 2011-06-02T04:04:49.683Z · LW(p) · GW(p)

This is confusing. Does your use of violence change your intended destination, or does it just exert certain optimization pressures on future world-states, as do all of your other actions?

Replies from: brazzy, AdeleneDawner
comment by brazzy · 2011-06-03T09:34:26.159Z · LW(p) · GW(p)

Read the (long) linked-to article from which the quote stems. Basically the point is that using violence to achieve a goal teaches the people involved that violence is an effective, legitimate way to achieve goals - and at some later point they will invariably have conflicting goals.

Replies from: wedrifid
comment by wedrifid · 2011-06-03T10:47:46.052Z · LW(p) · GW(p)

See also: Live by the sword, die by the sword.

comment by AdeleneDawner · 2011-06-02T18:58:49.059Z · LW(p) · GW(p)

I'm not sure there's a useful distinction between those two options. Your future selves are part of the future world-states that it's exerting pressure on, and not exempt from that pressure.

comment by CSalmon · 2011-06-04T01:26:27.225Z · LW(p) · GW(p)

Rin: What are clouds? I always thought they were thoughts of the sky or something like that. Because you can't touch them.

[ . . . ]

Hisao: Clouds are water. Evaporated water. You know they say that almost all of the water in the world will at some point of its existence be a part of a cloud. Every drop of tears and blood and sweat that comes out of you, it'll be a cloud. All the water inside your body too, it goes up there some time after you die. It might take a while though.

Rin: Your explanation is better than any of mine.

Hisao: Because it's true.

Rin: That must be it.

Katawa Shoujo

Replies from: sketerpot
comment by sketerpot · 2011-06-04T22:21:29.138Z · LW(p) · GW(p)

For those who are interested: Katawa Shoujo is a visual novel currently in beta, which you can freely download on Windows, Mac OS, and Linux.

Replies from: gwern, MarkusRamikin, tut
comment by gwern · 2011-06-07T16:04:29.766Z · LW(p) · GW(p)

You know, when I first heard about Katawa Shoujo, I was horrified. (Struck a little too close to home.) But if the rest of the writing is on par with that, I might have to play it.

Replies from: sketerpot
comment by sketerpot · 2011-06-07T20:27:43.376Z · LW(p) · GW(p)

The writing isn't all shining gems of dialogue, but it's solidly entertaining, and not nearly as horrifying as the premise might make it sound. The various disabilities are treated more as inconvenient body quirks, rather than defining features; the characters are defined by their personalities and actions. If Katawa Shoujo has a message, that's it.

Anyway, I got a few very enjoyable hours out of it.

comment by MarkusRamikin · 2013-05-13T16:01:40.058Z · LW(p) · GW(p)

That was surprisingly good.

As to the quote, I wonder if Rin was mocking him.

comment by tut · 2011-06-14T21:54:38.295Z · LW(p) · GW(p)

Why does it matter what OS you use when downloading a text?

Replies from: Nic_Smith
comment by Nic_Smith · 2011-06-14T22:00:30.403Z · LW(p) · GW(p)

A visual novel (ビジュアルノベル, bijuaru noberu) is an interactive fiction game featuring mostly static graphics, usually with anime-style art, or occasionally live-action stills or video footage. As the name might suggest, they resemble mixed-media novels or tableau vivant stage plays. -- Wikipedia

I.e., it's a video game.

comment by Nick_Roy · 2011-06-01T18:25:51.496Z · LW(p) · GW(p)

The whole universe sat there, open to the man who could make the right decisions.

Frank Herbert, "Dune"

comment by Jayson_Virissimo · 2011-06-02T00:34:21.982Z · LW(p) · GW(p)

In the study of reliable processes for arriving at belief, philosophers will become technologically obsolescent. They will be replaced by cognitive and computer scientists, workers in artificial intelligence, and others.

Robert Nozick, The Nature of Rationality

If you haven't read this book yet, do so. It is basically LessWrongism circa 1993.

Replies from: FiftyTwo, Will_Sawin
comment by FiftyTwo · 2011-06-13T16:46:43.531Z · LW(p) · GW(p)

What do you mean by Philosophy in that quote? Contemporary philosophy already incorporates knowledge from other fields, including computer science, and this is an ongoing process of adaptation.

If it refers to 'philosophy' as some static corpus of knowledge from before a certain point then yes it is trivially true.

Replies from: MixedNuts
comment by MixedNuts · 2011-06-13T17:11:48.619Z · LW(p) · GW(p)

When they start making real, mathy progress, they'll stop calling themselves philosophers, like natural philosophers are now called physicists.

Replies from: FiftyTwo
comment by FiftyTwo · 2011-06-13T17:58:29.605Z · LW(p) · GW(p)

If we are arguing from the common uses of the term 'philosophers' then that isn't the case. Logicians make progress in the same manner as mathematicians, and are still classed as philosophers. (They also have strong professional links with computer scientists, but that's a side point.)

If your definition is that Philosopher = person who does not make "real, mathy progress", then it's just a tautology. All members of this set, who don't make progress, will not make progress, become obsolescent, and be replaced.

Sorry if I sound confrontational. But I am unsure what larger point is being made about the methods/knowledge of philosophers. It seems to primarily be a tribal "computer scientists good, philosophers bad" statement, unless something precise and meaningful is meant by "philosophers."

Replies from: MixedNuts
comment by MixedNuts · 2011-06-13T19:56:22.177Z · LW(p) · GW(p)

Yeah, common usage. Things like "Are they on the payroll of the Philosophy Department?", and "Do students study it to avoid getting into hard sciences?". (I acknowledge that the philosophy I was taught covers long-dead white guys, not modern breakthroughs - the sorry state of philosophy classes is only a weak point against philosophy, like the sorry state of science journalism.)

I got the impression that the people who actually invented logic (like Boole or Gödel) were either classified as mathematicians in their time, or are classified as such nowadays even though they called themselves philosophers. (Like we call early physicists physicists, not philosophers.) Counterexample?

Replies from: Larks, FiftyTwo
comment by Larks · 2011-06-14T17:45:14.334Z · LW(p) · GW(p)

15 of the senior philosophers at Oxford list Logic or Rationality as one of their areas of expertise, all philosophy students study at least first-order logic, and further courses are offered.

Boolos, Putnam, Quine, and Kripke are notable philosopher-logicians.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-14T18:04:11.430Z · LW(p) · GW(p)

That doesn't quite answer his question, I believe.

You have to point not just to people called logicians, by themselves or others, but to useful logical progress made by such people.

Replies from: Larks
comment by Larks · 2011-06-14T23:22:32.239Z · LW(p) · GW(p)

Boolos did Frege's theorem, Quine did New Foundations, among other things, and Kripke gave us our standard modal-logic semantics... I don't know how useful they are, but they're definitely logic.

comment by FiftyTwo · 2011-06-13T20:50:21.373Z · LW(p) · GW(p)

I agree, the long dead white guys approach to Philosophy is far too prominent particularly in introductory courses, which of course attracts all the wrong sort of people into it. [The stereotype of the pretentious freshman relativist is sadly far too common.]

At least my own experience includes studying Godel, Russell etc in the context of philosophy, and there are a great many logic postgrads (on the payroll as you said) whose papers are highly technical and mathematical, and have direct applications in computing and other practical sciences.

On a wider note, the best 'principled' division between philosophy and hard science in my opinion is between the methodology of induction vs deduction. Not sure where that would put computer science.

But in the context of the original quote, if that's the division then I'd disagree that philosophers are obsolete, as most of the techniques we use for considering the meaning, interactions, and validity of beliefs originated in philosophy and are still developed there.

Replies from: MixedNuts
comment by MixedNuts · 2011-06-13T20:58:59.161Z · LW(p) · GW(p)

Where can I read badass philosophy? (There's some incredulity here. It's sad that the opinion of a domain expert isn't enough to convince me philosophy isn't a rotten field.) Note that I don't doubt that philosophers have said stuff about Gödel, but I want the Gödel-equivalent work.

most of the techniques we use for considering the meaning, interactions, and validity of beliefs originated in philosophy and are still developed there

That would mostly be probability theory, right? That left the philosophy-cradle long ago - or can you show me the modern developments?

Replies from: Emile, FiftyTwo, diegocaleiro, Peterdjones
comment by Emile · 2011-06-14T08:37:45.255Z · LW(p) · GW(p)

Where can I read badass philosophy?

Nietzsche is pretty badass in his own way, though he doesn't write the same kind of stuff analytical philosophers write about (it seems to me that it's two different genres that just happen to share a name). It's more about social / intellectual / historical commentary than about science.

Replies from: orthonormal, MixedNuts
comment by orthonormal · 2011-06-14T15:20:47.134Z · LW(p) · GW(p)

It's sort of philosophy crack: intensely pleasurable and satisfying to read, but wrong about 90% of things. (The other 10% consists of brilliant original insights that no other philosopher within a century of him could have seen. On the other hand, it can be difficult to distinguish these from the rest of his corpus.)

comment by MixedNuts · 2011-06-14T10:46:56.017Z · LW(p) · GW(p)

I agree, and bought one of these! But he's not doing any work, just saying "Transhumanism will rock, when it's invented sometime after my death!". Sort of a motivational poster.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-14T12:14:57.653Z · LW(p) · GW(p)

Nietzsche always struck me as non-transhumanist. Quick google tells me Bostrom agrees with me about this and people seem capable of making long arguments for and against.

Nietzsche is a prime example of a philosopher that pretty much everything I've understood him saying, I've disagreed with. But he is quite badass.

Replies from: MixedNuts
comment by MixedNuts · 2011-06-14T13:51:23.969Z · LW(p) · GW(p)

The source is Bostrom's 2004 paper A history of transhumanist thought, page 4. I'll paraphrase the difference he lists:

Transhumanism uses tech to change bodies and minds, Nietzsche uses old pathways.

Yeah, that's his mistake. He points at the right goal, but can't say how to get there. As I said, no real work.

Transhumanism wants to boost everyone, Nietzsche only a select few.

I think that's unfair to Freddy. His Zarathustra puppet goes around telling everyone to do it, but they aren't interested. Obviously he was envisioning individual progress as opposed to inventing tech then distributing it to Muggles, so he thinks that if few people want to put in the effort then few people will get boosted.

Transhumanism likes individual liberties.

I don't understand what Bostrom means by that. AFAICT, Fred is huge on individual liberties.

Transhumanism comes from the Enlightenment.

I fail to see the relevance.

What I got from reading Nietzsche (before I got any exposure to transhumanism) was an extremely pretty way of saying "Striving to improve yourself a lot is awesome". No argument why, no proposed methods, some very sucky assumptions about what it'd be like. Just a cheer, and an invitation for people who share this goal to band together and work on it. Which is what transhumanists have done.

Replies from: Will_Sawin, Marius, Emile
comment by Will_Sawin · 2011-06-14T14:34:24.863Z · LW(p) · GW(p)

Nietzsche seems to always see the project of self-improvement in opposition to the project of building a functional society out of multiple people who don't kill each other, and the second one always seemed more important to me.

It's hard for me to understand what he's saying because he doesn't engage (much? at all?) with Actually True Morality, that is the utilitarian/"group is just a sum of individuals" paradigm. The question of whether it's OK for the strong to bully the weak almost doesn't seem to interest him.

One man is not a whole lot better than one ape, but a group of men is infinitely superior to a group of apes.

ETA: I often like to think of FAI as not the ultimate transhuman, but the ultimate institution/legal system/moral code.

Replies from: orthonormal, MixedNuts
comment by orthonormal · 2011-06-14T15:14:06.422Z · LW(p) · GW(p)

You might say that Nietzsche takes opposition to the Repugnant Conclusion to an extreme: his philosophy values humanity by the $L^\infty$ norm rather than the $L^1$ norm.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-14T16:53:38.205Z · LW(p) · GW(p)

(Assuming that individual value is nonnegative.)

Replies from: orthonormal
comment by orthonormal · 2011-06-14T18:45:57.852Z · LW(p) · GW(p)

That's an emendation, not the original; in most of his mid-to-late works, he really does mean that the absolute magnitude of a character, without reference to its direction, is of value.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-14T18:49:34.175Z · LW(p) · GW(p)

But certainly the people who believe in the $L^1$ norm don't take the absolute value...

Replies from: None
comment by [deleted] · 2011-06-14T19:33:52.034Z · LW(p) · GW(p)

What? The L^1 norm is the integral of the absolute value of the function.

In this thread: people using mathematics where it doesn't belong.

Replies from: Will_Sawin, orthonormal
comment by Will_Sawin · 2011-06-14T20:07:05.433Z · LW(p) · GW(p)

I should say:

No one believes in the $L^1$ norm. There is only Nietzsche, who believes in $L^\infty$, and utilitarians, who believe in the integral.

In this thread: people using mathematics where it doesn't belong.

I suppose. It's a more efficient and fun form of communication than writing it out in English, but it loses big on the number of people who can understand it.
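The $L^1$-vs-$L^\infty$ analogy in this subthread can be made concrete with a toy sketch (the utility numbers are invented for illustration, not anyone's actual axiology): a utilitarian aggregates a population by summing individual utilities, while the Nietzschean reading scores it by its single best member.

```python
# Toy illustration of the norm analogy (invented utility numbers):
# the utilitarian values a population by the sum of individual utilities
# (the L^1 norm, for nonnegative utilities), while the Nietzschean
# reading values it by its greatest member (the L^infinity norm).

def l1_value(utilities):
    """Utilitarian aggregation: total utility."""
    return sum(abs(u) for u in utilities)

def linf_value(utilities):
    """'Nietzschean' aggregation: magnitude of the greatest individual."""
    return max(abs(u) for u in utilities)

many_mediocre = [1] * 100   # large population of barely-worthwhile lives
few_excellent = [30, 30]    # small population of superlative lives

# L^1 prefers the large mediocre population (the Repugnant Conclusion)...
assert l1_value(many_mediocre) > l1_value(few_excellent)
# ...while L^infinity prefers whichever population has the best member.
assert linf_value(few_excellent) > linf_value(many_mediocre)
```

This also shows why the caveat about nonnegative value matters: the `abs` in the $L^\infty$ reading scores a spectacular villain as highly as a spectacular saint, which is orthonormal's point about magnitude without direction.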

Replies from: orthonormal
comment by orthonormal · 2011-06-14T23:37:55.881Z · LW(p) · GW(p)

No one believes in the $L^1$ norm. There is only Nietzsche, who believes in $L^\infty$, and utilitarians, who believe in the integral.

Yes, that's what I should have written.

comment by orthonormal · 2011-06-14T23:39:45.888Z · LW(p) · GW(p)

I know how it looked when you jumped in (presumably from the Recent Comments page), but both of us did know the proper math- it's the analogy that we were ironing out.

Replies from: None
comment by [deleted] · 2011-06-14T23:48:23.234Z · LW(p) · GW(p)

I read from the start of the L^p talk to now, and I can't think why both of you bothered to speak in that language. The major point of contention occurs in a lacuna in the L^p semantic space, so continuing in that vein is... hmmm.

It's like arguing whether the moon is pale-green or pale-blue, and deciding that since plain English just doesn't cut it, why not discuss the issue in Japanese?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-06-15T02:29:39.108Z · LW(p) · GW(p)

deciding that since plain English just doesn't cut it, why not discuss the issue in Japanese?

Why not, if you know Japanese, and it has more suitable means of expressing the topic? (I see your point, but don't think the analogy stands as stated.)

Replies from: None
comment by [deleted] · 2011-06-15T02:59:25.865Z · LW(p) · GW(p)

If we extend the analogy to the above conversation, it's an argument between non-Japanese otaku.

comment by MixedNuts · 2011-06-14T15:28:26.855Z · LW(p) · GW(p)

No offense to Fred, but he's a bitter loner. Idealistic nerd wants to make the world awesome, runs out and tells everyone, everyone laughs at him, idealistic nerd gives up in disgust and walks away muttering "I'll show them! I'll show them all!".

Also, he thinks this project is really really important, worth declaring war against the rest of the world and killing whoever stands in the way of becoming cooler. (As you say, whether he thinks we can also kill people who don't actively oppose it is unclear.) This is a dangerous idea (see the zillion glorious revolutions that executed critics and plunged happily into dictatorship) - though it is less dangerous when your movement is made of complete individualists. As it happens, becoming superhumans will not require offing any Luddites (though it does require offending them and coercing them by legal means), but I can't confidently say it wouldn't be worth it if it were the only way - even after correcting for historical failures.

By the same token, group rationality is in fact the way to go, but individual rationality does require telling society to take a hike every now and then.

FAI as not the ultimate transhuman, but the ultimate institution/legal system/moral code

It certainly shouldn't be a transhuman. Eliezer's preferred metaphor is more like "the ultimate laws of physics", which says quite a bit about how individualistic you and he are.

comment by Marius · 2011-06-14T15:54:46.701Z · LW(p) · GW(p)

Nietzsche can't know what the Superman will look like - nobody can. But he provides a great deal of assistance: he is extremely insightful about what people are doing today (well, late 1800s, but still applicable), how that tricks us into behaving and believing in certain ways, and what that means.

But he wrote these insights as poetry. If you wanted an argument spelled out logically or a methodology of scientific inquiry, you picked the wrong philosopher.

comment by Emile · 2011-06-14T16:00:24.201Z · LW(p) · GW(p)

I didn't see much transhumanism in Nietzsche, I just like reading him because he has a lot of interesting ideas while living in a quite distant intellectual context.

comment by FiftyTwo · 2011-06-13T21:26:35.482Z · LW(p) · GW(p)

Look at Philpapers.org, and search for recent papers in whatever you're interested in, I guess.

There's a lot of stuff about the recent (last decade) experimental philosophy (X-Phi) movement available online which may allay some of your concerns about philosophical methodology.

For a more informal look at how professional philosophers behave http://philosiology.blogspot.com/ is quite amusing.

Lukeprog did a set of articles not long ago about the relationship between philosophy and Less Wrong rationality which can probably give you more than I can off the top of my head.

Replies from: MixedNuts
comment by MixedNuts · 2011-06-14T06:30:17.712Z · LW(p) · GW(p)

Will read, thanks!

I read lukeprog's ads for philosophy. Doesn't show the money. The most badass stuff he's shown is just basics ("reductionism is true" as opposed to actual reductions, etc.).

comment by Peterdjones · 2011-06-21T21:12:21.813Z · LW(p) · GW(p)

A fun critique of Dennett http://www-personal.umich.edu/~lormand/phil/cons/qualia.htm

A fun critique of zombies (and Dennett, and Searle and Chalmers) http://www.davidchess.com/words/poc/lanier_zombie.html

The single most famous paper in analytical philosophy is an attack on the sacred cows of...analytical philosophy http://www.ditext.com/quine/quine.html

Replies from: MixedNuts
comment by MixedNuts · 2011-06-22T08:07:14.207Z · LW(p) · GW(p)

Lormand: Read the first half, skimmed the other. Lowered my opinion of Dennett, didn't change my mind otherwise. The goal is pretty non-badass in the first place: to disprove Dennett's argument about qualia, not to actually answer the question, let alone look into the black box labeled "quale". It's mostly right. It makes the common mistake of forgetting the author is a brain, though. This leads to generalizing from one example (the same old "Only analytic reflection in the form of a stream of words counts as thought" canard), and to forgetting about physical law (there is brain circuitry that gives rise to a quale, you can mess with it, that's where inferences are hidden).

Lanier: Consciousness is in the computer, not in the meteor shower, you pickleplumbing niddlewick! And of course specifying a conscious mind doesn't instantiate it, you have to run it... and did you just conflate "computers are not fundamental" and "computers don't exist"? Yeah, every physical system is a computer (a basket of apples plus gravity that drops more in performs addition), you want a specific-algorithm-detector. We don't know the consciousness algorithm, so obviously it's hard to detect, but at least you could look for optimization processes, which are well-defined in terms of thermodynamics. And worse than all the particular mistakes - you're falling for mysterious answers to mysterious questions again. Don't those people ever learn from history?

Quine: I don't get it. (This is a good sign - I'm an outsider, if there's advanced work then I shouldn't get it.) Why are you talking about language in the first place? Why not just define logic (as a set of axioms for manipulating strings), then say "'Analytic truth' is a fancy word for 'tautology'", and then worry about how natural language maps onto logic? And why are you looking for meanings and definitions in words rather than in cognitive processes? (The reason "bachelor" and "unmarried man" are synonymous, but not "creature with a kidney" and "creature with a heart", is that, upon hearing the word "bachelor", we translate it to "unmarried man", then reason about unmarried men, whereas upon hearing "creature with a kidney", we reason about creatures with kidneys, then notice they're the same as the creatures who have a heart.) And what does this have to do with reductionism? (Is this the same old confusion between probability estimates and statements in a language?)

comment by Will_Sawin · 2011-06-12T18:21:24.207Z · LW(p) · GW(p)

This strikes me as wrong. The proper work of philosophers and computer scientists seem like they have very little overlap. Yes, philosophers often mistakenly do computer science work, but that is irrelevant.

Is there a reason I should want to read an earlier, less developed version of LessWrong, by someone who is not a consequentialist, when I could just read LessWrong?

Replies from: Jayson_Virissimo, asr, Peterdjones
comment by Jayson_Virissimo · 2011-06-13T00:35:52.433Z · LW(p) · GW(p)

This strikes me as wrong. The proper work of philosophers and computer scientists seem like they have very little overlap. Yes, philosophers often mistakenly do computer science work, but that is irrelevant.

The quote isn't talking about philosophy in general, but epistemology specifically. If you take naturalized epistemology seriously (which LessWrongers do), then it seems to follow quite easily that neuroscientists and AI researchers are relatively more important to the future of epistemology than philosophers (remember that most branches of modern science were once a part of philosophy, but later broke off and developed their own class of domain specialists).

is there a reason I should want to read an earlier, less developed version of LessWrong, by someone who is not a consequentialist, when I could just read LessWrong?

One reason to read it would be to provide ourselves with some perspective on how LessWrongism fits into the larger Western intellectual tradition. Nozick is much better about showing how his ideas are related to those of other thinkers than the contributors to Less Wrong are (we share much more in common with Wittgenstein, Quine, Hempel, and Bridgman than the impression you would get from reading the Sequences). Having this perspective should increase our ability to communicate effectively with other intellectual communities.

His being or not being a consequentialist doesn't seem to have very much to do with the validity of his work in epistemology, decision theory, philosophy of science, or metaphysics. Also, his ethical theory doesn't really fit neatly into the deontological/consequentialism dichotomy anyway. Arguably his ethics/political theory amounts to consequentialism with "side-constraints" (that can even be violated in extreme circumstances). It doesn't seem to be any less consequentialist than, say, rule-utilitarianism.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-14T01:11:07.443Z · LW(p) · GW(p)

I don't particularly feel driven to communicate to members of other intellectual communities.

Am I exempt from having to read that book?

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2011-06-14T01:14:46.267Z · LW(p) · GW(p)

Am I exempt from having to read that book?

I will exempt you this one time, but I do not want to see you in my office again! Is that understood?

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-14T01:18:28.384Z · LW(p) · GW(p)

It should be noted that currently my brain interprets all requests for me to do stuff as requests for me to stay up when I should be sleeping.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2011-06-14T01:21:10.665Z · LW(p) · GW(p)

In that case, you are hereby commanded to initiate your sleep cycle immediately.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-14T01:27:44.675Z · LW(p) · GW(p)

The interesting issue is that, since this requires getting up, going upstairs, brushing teeth, etc., I fear the twinge of starting, and end up with an aversion to going to sleep as well.

Replies from: Alicorn
comment by Alicorn · 2011-06-14T01:34:20.032Z · LW(p) · GW(p)

If you really need to be sleeping, just relocate yourself to bed and crash. You can brush your teeth in the morning. (Alternately, decide how many more hours of sleep you're willing to skip in exchange for the chance that you will eventually decide to brush your teeth.)

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-14T01:37:33.391Z · LW(p) · GW(p)

Interruptions will prevent me from sleeping for about another half-hour. I have a planned schedule to reflect this. The chance that I will follow this schedule is high.

ETA: 90% of the work of this process is getting up, not the brushing the teeth bit.

comment by asr · 2011-06-13T01:18:25.994Z · LW(p) · GW(p)

An idea I've been kicking around -- and am tempted to pull into a coherent form -- is that actually there is a close connection between philosophy and computer science.

Much of philosophy is arguments about various abstractions. Computer science is about using abstractions to engineer software and about proofs about software-related abstractions.

To give one example: I think of the philosophical debate about the semantics of proper nouns as coupled to the notions of reference vs value equality in programming language design.
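asr's example of reference vs value equality can be shown in a few lines (a sketch in Python, chosen only for brevity): value equality asks whether two descriptions match, reference equality asks whether two names pick out the same object.

```python
# Reference vs value equality: `==` compares contents, `is` compares identity.

a = [1, 2, 3]
b = [1, 2, 3]   # a distinct object that happens to have the same contents
c = a           # a second name bound to the very same object

assert a == b       # equal by value: the descriptions match
assert a is not b   # not the same referent: two objects
assert a is c       # two names, one referent -- like co-referring proper nouns

b.append(4)         # mutating b leaves a untouched...
c.append(4)         # ...but mutating through c changes a as well
assert a == [1, 2, 3, 4]
```

The philosophical parallel: "the Morning Star" and "the Evening Star" are value-distinct descriptions with reference-equal denotations.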

comment by Peterdjones · 2011-06-12T19:01:40.468Z · LW(p) · GW(p)

What are you comparing Less Wrong to?

Who proved consequentialism?

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2011-06-13T00:46:44.445Z · LW(p) · GW(p)

What are you comparing Less Wrong to?

He was comparing Less Wrong to a book I was quoting from.

Who proved consequentialism?

No one did, but proof is much too high a requirement anyway. Still, I don't think I am alone in regarding the theories put forward in the Metaethics Sequence as the least defensible part of Less Wrong doctrine.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-13T18:47:40.139Z · LW(p) · GW(p)

He was comparing Less Wrong to a book I was quoting from.

Unfavourably. But what else is he comparing LW to? There seem to be two kinds of LessWrongians:

1) Those who haven't read any mainstream philosophy, and think LW represents an unsurpassable pinnacle of excellence.

2) Those who have and don't.

Replies from: Will_Sawin, AdeleneDawner
comment by Will_Sawin · 2011-06-13T19:01:52.849Z · LW(p) · GW(p)

I have read some mainstream philosophy. I have read much of Anarchy, State, and Utopia, and nothing else by Nozick.

My view is that in the aspects of philosophy I am interested in, and have not already learned enough about from LW or other sources to satisfy me, LW pretty clearly beats mainstream philosophy.

No one proved consequentialism, and yet consequentialism is right. Who proved Occam's Razor?

Replies from: Jayson_Virissimo, Peterdjones
comment by Jayson_Virissimo · 2011-06-14T00:06:07.250Z · LW(p) · GW(p)

I have read some mainstream philosophy. I have read much of Anarchy, State, and Utopia, and nothing else by Nozick.

Anarchy, State, and Utopia is the least relevant of Nozick's books to LessWrongers. Try The Nature of Rationality (here is a decent summary of the content) and Invariances instead.

My view of LW is that the aspects of philosophy that I am interested in, that I have not already learned enough about from LW or other sources to satisfy me, are areas where LW pretty clearly beats mainstream philosophy.

I'm not really sure what you're saying here, but if you get a higher marginal benefit from reading Less Wrong than from reading philosophy books, then by all means continue reading Less Wrong. On the other hand, if you have completed the Sequences and read most of the other important discussions here, then you might want to check out what individuals in other intellectual communities have to say about these topics.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-14T00:26:00.039Z · LW(p) · GW(p)

It's only irrelevant because his assumption is held as false by lesswrongers. ASU is to deontology as Fun Theory is to consequentialism, or something along those lines. It would be very relevant if we agreed with deontology!

Why do we disagree? Are we wrong, or is he?

Is he wrong?

Why should I read the book of someone who is wrong on such an important matter when there are so many books I could read?

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2011-06-14T00:39:40.714Z · LW(p) · GW(p)

It's only irrelevant because his assumption is held as false by lesswrongers.

No it isn't. It is less relevant because it deals mostly with political theory (mind-killer territory and usually avoided on Less Wrong) while his other books cover epistemology, decision theory, and philosophy of science.

Why do we disagree? Are we wrong, or is he?

Firstly, ASU contains Nozick's least developed ethical thought. Secondly, like I said before,

...his ethical theory doesn't really fit neatly into the deontological/consequentialism dichotomy anyway. Arguably, his ethics/political theory amounts to consequentialism with "side-constraints" (that can even be violated in extreme circumstances). It doesn't seem to be any less consequentialist than, say, rule-utilitarianism.

I made this point earlier in the thread here. Is rule-utilitarianism not a kind of consequentialism?

Is he wrong?

I think so, but I also think the Less Wrong ethical doctrine is wrong. At this point non-cognitivism seems more probable than consequentialism (ask me next week and I might not, I am known to go back and forth on the issue).

Why should I read the book of someone who is wrong on such an important matter when there are so many books I could read?

I don't know your preferences, so perhaps you shouldn't. I was merely offering some friendly advice on a course of action that has benefited me. If you are like me in the relevant respects, then you will probably benefit too.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-14T00:40:33.542Z · LW(p) · GW(p)

Is fun theory not relevant to lesswrongers?

What is the difference between fun theory and political theory?

ETA: Did you edit your comment? I didn't see some of the stuff at first.

...his ethical theory doesn't really fit neatly into the deontological/consequentialism dichotomy anyway. Arguably, his ethics/political theory amounts to consequentialism with "side-constraints" (that can even be violated in extreme circumstances). It doesn't seem to be any less consequentialist than, say, rule-utilitarianism.

But it's still not consequentialist, whereas consequentialism is correct.

I think so, but I also think the Less Wrong ethical doctrine is wrong. At this point I think non-cognitivism is more probable than consequentialism (ask me next week and I might not, I go back and forth on the subject).

I still believe in consequentialism, as do most (presumably?) people on Less Wrong.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-06-14T01:36:47.605Z · LW(p) · GW(p)

consequentialism is correct.

What do you mean by this? I know it doesn't mean that humans should generally use consequentialist reasoning, for example.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-14T01:40:50.675Z · LW(p) · GW(p)

It means that the right way to come up with deontological rules for humans is by thinking of them in the framework discussed in that post.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-06-14T01:45:50.739Z · LW(p) · GW(p)

Okay, that and your belief that rule-utilitarianism isn't consequentialism leads me to think that your version of consequentialism is roughly "if you're attempting to be an FAI and you're not doing lots of multiplication then you're doing it wrong". Too far off?

Replies from: nshepperd, Will_Sawin
comment by nshepperd · 2011-06-14T05:19:19.154Z · LW(p) · GW(p)

Instrumental vs terminal goals. Consequentialism is the ideal, but we can't implement it, so we have to approximate it with deontological rules, due to the limitations of our brains. The rules don't get their moral authority from nowhere; they depend on being useful for reaching the actual goal. Or: the only reason we follow the rules is because we know that we'll get a worse outcome if we don't.

comment by Will_Sawin · 2011-06-14T02:00:08.468Z · LW(p) · GW(p)

It's the difference between a priori rules and a posteriori rules, I guess?

I'm all for a posteriori rules, but not a priori rules.

comment by Peterdjones · 2011-06-13T19:59:37.993Z · LW(p) · GW(p)

I have read a lot of mainstream philosophy and I find LW is flawed in some areas, and I can say why.

Why should I believe what you say about consequentialism absent proof? What if I say that physicalism and reductionism are just plumb wrong? Would you believe me?

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-13T20:02:28.440Z · LW(p) · GW(p)

No, because physicalism and reductionism aren't wrong.

I'm not telling you to believe me, I'm just pointing out that you should believe me, because I'm right.

Do you not believe in consequentialism? I could provide some arguments for it.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-13T20:51:44.115Z · LW(p) · GW(p)

I'm not telling you to believe me, I'm just pointing out that you should believe me, because I'm right.

There's no appreciable difference between "telling me to do X" and "pointing out I should do X".

Do you not believe in consequentialism? I could provide some arguments for it.

What I mainly believe in is the necessity of arguing for claims. But we seem to have made a little progress there.

Replies from: Alicorn, Will_Sawin
comment by Alicorn · 2011-06-14T00:12:58.303Z · LW(p) · GW(p)

There's no appreciable difference between "telling me to do X" and "pointing out I should do X".

There is some. The former adds an implicit authority claim, whereas the latter could be said in language that does not imply one.

comment by Will_Sawin · 2011-06-13T22:20:52.463Z · LW(p) · GW(p)

Claim: non-consequentialists are not effective rationalists

Evidence 1: There exist strong, relatively obvious arguments for consequentialism.

Evidence 2: There are no equally strong counterarguments.

Reason for Evidence 1: Search your mind for arguments for consequentialism. You should find one, seeing that you read LW. They sound relatively obvious, don't they? Certainly a professional philosopher would have encountered them.

Reason for Evidence 2: Search your mind for arguments against consequentialism. You don't find very many good ones, do you?

Replies from: ata
comment by ata · 2011-06-13T23:08:42.355Z · LW(p) · GW(p)

Is there any point to arguing at the meta level like this? Why not just give your arguments instead of arguing that arguments exist?

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-13T23:13:38.768Z · LW(p) · GW(p)

It's shorter.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2011-06-13T23:53:51.157Z · LW(p) · GW(p)

It is too short.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-14T00:06:29.956Z · LW(p) · GW(p)

I had thought that people on lesswrong would be aware of the actual arguments.

Replies from: Alicorn
comment by Alicorn · 2011-06-14T00:14:25.593Z · LW(p) · GW(p)

Please describe the hypothetical person who would be helped at all, or convinced of any proposition, by being invited to reflect on arguments for and against consequentialism of which they were already aware.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-14T00:30:54.568Z · LW(p) · GW(p)

Peterdjones, who says that:

Do you not believe in consequentialism? I could provide some arguments for it.

What I mainly believe in is the necessity of arguing for claims

I interpreted this to mean that he believed in consequentialism but did not feel I had sufficiently argued that non-consequentialism is evidence of irrationality. That is, that he was aware of arguments for consequentialism but was choosing not to apply them to the issue.

Maybe this interpretation was wrong, but it was not obviously wrong.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-21T13:23:52.594Z · LW(p) · GW(p)

I don't particularly believe in consequentialism.

I wouldn't say that someone "is" irrational because they fail to argue one particular point.

It is just that energy spent asserting that certain ideas are or are not rational would be better spent putting forward arguments. Rationality is something you do.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-21T14:43:36.391Z · LW(p) · GW(p)

I don't particularly believe in consequentialism.

Then either you can be dutch-booked or you can fail to dutch-book others.

I wouldn't say that someone "is" irrational because they fail to argue one particular point.

You parsed my sentence wrong.

It is just that energy spent asserting that certain ideas are or are no rational would be better spent putting forward arguments. Rationality is something you do.

There are certain arguments which people on lesswrong are expected to know. Maybe the arguments for consequentialism are not among them?

I would recount them for you, but I don't really think that will do any good.
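For what it's worth, the standard Dutch-book argument alluded to above can be shown with one incoherent credence (the numbers are invented for illustration): an agent whose probabilities for an event and its complement sum to more than 1 will pay more for a pair of bets than the bets can possibly return.

```python
# Dutch book sketch (invented numbers): the agent assigns P(rain) = 0.6
# and P(no rain) = 0.6, violating P(A) + P(not-A) = 1.

p_rain, p_no_rain = 0.6, 0.6         # incoherent: credences sum to 1.2

stake = 1.0                          # each bet pays `stake` if it wins
cost = stake * (p_rain + p_no_rain)  # the agent's own "fair" price for both bets

# Exactly one of the two bets pays out, whichever way the weather goes,
# so the agent collects `stake` but paid `cost` for the pair.
guaranteed_loss = cost - stake
assert guaranteed_loss > 0           # a sure loss in every possible world
```

Note this only shows that incoherent credences are exploitable; whether an analogous money-pump argument pins down consequentialism in ethics is a separate question.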

Replies from: Peterdjones
comment by Peterdjones · 2011-06-21T15:07:13.995Z · LW(p) · GW(p)

I can avoid Dutch booking by applying the laws of probability correctly (and in contexts that have nothing to do with morality). Do you think probability and consequentialism are somehow the same?

There are certain arguments which people on lesswrong are expected to know. Maybe the arguments for consequentialism are not among them?

I would recount them for you, but I don't really think that will do any good.

I have been reading the material on ethics and have yet to see such an argument. There is a tendency to talk in terms of utility functions, which lends itself to a consequentialist way of thinking, but that is not so much proof as "if the only tool you have is a hammer...".

I also notice that there are a lot of ethical subjectivists and non-cognitivists on LW. Maybe you could point them to this wonderful proof, if I am beyond hope.

Replies from: orthonormal, MixedNuts, Will_Sawin
comment by orthonormal · 2011-06-21T15:38:18.716Z · LW(p) · GW(p)

It's not the most sophisticated form of the argument, but Yvain's recent Consequentialism FAQ is an excellent summary and a good read.

comment by MixedNuts · 2011-06-21T15:25:12.301Z · LW(p) · GW(p)

I hear bad things happen if you aren't a utility maximizer. Utilitarianism doesn't imply consequentialism, though; you can assign utility depending on whether (sentient?) decision processes choose virtuously and implement your favorite imperative. These ethical systems are consistent.

I find them quite appalling, however. What do you mean, saving four lives is less important than the virtue of not pushing people under trolleys?
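MixedNuts's point, that a "utility function" can smuggle in a deontological rule, can be sketched directly (a toy example with invented payoffs, not anyone's actual ethics):

```python
# A "utility function" that scores world-histories by rule-compliance
# rather than by outcomes (toy example, invented numbers).

def rule_based_utility(history):
    """Reward histories in which the agent never pushed, ignoring deaths."""
    return 1.0 if not history["pushed"] else -1.0

push = {"pushed": True, "deaths": 1}      # push the fat man: one death
refrain = {"pushed": False, "deaths": 4}  # refrain: four deaths

# This agent is a consistent utility maximizer, yet it refrains even
# though refraining costs three extra lives -- the outcome MixedNuts
# finds appalling.
assert rule_based_utility(refrain) > rule_based_utility(push)
```

The formal consistency of such a function is what makes it a utility function at all; the objection is to what it values, not to its coherence.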

Replies from: orthonormal, Vladimir_M
comment by orthonormal · 2011-06-21T15:34:24.130Z · LW(p) · GW(p)

Utilitarianism doesn't imply consequentialism, though; you can assign utility depending on whether (sentient?) decision processes choose virtuously and implement your favorite imperative.

You mean "having a utility function", not "utilitarianism". The latter is generally used to mean a specific batch of consequentialist utility functions.

Replies from: Vladimir_M
comment by Vladimir_M · 2011-06-21T15:45:30.295Z · LW(p) · GW(p)

You mean "having a utility function", not "utilitarianism". The latter is generally used to mean a specific batch of consequentialist utility functions.

The latter also assumes the possibility of interpersonal utility comparison, which is not the case with von Neumann-Morgenstern utility functions.

comment by Vladimir_M · 2011-06-21T15:51:39.757Z · LW(p) · GW(p)

I find them quite appalling, however. What do you mean, saving four lives is less important than the virtue of not pushing people under trolleys?

I find simplistic consequentialist views such as this one appalling, if anything because they combine supreme self-assuredness about important problems with ignorance and lack of insight about their vitally important aspects. (See my responses in the Consequentialism FAQ thread for more detail, especially the ones dealing specifically with trolley problems.)

Replies from: MixedNuts
comment by MixedNuts · 2011-06-21T21:00:26.894Z · LW(p) · GW(p)

simplistic consequentialist views such as this one

ignorance and lack of insight

Waaah! You're a meanie mean-head! :( By which I mean: this was a one-sentence reaction to simplistic virtue ethics. I agree it's not a valid criticism of complex systems like Alicorn's tiered deontology. I also agree it's fair to describe this view as simplistic - at the end of the day, I do in fact hold the naive view. I disagree that it can only exist in ignorance of counterarguments. In general, boiling down a position to one sentence provides no way to distinguish between "I don't know any counterarguments" and "I know counterarguments, all of which I have rejected".

supreme self-assuredness

Not sure what you mean, I'm going to map it onto "arrogance" until and unless I learn you meant otherwise. Arrogant people are annoying (hi, atheist blogosphere!), but in practice it isn't correlated with false ideas.

Or is this just a regular accusation of overconfidence, stemming from "Hey, you underestimate the number of arguments you haven't considered!"?

my responses in the Consequentialism FAQ thread

You go into social-norms-as-Schelling-points in detail (you seem to point at the existence of other strong arguments?); I agree about the basic idea (that's why I don't kill for organs). I disagree about how easily we should violate them. (In particular, Near lives are much safer to trade than Far ones.) Even "Only kill without provocation in the exact circumstances of one of the trolley problems" is a feasible change.

Also, least convenient possible world: after the experiment, everyone in the world goes into a holodeck and never interacts with anyone again.

Interestingly, when you said

Similarly, imagine meeting someone who was in the fat man/trolley situation and who mechanically made the utilitarian decision and pushed the man without a twitch of guilt. Even the most zealous utilitarian will in practice be creeped out by such a person, even though he should theoretically perceive him as an admirable hero.

I automatically pictured myself as the fat man, and felt admiration and gratitude for the heroic sociopath. Then I realized you meant a third party, and did feel creeped out. (This is as it should be; I should be more eager to die than to kill, to correct for selfishness.)

Replies from: Vladimir_M
comment by Vladimir_M · 2011-06-22T18:32:32.689Z · LW(p) · GW(p)

By which I mean: this was a one-sentence reaction to simplistic virtue ethics.

Actually, I was writing in favor of "simplistic" virtue ethics. However simplistic and irrational it may seem, and however rational, sophisticated, and logically airtight the consequentialist alternatives may appear to be, folk virtue ethics is a robust and workable way of managing human interaction and coordination, while consequentialist reasoning is usually at best simply wrong and at worst a rationalization of beliefs held for different (and often ugly) reasons.

You can compare it with folk physics vs. scientific physics. The former has many flaws, but even if you're a physicist, for nearly all things you do in practice, scientific physics is useless, while folk physics works great. (You won't learn to ride a bike or throw a ball by studying physics, but by honing your folk physics instincts.) While folk physics works robustly and reliably in complex and messy real-world situations, handling them with scientific physics is often intractable and always prone to error.

Of course, this comparison is too favorable. We do know enough scientific physics to apply it to almost any situation at least in principle, and there are many situations where we know how to apply it successfully with real accuracy and rigor, and where folk physics is useless or worse. In contrast, attempts to supersede folk virtue ethics with consequentialism are practically always fallacious one way or another.

Replies from: MixedNuts
comment by MixedNuts · 2011-06-23T07:22:53.812Z · LW(p) · GW(p)

in favor of "simplistic" virtue ethics

So, the fully naive system? Killing makes you a bad person, letting people die is neutral; saving lives makes you a good person, letting people live is neutral. Giving to charity is good, because sacrifice and wanting to help make you a good person. There are sacred values (e.g. lives) and mundane ones (e.g. money), and trading between them makes you a bad person. What matters is being a good person, not effects like expected number of deaths, so running cost-benefit analyses is at best misguided and at worst evil. Is this a fair description of folk ethics?

If so, I would argue that the bar for doing better is very, very low. There are a zillion biases that apply: scope insensitivity, loss aversion that flips decisions depending on framing, need for closure, pressure to conform, Near/Far discrepancies, fuzzy judgements that mix up feasible and desirable, outright wishful thinking, prejudice against outgroups, overconfidence, and so on. In ethics, unless you're going to get punished for defecting against a norm, you don't have a stake, so biases can run free and don't get any feedback.

Now there are consequentialist arguments for virtue ethics, and general majoritarian-ish arguments for "norms aren't completely stupid", so this only argues for "keep roughly the same system but correct for known biases". But you at least need some kind of feedback. "QALYs per hour of effort" is pretty decent.

And this is a consequentialist argument. "If I try to kill some to save more, I'll almost certainly overestimate lives saved and underestimate knock-on effects" is a perfectly good argument. "Killing some to save more makes me a bad person"... not so much.

Replies from: Vladimir_M
comment by Vladimir_M · 2011-06-24T02:21:21.758Z · LW(p) · GW(p)

Is this a fair description of folk ethics?

No, because we don't even know (yet?) how to formulate such a description. The actual decision procedures in our heads have still not been reverse-engineered, and even insofar as they have, they have still not been explained in game-theoretical and other important terms. We have only started to scratch the surface in this respect.

(Note also that there is a big difference between the principles that people will affirm in the abstract and those they apply in practice, and these inconsistencies are also still far from being fully explained.)

But you at least need some kind of feedback. "QALYs per hour of effort" is pretty decent.

Trouble is, once you go down that road, it's likely that you're going to come up with fatally misguided or biased conclusions. For practically any problem that's complicated enough to be realistic and interesting, we lack the necessary knowledge and computational resources to make reliable consequentialist assessments, in terms of QALYs or any other standardized measure of welfare. (Also, very few, if any, things people do result in a clear Pareto improvement for everyone, and interpersonal trade-offs are inherently problematic.)

Moreover, for any problem that is relevant for questions of power, status, wealth, and ideology, it's practically impossible to avoid biases. At the end, what looks like a dispassionate and perhaps even scientific attempt to evaluate things using some standardized measure of welfare is more likely than not to be just a sophisticated fig-leaf (conscious or not) for some ideological agenda. (Most notably, the majority of what we call “social science” has historically been developed for that purpose.)

Yes, this is a very pessimistic verdict, but an attempt at sound reasoning should start by recognizing the limits of our knowledge.

Replies from: multifoliaterose, MixedNuts
comment by multifoliaterose · 2011-06-28T06:08:18.331Z · LW(p) · GW(p)

I agree with much of your worldview as I've interpreted it. In particular I agree that:

•Behavioral norms evolved by natural selection to solve coordination problems and to allow humans to work together productively given the particulars of our biological hard-wiring.

•Many apparently logically sound departures from behavioral norms will not serve their intended functions for complicated reasons of which people don't have explicit understanding.

•Human civilization is a complicated dynamical system which is (in some sense) at equilibrium, and attempts to shift from this equilibrium will often either fail (because of equilibrating forces) or lead to disaster (on account of destabilizing the equilibrium and causing everything to fall apart).

•The standard for rigor and accuracy in social sciences is often very poor, owing both to the biases of the researchers involved and to the inherent complexity of the relevant problems (as you described in your top level post).

On the other hand, here and elsewhere in the thread you present criticism without offering alternatives. Criticism is not without value but its value is contingent on the existence of superior alternatives.

But you at least need some kind of feedback. "QALYs per hour of effort" is pretty decent.

Trouble is, once you go down that road, it's likely that you're going to come up with fatally misguided or biased conclusions.

What do you suggest as an alternative to MixedNuts' suggestion?

As rhollerith_dot_com said, folk ethics gives ambiguous prescriptions in many cases of practical import. One can avoid some such issues by focusing one's efforts elsewhere, but not in all cases. People representative of the general population have strong differences of opinion as to what sorts of jobs are virtuous and what sorts of philanthropic activities are worthwhile. So folk ethics alone doesn't suffice to give a practically applicable ethical theory.

Also, very few, if any things people do result in a clear Pareto improvement for everyone, and interpersonal trade-offs are inherently problematic.)

But interpersonal trade-offs are also inevitable; it's not as though one avoids the issue by avoiding consequentialism.

Replies from: Vladimir_M
comment by Vladimir_M · 2011-06-28T09:27:29.862Z · LW(p) · GW(p)

The discussion has drifted away somewhat from the original disagreement, which was about situations where a seemingly clear-cut consequentialist argument clashes with a nearly universal folk-ethical intuition (as exemplified by various trolley-type problems). I agree that folk ethics (and its natural customary and institutional outgrowths) are ambiguous and conflicted in some situations to the point of being useless as a guide, and the number of such situations may well increase with the technological developments in the future. I don't pretend to have any great insight about these problems. In this discussion, I am merely arguing that when there is a conflict between a consequentialist (or other formal) argument and a folk-ethical intuition, it is strong evidence that there is something seriously wrong with the former, even if it's entirely non-obvious what it might be, and it's fallacious to automatically discard the latter as biased.

Regarding this, though:

But interpersonal trade-offs are also inevitable; it's not as though one avoids the issue by avoiding consequentialism.

The important point is that most conflicts get resolved in spontaneous, or at least tolerably costly ways because the conflicting parties tacitly share a focal point when an interpersonal trade-off is inevitable. The key insight here is that important focal points that enable things to run smoothly often lack any rational justification by themselves. What makes them valuable is simply that they are recognized as such by all the parties involved, whatever they are -- and therefore they often may seem completely irrational or unfair by other standards.

Now, consequentialists may come up with a way of improving this situation by whatever measure of welfare they use. However, what they cannot do reliably is make people accept the implied new interpersonal trade-offs as new focal points, and if they don't, the plan will backfire -- maybe with a spontaneous reversion to the status quo ante, and maybe with a disastrous conflict brought about by the wrecking of the old network of tacit agreements. Of course, it may also happen that the new interpersonal trade-offs are accepted (whether enthusiastically or by forceful imposition) and the reform is successful. What is essential to recognize, however, is that interpersonal trade-offs are not only theoretically indeterminate; any way of resolving them must also deal with these complicated questions of whether it will be workable in practice. For this reason, many consequentialist designs that look great on paper are best avoided in practice.

Replies from: multifoliaterose
comment by multifoliaterose · 2011-06-28T14:16:11.923Z · LW(p) · GW(p)

Thanks for your response!

I am merely arguing that when there is a conflict between a consequentialist (or other formal) argument and a folk-ethical intuition, it is strong evidence that there is something seriously wrong with the former, even if it's entirely non-obvious what it might be, and it's fallacious to automatically discard the latter as biased

I agree. And I like the rest of your response about tacitly shared focal points.

Part of what you may be running up against on LW is people here (a) Having low intuitive sense for what these focal points are (b) The existing norms being designed to be tolerable for 'most people' and LWers falling outside of 'most people,' and correspondingly finding existing norms intolerable with higher than usual frequency.

I know that each of (a) and (b) sometimes applies to me personally.

Your future remarks on this subject may be more lucid if you bring the content of your above comment to the fore at the outset.

comment by MixedNuts · 2011-06-24T07:30:22.609Z · LW(p) · GW(p)

Okay, I don't get it. I can only parse what you're saying one of two ways:

  • "We don't have any idea how folk ethics works." But that's not true: we know it's not "whatever emperor Ming says". We can and do observe folk ethics at work, and notice it favors ingroups, is loss averse, is scope insensitive, etc.
  • "Any attempt to do better won't be perfectly free of bias. Therefore, you can't do better. Therefore, the best you can do is to use folk ethics... which has a bunch of known biases."

You very likely don't mean either of these, so I don't know what you're trying to say.

Replies from: Vladimir_M
comment by Vladimir_M · 2011-06-25T00:13:31.336Z · LW(p) · GW(p)

These statements are a somewhat crude and exaggerated version of what I had in mind, but they're actually not that far off the mark.

The basic human folk ethics, shaped within certain bounds by culture, is amazingly successful in ensuring human coordination and cooperation in practice, at both small and large scales. (The fact that we see its occasional bad failures as dramatic and tragic only shows that we're used to it working great most of the time.) The key issue here is that these coordination problems are extremely hard and largely beyond our understanding. While we can predict with some accuracy how individual humans behave, the problems of coordinating groups of people involve countless complicated issues of game theory, signaling, etc., about which we're still largely ignorant. In this sense, we really don't understand how folk ethics works.

Now, the important thing to note is that various aspects of folk ethics may seem irrational and biased (in the sense that changing them would have positive consequences by some reasonable measure), while in fact the truth is much more complicated. These "biases" may in fact be essential for the way human coordination works in practice, for some reason that's still mysterious to us. Even if they don't have any direct useful purpose, it may well be that given the constraints of human minds, eliminating them is impossible without breaking something else badly. (A prime example is that once someone goes down the road of breaking intuitively appealing folk-ethics principles in the name of consequentialist calculations, it's practically certain that these calculations will end up being fatally biased.)

Here I have of course handwaved the question of how exactly successful human cooperation depends on the culture-specific content of people's folk ethics. That question is fascinating, complicated, and impossible to tackle without opening all sorts of ideologically charged issues. But in any case, it presents even further complications and difficulties for any attempt at analyzing and fixing human intuitions by consequentialist reasoning.

(Also, similar reasoning applies not just to folk ethics vs. consequentialism, but also to all sorts of beliefs that may seem outright irrational from a naive "rationalist" perspective, but whose role in practice is much more complicated and important.)

Replies from: MixedNuts, Unnamed, rhollerith_dot_com
comment by MixedNuts · 2011-06-25T10:35:18.710Z · LW(p) · GW(p)

similar reasoning applies not just to folk ethics vs. consequentialism, but also to all sorts of beliefs that may seem outright irrational from a naive "rationalist" perspective, but whose role in practice is much more complicated and important.

Yeah, that seems to be the crux of our disagreement. You still trust people, you haven't seen them march into death and drag their children along with them and reject a thousand warnings along the way with contempt for such absurd and evil suggestions.

I agree that going against social norms is very costly, that we need cooperation more than ever now there's seven billion of us, and that if something is bad you still need to coordinate against it. But consider this anecdote:

Many years ago, when I was but a child, I wished to search for the best and rightest politician, and to put them in power. And eagerly did I listen to all, and carefully did I consider their arguments, and honestly did I weight them against history and the evening news. And lo, an ideology was born, and I gave it my allegiance. But still doubts nagged and arguments wavered, and I wished for closure.

One day my politician of choice called for a rally, and to the rally I went; filled with doubt, but willing to serve. And such joy came upon me that I knew I was right; this wave of bliss was the true sign that my cause was just. (For I was but a child, and did not know of laws of entanglement; I knew not that human psychology told not of world states.)

Then it came to pass that I read a history textbook, and in the book was an excerpt from Robert Brasillach, who too described this joy, and who too claimed it as proof of his ideology. Which was fascism. Oops.

So, yeah, never falling for that one again.

comment by Unnamed · 2011-06-25T17:25:51.830Z · LW(p) · GW(p)

Could you say more about what makes folk ethics a form of virtue ethics (or at least sufficiently virtue-based for you to use the term "folk virtue ethics")? I can see some aspects of it that are virtue-based, but overall it seems like a hodgepodge of different intuitions/emotions/etc.

Replies from: Vladimir_M
comment by Vladimir_M · 2011-06-25T21:15:24.809Z · LW(p) · GW(p)

Yes, it's certainly not a clear-cut classification. However, I'd say that the principal mechanisms of folk ethics are very much virtue-based, i.e. they revolve around asking what sort of person acts in a particular way, and what can be inferred about others' actions and one's own choice of actions from that.

comment by RHollerith (rhollerith_dot_com) · 2011-06-25T07:07:17.610Z · LW(p) · GW(p)

Your praise for folk ethics would be more persuasive to me, Vladimir, if it came with a description of folk ethics -- and if that description explained how folk ethics avoids giving ambiguous answers in many important situations -- because it seems to me that a large part of this folk ethics of which you speak consists of people attempting to gain advantages over rivals and potential rivals by making folk-ethical claims that advance their personal interests.

In other words, although I am sympathetic to arguments for conservatism in matter of interpersonal relationships and social institutions, your argument would be a whole lot stronger if the process of identifying or determining the thing being argued for did not rely entirely on the phrase "folk virtue ethics".

Replies from: Vladimir_M
comment by Vladimir_M · 2011-06-25T21:56:34.277Z · LW(p) · GW(p)

I don't think we need to get into any controversial questions about interpersonal relationships and social institutions here. (Although the arguments I've made apply to these too.) I'd rather focus on the entirely ordinary, mundane, and uncontroversial instances of human cooperation and coordination. With this in mind, I think you're making a mistake when you write:

[I]t seems to me that a large part of this folk ethics of which you speak consists of people attempting to gain advantages over rivals and potential rivals by making folk-ethical claims that advance their personal interests.

In fact, the overwhelming part of folk ethics consists of decisions that are so ordinary and uncontroversial that we don't even stop to think about them, and of interactions (and the resulting social norms and institutions) that are taken completely for granted by everyone -- even though the complexity of the underlying coordination problems is enormous, and the way things really work is still largely mysterious to us. The thesis I'm advancing is that a lot of what may seem like bias and imperfection in folk ethics may in fact somehow be essential for the way these problems get solved, and seemingly airtight consequentialist arguments against clear folk-ethical intuitions may in fact be fatally flawed in this regard. (And I think they nearly always are.)

Now, if we move to the question of what happens in those exceptional situations where there is controversy and conflict, things do get more complicated. Here it's important to note that the boundary between regular smooth human interactions and conflicts is fuzzy, insofar as the regular interactions often involve conflict resolution in regular and automatic ways, and there are no sharp limits between such events and more overt and dramatic conflict. Also, there is no sharp bound between entirely instinctive folk ethics intuitions and those that are codified in more explicit social (and ultimately legal) norms.

And here we get to the controversies that you mention: the conflict between social and legal norms that embody and formalize folk intuitions of justice, fairness, proper behavior, etc. and evolve spontaneously through tradition, precedent, customary practice, etc., and the attempts to replace such norms by new ones backed by consequentialist arguments. Here, indeed, one can argue in favor of what you call "conservatism in matter of interpersonal relationships and social institutions" using arguments very similar to mine above. But whether or not you agree with such arguments, my main point can be made without even getting into any controversial issues.

comment by Will_Sawin · 2011-06-21T17:32:57.313Z · LW(p) · GW(p)

Deciding with a well-behaved preference order includes but is not limited to probability.

Consequentialism doesn't contradict those philosophies.

The arguments I know are, a la MixedNuts, bad things happen if you aren't a utility maximizer.

You can maximize a subjective utility function.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-21T17:47:27.704Z · LW(p) · GW(p)

Deciding with a well-behaved preference order includes but is not limited to probability.

Consequentialism doesn't contradict those philosophies.

It doesn't follow that I have to adopt consequentialist metaethics in order to avoid being ripped off at the racecourse or stock market.

The arguments I know are, a la MixedNuts, bad things happen if you aren't a utility maximizer.

Well, I probably won't end up with my own utility maximised. What's that got to do with ethics? It's quite plausible that I should make sacrifices for ethical reasons.

Replies from: nshepperd, Will_Sawin
comment by nshepperd · 2011-06-22T00:59:12.736Z · LW(p) · GW(p)

consequentialist metaethics

Please don't use "metaethics" as a word for ethics.

comment by Will_Sawin · 2011-06-21T18:01:14.527Z · LW(p) · GW(p)

You will sacrifice and no one else will benefit.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-21T18:51:32.469Z · LW(p) · GW(p)

If I am not utilitarian about X, X is not going to be maximised. But there are a lot of candidates for X, and they can't all be maximised at once. Whatever version of consequentialism you adopt, there are going to be non-optimal outcomes by other standards. So adopt the right version? Maybe. But that is part of the larger problem of adopting the right metaethics. If deontology or rights theory is true, then you really shouldn't push the fat guy, and then any form of consequentialism will lead to Bad Things.

Moral: we can't straightforwardly judge metaethical theories by their tendency to produce good and bad, because we are using them to define good and bad.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-21T18:55:48.924Z · LW(p) · GW(p)

There are things which are less-controversially bad than others.

Suppose a deontologist agrees that world A is better than world B.

Then there is, in general, a world C such that the deontologist refuses to move from B to C and then refuses to move from C to A, and is thus dragged kicking and screaming into a better world.
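A toy illustration of this trap (world values and rule labels invented for the sketch): the deontologist agrees that the endpoint A beats the starting point B, yet blocks every step of the only path there, because each step is judged as an action in itself rather than by where the sequence ends up.

```python
# Each step on the path B -> C -> A is itself a rights violation,
# even though the endpoint A is better than B by everyone's lights.
world_value = {"B": 0, "C": -5, "A": 10}  # the deontologist, too, ranks A above B

path = [
    {"frm": "B", "to": "C", "violates_rights": True},
    {"frm": "C", "to": "A", "violates_rights": True},
]

def deontologist_allows(step):
    # A step is permitted only if the action itself violates no rule,
    # regardless of the value of the world it leads to.
    return not step["violates_rights"]

blocked = [step for step in path if not deontologist_allows(step)]
print(len(blocked), world_value["A"] > world_value["B"])  # 2 True
```

The point of the sketch is only that the evaluation is path-dependent: a ranking of world states alone cannot reproduce the deontologist's verdicts on the individual steps.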

Replies from: nshepperd, Peterdjones
comment by nshepperd · 2011-06-22T00:57:01.661Z · LW(p) · GW(p)

Do you mean from B to C and then C to A?

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-22T01:00:01.596Z · LW(p) · GW(p)

Fixed, thanks.

comment by Peterdjones · 2011-06-21T19:11:28.792Z · LW(p) · GW(p)

I agree that we can use strong and common intuitions to avoid the chicken-and-egg problem, but...

Then there is, in general, a world C such the deontologist refuses to move from B to C and then refuses to move from B to A, and is thus dragged kicking and screaming into a better world

I have no idea what you mean by that.

We don't have strong intuitions about trolley problems, which is why they are problems.

Replies from: MixedNuts, Will_Sawin
comment by MixedNuts · 2011-06-21T20:11:27.238Z · LW(p) · GW(p)

I've never met a person who didn't have one. They're problems because we have strong, different intuitions.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-21T20:17:15.117Z · LW(p) · GW(p)

Didn't have one what?

And where intuitions are strong and varying, we can't use them to decide between ethical systems.

Replies from: MixedNuts
comment by MixedNuts · 2011-06-21T20:50:10.579Z · LW(p) · GW(p)

Who didn't have a strong intution.

The problem isn't lack of intuitions, it's conflict between them. Agree this makes them useless, but the effects are different - construct a general system from a mostly unrelated set of intuitions vs invalidate some intuitions.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-21T21:31:45.054Z · LW(p) · GW(p)

Hmm. There's plenty of conflict between "abortion is right/wrong", and very little between "murder is right/wrong".

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-21T22:29:03.525Z · LW(p) · GW(p)

But plenty of conflict on what is/isn't murder.

comment by Will_Sawin · 2011-06-21T22:33:54.062Z · LW(p) · GW(p)

I'm arguing that, if you are a deontologist, for all A such that if the world were in state B you would press a button that changed it to A, this dialogue could occur:

You: "Hi, Omega"

Omega: "The world is currently in state B. I have a button that changes it to state C. Wanna press it?"

You: "No, that would be immoral."

Omega: "Well, I pressed it for you."

You: "That was an immoral thing you just did!"

Omega: "Well, cheer up. This new button will not only fix my earlier immoral action and return us to state B, but also bring us to the superior world of world A!"

You: "Sounds awesome."

Omega: "Wanna press it?"

You: "No, that would be immoral."

Replies from: wedrifid, Manfred, Alicorn, Peterdjones
comment by wedrifid · 2011-06-22T00:46:44.537Z · LW(p) · GW(p)

The parent seems to be correct and the point an obvious one. That is a trait - and arguable weakness - of deontological systems. It doesn't show that deontological systems are bad, just explains the most significant difference between the actions dictated by vaguely similar utilitarian and deontological value systems.

comment by Manfred · 2011-06-21T23:46:38.589Z · LW(p) · GW(p)

This sounds suspiciously like evaluating deontology by saying "well, it doesn't lead to maximum utility."

In order to make this work you need to justify the properties of utility-maximization that you use from common principles - if these principles (consequentialism being the notable one here, I think) are not accepted, then of course the utilitarian answer won't be accepted.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-22T00:32:13.036Z · LW(p) · GW(p)

I'm using something along the lines of transitivity.

Deontology violates the principle "Two wrongs don't make a right" and this bothers me.

Replies from: wedrifid
comment by wedrifid · 2011-06-22T00:53:10.221Z · LW(p) · GW(p)

Deontology violates the principle "Two wrongs don't make a right" and this bothers me.

I don't understand your point here. Deontology can implement all sorts of "two wrongs make a right" rules. It also seems strange to see deontology criticised for violating what appears to be more or less a deontological principle itself.

To be honest it seems like Manfred suggested a quite reasonable way to evaluate deontology:

This sounds suspiciously like evaluating deontology by saying "well, it doesn't lead to maximum utility."

Damn right. Deontology makes bad stuff happen. Don't do it!

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-22T00:56:55.391Z · LW(p) · GW(p)

I don't understand your point here. Deontology can implement all sorts of "two wrongs make a right" rules. It also seems strange to see deontology criticised for violating what appears to be more or less a deontological principle itself.

I think you misunderstand what I mean by "Two wrongs don't make a right". It's not a moral rule, it's a logical (perhaps meta-moral?) rule. It says that if an action is wrong, and another action is wrong, then doing the first action, then the second, in rapid succession is wrong.

With enough logical rules like that, you can prove the existence of a preference order, thus deriving consequentialism.
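A toy sketch of the kind of derivation gestured at here (world names invented): read "doing X is wrong" as "X moves to a dispreferred world", close the worse-than relation under composition (the "two wrongs" rule), and check that it is acyclic; any acyclic relation extends to a total preference order, e.g. by topological sort.

```python
from graphlib import TopologicalSorter

# An edge (a, b) means "moving from world a to world b is wrong",
# i.e. a is preferred to b.
wrongs = {("A", "B"), ("B", "C")}

def close(edges):
    # "Two wrongs don't make a right" as transitive closure: if a -> b is
    # wrong and b -> c is wrong, then a -> c (done in succession) is wrong.
    edges = set(edges)
    while True:
        new = {(a, d) for (a, b) in edges for (c, d) in edges if b == c}
        if new <= edges:
            return edges
        edges |= new

closed = close(wrongs)
assert ("A", "C") in closed  # derived: A -> C is also wrong

# An acyclic worse-than relation extends to a total preference order.
graph = {}  # node -> set of nodes that must come before it (its "betters")
for better, worse in closed:
    graph.setdefault(worse, set()).add(better)
order = list(TopologicalSorter(graph).static_order())
print(order)  # a consistent ranking, best first: ['A', 'B', 'C']
```

This is only the flavor of the argument, not the full von Neumann-Morgenstern result, which additionally handles lotteries and requires axioms like continuity and independence.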

Damn right. Deontology makes bad stuff happen. Don't do it!

This is roughly my perspective, of course, I don't think this argument would convince many deontologists.

This is another way of explaining why some of my posts in this thread are downvoted.

Replies from: wedrifid
comment by wedrifid · 2011-06-22T01:06:33.200Z · LW(p) · GW(p)

This is roughly my perspective, of course, I don't think this argument would convince many deontologists.

Of course not. (I don't find it all that useful to try to convince people to not have objectionable preferences of any kind. It does not tend to work.)

This is another way of explaining why some of my posts in this thread are downvoted.

Because you are arguing with deontologists? That was approximately my conclusion.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-22T01:08:56.334Z · LW(p) · GW(p)

Because I am doing so poorly.

comment by Alicorn · 2011-06-21T23:05:03.265Z · LW(p) · GW(p)

I don't follow. Can you give a more specific example for A, B, and C?

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-22T00:35:21.207Z · LW(p) · GW(p)

A = the world of today
B = the world of today, but all of Bill Gates's money is now Alicorn's money
C = the world of today, but everyone also owns a delicious chocolate-chip cookie

Moving from A=>B violates Bill Gates's rights. Moving from B=>C violates your rights.

Replies from: Perplexed
comment by Perplexed · 2011-06-22T00:42:08.574Z · LW(p) · GW(p)

Does world B contain someone who stole Bill's money? Does world C contain someone who stole Alicorn's money?

One reason that you are having trouble seeing the world as a deontologist sees it is that you stubbornly refuse to even try.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-22T00:51:52.892Z · LW(p) · GW(p)

In the example, yes, Omega, and yes, peterdjones.

But isn't preventing the existence of people who have stolen a consequentialist goal?

Replies from: Perplexed
comment by Perplexed · 2011-06-22T01:25:03.061Z · LW(p) · GW(p)

isn't preventing the existence of people who have stolen a consequentialist goal?

Taking into account the existence of people who have stolen is one way for a consequentialist to model the thinking of deontologists. If a consequentialist includes history of who-did-what-to-whom in his world states, he is capturing all of the information that a deontologist considers. Now, all that is left is to construct a utility function that attaches value to the history in the way that a deontologist would.

Voila! Something that approximates successful communication between deontologist and consequentialist.
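Perplexed's modeling trick can be made concrete. Here is a toy sketch (my own construction; the names, the "theft" record format, and the penalty are all illustrative, not from the thread): world states carry a who-did-what-to-whom history, and the utility function attaches a large negative value to recorded violations, so two materially identical states can rank differently.

```python
# Toy sketch of "consequentialism over history-carrying states".
# All names and numbers are illustrative assumptions, not from the thread.

def steal(world, thief, victim, amount):
    """Return a new world state, recording the theft in the history."""
    wealth = dict(world["wealth"])
    wealth[thief] = wealth.get(thief, 0) + amount
    wealth[victim] = wealth.get(victim, 0) - amount
    history = world["history"] + ((thief, "theft", victim),)
    return {"wealth": wealth, "history": history}

def utility(world, theft_penalty=1000):
    # Total wealth, minus a large penalty per recorded theft: the history
    # term makes "state B reached by stealing" worse than the same material
    # state reached innocently -- which is the deontologist's distinction.
    thefts = sum(1 for (_, act, _) in world["history"] if act == "theft")
    return sum(world["wealth"].values()) - theft_penalty * thefts

a = {"wealth": {"Bill": 100, "Alicorn": 0}, "history": ()}
b = steal(a, "Alicorn", "Bill", 100)

assert sum(a["wealth"].values()) == sum(b["wealth"].values())  # same total wealth
assert utility(b) < utility(a)  # but the recorded theft makes B rank lower
```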

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-22T01:30:38.209Z · LW(p) · GW(p)

Unfortunately, all I can do is imagine a heated contest between two people over which of them is going to do some evil action XYZ that is going to be done regardless. They each want to ensure that they don't do it, but for some reason it will necessarily be done, so they come to blows over it.

I may, in fact, be constitutionally incapable of successful communication with deontologists.

Replies from: Perplexed
comment by Perplexed · 2011-06-22T01:52:07.157Z · LW(p) · GW(p)

I'm not following you. Why is evil action XYZ going to be done regardless? Are you imagining that deontologists seek to have other people do their dirty deeds for them?

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-22T01:59:13.261Z · LW(p) · GW(p)

Well, exactly. It's a possible situation in the mathematical framework of who-did-what-to-whom you created. I thought of it before I thought of a reason why. For many definitions of what "who-did-what-to-whom" means, a sufficiently clever reason why would be constructed.

Maybe it must be done to prevent bad stuff.

Maybe it's a fact of the psychology of these two individuals that one of them is going to do it.

Maybe an AI in a box is going to convince one of two people with the power to release it, to release it - this is sort of like the last one?

comment by Peterdjones · 2011-06-21T23:50:34.154Z · LW(p) · GW(p)

That is still hard to follow[*]. You seem to be saying that if a deontologist has the rule "don't make the world worse" they must also have a rule "don't make the world better". I can't think of the slightest justification of that.

[*] And I have no idea how anyone is supposed to work out the scenario in the parent from the potted version in the great-grandparent.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-22T00:30:41.072Z · LW(p) · GW(p)

No, this is not the case. You have to cleverly choose B.

So let's say, in both A and C, Eliezer Yudkowsky has a sack of gold. In B, Yvain has that sack of gold.

In one deontological morality, stealing gold from Eliezer and giving it to Yvain is always immoral, as is the opposite-directional theft.

This means that changing from A to B and changing from B to C are immoral.

(The fundamental problem here is that, while I am driven to respond to your comments, I am not driven to put much effort into those responses. I am still not sure which behavior to change, but together they are certainly pathological.)

Replies from: Peterdjones, Perplexed
comment by Peterdjones · 2011-06-22T02:42:25.322Z · LW(p) · GW(p)

I don't hold to that one deontological morality. I think Jean Valjean was right to steal the bread. I think values/rules/duties tend to conflict, and resolution of such conflicts needs values/rules/duties to be arranged hierarchically. Thus the rightness of preventing his nephew's starvation overrides the wrongness of stealing the bread. ( "However, there is a difference between deontological ethics and moral absolutism" )

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-22T02:52:18.872Z · LW(p) · GW(p)

Requiring me to think up the example before telling me the exact nature of your morality is unfair.

If telling me the exact nature is difficult enough to be a bad idea, we probably just need to terminate the discussion, but I can also talk about how this kind of principle can be formalized into a dutch-book-like argument.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-22T03:03:45.631Z · LW(p) · GW(p)

I don't have to have an exact morality to be sceptical of the idea that consequentialism is the One True Theory.

Replies from: wedrifid, Will_Sawin
comment by wedrifid · 2011-06-22T03:17:37.972Z · LW(p) · GW(p)

Requiring me to think up the example before telling me the exact nature of your morality is unfair.

I don't have to have an exact morality to be sceptical of the idea that consequentialism is the One True Theory.

This reply does not fit the context. If Will is asked to instantiate from a general principle to a specific example then it is not reasonable to declare the general principle null because the specific example does not apply to the morality you happen to be thinking of.

(And the "One True Theory" business is a far less subtle straw man.)

comment by Will_Sawin · 2011-06-22T03:09:59.030Z · LW(p) · GW(p)

Suppose you have a system with some set of states, such that changing from state A to state B is either OK or not OK.

Then assuming you accept:

1. It's OK to move from A to A.

2. If it's OK to move from A to B and OK to move from B to C, then it's OK to move from A to C.

3. If it's not OK to move from A to B and not OK to move from B to C, then it's not OK to move from A to C.

then you get a preference order on the states. Presto, consequentialism.
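The claim can be checked mechanically. A brute-force sketch of my own (not from the thread): enumerate every OK/not-OK relation on three states and verify that the relations satisfying the three axioms are exactly the total preorders, i.e. each one induces a preference ranking of the states. (Totality falls out of the axioms: if neither A→B nor B→A were OK, the third axiom would make A→A not OK, contradicting the first.)

```python
# Brute-force check (my construction): on 3 states, the relations satisfying
# the three axioms above are exactly the total preorders, so each one
# induces a preference ordering of the states.
from itertools import product

STATES = (0, 1, 2)
PAIRS = [(a, b) for a in STATES for b in STATES]

def satisfies_axioms(ok):
    if not all(ok[a, a] for a in STATES):               # axiom 1: A -> A is OK
        return False
    for a, b, c in product(STATES, repeat=3):
        if ok[a, b] and ok[b, c] and not ok[a, c]:      # axiom 2: OK is transitive
            return False
        if not ok[a, b] and not ok[b, c] and ok[a, c]:  # axiom 3: not-OK is transitive
            return False
    return True

def is_total_preorder(ok):
    reflexive = all(ok[a, a] for a in STATES)
    total = all(ok[a, b] or ok[b, a] for (a, b) in PAIRS)
    transitive = all(not (ok[a, b] and ok[b, c]) or ok[a, c]
                     for a, b, c in product(STATES, repeat=3))
    return reflexive and total and transitive

matches = []
for bits in product([False, True], repeat=len(PAIRS)):  # all 512 relations
    ok = dict(zip(PAIRS, bits))
    if satisfies_axioms(ok):
        matches.append(ok)

assert all(is_total_preorder(ok) for ok in matches)
assert len(matches) == 13  # the 13 weak orders on a 3-element set
```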

Replies from: Peterdjones
comment by Peterdjones · 2011-06-22T03:22:54.190Z · LW(p) · GW(p)

If it's OK to make a transition because of the nature of the transition (it's an action which follows certain rules, respects certain rights, arises from certain intentions), then there is no need to re-explain the ordering of A, B and C in terms of anything about the states themselves -- the ordering is derived from the transitions.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-22T03:25:07.754Z · LW(p) · GW(p)

But if the properties of the transitions can be derived from the properties of the states, then it's so much SIMPLER to talk about good states than good transitions.

Replies from: endoself
comment by endoself · 2011-06-22T03:45:34.959Z · LW(p) · GW(p)

Simplicity is tangential here; we are discussing what is right, not how to most efficiently determine it.

In what circumstances do you two actually disagree as to what one should do (I expect Peter to be more likely to answer this well as he is more familiar with typical LessWrongian utilitarianisms than Will is with Peter's particular deontology)?

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-22T04:10:44.890Z · LW(p) · GW(p)

Well, a better way to frame what I said is:

If those axioms hold, then a consequentialist moral framework is right.

You can argue that those axioms hold and yet consequentialism is not the One True Moral Theory, but it seems like an odd position to take on a purely definitional level.

(also, Robert Nozick violates those axioms, if anyone still cares about Robert Nozick, and the bag-of-gold example works on him)

Replies from: Peterdjones
comment by Peterdjones · 2011-06-22T04:30:04.096Z · LW(p) · GW(p)

If those axioms hold, then a consequentialist moral framework is right.

I don't see why. Why would the existence of an ordering of states be a sufficient condition for consequentualism? And didn't you need the additional argument about simplicity to make that work?

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-22T14:37:29.010Z · LW(p) · GW(p)

So consequentialism says "doing right is making good". But it doesn't say what "making good" means. So it's a family of moral theories.

What moral theories are part of the consequentialist family? All theories that can be expressed as "doing right is making X" for some X.

If I show that your moral theory can be expressed in that manner, I show that you are, in this sense, a consequentialist.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-22T16:15:29.308Z · LW(p) · GW(p)

And if I can show that consequentialism needs to be combined with rules (or something else), does that prove consequentialism is really deontology (or something else)? It is rather easy to show that any one-legged approach is flawed, but if we end up with a mixed theory we should not label it as a one-legged theory.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-22T16:45:49.730Z · LW(p) · GW(p)

Then you should end up violating one of the axioms and getting a not-consequentialism.

All consequentialist theories produce a set of rules.

The right way to define "deontology", then, is a theory that is a set of rules that couldn't be consequentialist.

If you mix consequentialism and deontology, you get deontology.

Replies from: Alicorn, Eugine_Nier
comment by Alicorn · 2011-06-22T16:48:48.879Z · LW(p) · GW(p)

If you mix consequentialism and deontology you get Nozickian side-constraints consequentialism.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-22T16:58:42.575Z · LW(p) · GW(p)

Good example. You could have consequentialism about what you should do, and deontology about what you should refrain from.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-22T17:12:58.082Z · LW(p) · GW(p)

Considering that this whole discussion was about how Robert Nozick isn't (wasn't?) a consequentialist, I think for these purposes we should classify his views as not consequentialism.

comment by Eugine_Nier · 2011-06-23T04:48:17.673Z · LW(p) · GW(p)

Would you count Timeless Decision Theory as deontological since it isn't pure consequentialism?

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-23T14:16:23.520Z · LW(p) · GW(p)

No, it's a decision theory, not an ethical theory.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2011-06-24T03:30:51.532Z · LW(p) · GW(p)

I don't understand the distinction you're making.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-24T03:32:48.123Z · LW(p) · GW(p)

Decision theories tell you what options you have: Pairs of actions and results.

Ethical theories tells you which options are superior.

Replies from: Eugine_Nier, wedrifid
comment by Eugine_Nier · 2011-06-24T04:11:34.524Z · LW(p) · GW(p)

Perhaps an example of what I mean will be helpful.

Suppose your friend is kidnapped and being held for ransom. Naive consequentialism says you should pay because you value his life more than the money. TDT says you shouldn't pay because paying counterfactually causes him to be kidnapped.

Note how in the scenario the TDT argument sounds very deontological.
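The distinction being pointed at here can be put in toy numbers (all of them mine, purely illustrative): evaluating only the single causal decision says pay, while evaluating the policy, which the kidnappers' behavior counterfactually depends on, says refuse.

```python
# Toy ransom model; all numbers are illustrative assumptions, not from the thread.
V_FRIEND = 100.0  # value of the friend's life
RANSOM = 50.0     # cost of paying

def causal_value(pay):
    # Naive view: the friend is already kidnapped, so only the downstream
    # consequences of this one decision count.
    return (V_FRIEND - RANSOM) if pay else 0.0

def policy_value(policy_pays):
    # TDT-flavored view: kidnappers decide whether to bother based on what
    # your policy would do, so the policy itself sets the kidnap probability.
    p_kidnap = 0.9 if policy_pays else 0.1
    outcome_if_kidnapped = (V_FRIEND - RANSOM) if policy_pays else 0.0
    return p_kidnap * outcome_if_kidnapped + (1 - p_kidnap) * V_FRIEND

assert causal_value(True) > causal_value(False)  # one-off view: pay
assert policy_value(False) > policy_value(True)  # policy view: refuse
```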

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-24T04:22:17.366Z · LW(p) · GW(p)

It sounds deontological, but it isn't. It's consequentialist. It evaluates options according to their consequences.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2011-06-24T04:35:05.589Z · LW(p) · GW(p)

"Consequences" only in a counterfactual world. I don't see how you can call this consequentialist without streching the term to the point that it could include nearly any morality system. In particular by your definition Kant's categorical imperative is consequentialist since it involves looking at the consequences of your actions in the hypothetical world where everyone performs them.

Replies from: SilasBarta, benelliott, Will_Sawin
comment by SilasBarta · 2011-06-24T16:48:51.720Z · LW(p) · GW(p)

Yes, in that TDT-like decision/ethical theories are basically "consequentialism in which you must consider 'acausal consequences'".

While it may seem strange to regard ethical theories that apply Kant's CI as "consequentialist", it's even stranger to call them deontological, because there is no deontic-like "rule set" they can be said to be following; it's all simple maximization, albeit with a different definition of what you count as a benefit. TDT, for example, considers not only what your action causes (in the technical sense of future results), but the implications of the decision theory you instantiate having a particular output.

(I know there are a lot of comments I need to reply to, I will get to them, be patient.)

Replies from: wedrifid
comment by wedrifid · 2011-06-24T17:32:43.785Z · LW(p) · GW(p)

While it may seem strange to regard ethical theories that apply Kant's CI as "consequentialist", it's even stranger to call them deontological, because there is no deontic-like "rule set" they can be said to following;

It certainly is strange even if it is trivially possible. Any 'consequentialist' system can be implemented in a singleton deontological 'rule set'. In fact, that's the primary redeeming feature of deontology. Kind of like the best thing about Java is that you can use it to implement JRuby and bypass all of Java's petty restrictions and short sighted rigidly enforced norms.

comment by benelliott · 2011-06-24T15:32:41.549Z · LW(p) · GW(p)

Both CDT and TDT compare counter-factuals, they just take their counter-factual from different points in the causal graph.

In both cases, while computing them you never assume anything which you know to be false, whereas Kant is not like that. (Just realised, I'm not sure this is right).

Replies from: Eugine_Nier, Will_Sawin
comment by Eugine_Nier · 2011-06-25T03:48:15.420Z · LW(p) · GW(p)

In both cases, while computing them you never assume anything which you know to be false

Counterfactual mugging and the ransom problem I mentioned in the great-grandparent are both cases where TDT requires you to consider consequences of counterfactuals you know didn't happen. Omega's coin didn't come up heads, and your friend has been kidnapped. Nevertheless you need to consider the consequences of your policy in those counterfactual situations.

Replies from: benelliott, Will_Sawin
comment by benelliott · 2011-06-25T08:58:50.365Z · LW(p) · GW(p)

I think counterfactual mugging was originally brought up in the context of problems which TDT doesn't solve, that is, it gives the obvious but non-optimal answer. The reason is that regardless of my counterfactual decision Omega still flips the same outcome and still doesn't pay.

comment by Will_Sawin · 2011-06-25T05:20:42.582Z · LW(p) · GW(p)

There are two rather different things both going under the name counterfactuals.

One is when I think of what the world would be like if I did something that I'm not going to do.

Another is when I think of what the world would be like if something not under my control had happened differently, and how my actions affect that.

They're almost orthogonal, so I question the utility of using the same word.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2011-06-25T08:57:52.229Z · LW(p) · GW(p)

One is when I think of what the world would be like if I did something that I'm not going to do.

Another is when I think of what the world would be like if something not under my control had happened differently, and how my actions affect that.

Well, I've been consistently using the word "counterfactual" in your second sense.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-25T15:31:53.937Z · LW(p) · GW(p)

Well that might explain some of our miscommunication. I'll go back and check.

Consequences" only in a counterfactual world. . I don't see how you can call this consequentialist without streching the term to the point that it could include nearly any morality system.

This makes sense using the first definition, at least, according to TDT it does.

Both CDT and TDT compare counter-factuals, they just take their counter-factual from different points in the causal graph.

This is clearly using the first definition.

Counterfactual mugging and the ransom problem I mentioned in the great-grandparent are both cases where TDT requires you to consider consequences of counterfactuals you know didn't happen.

This only makes sense with the second, and should probably be UDT rather than TDT - the original TDT didn't get the right answer on the counterfactual mugging.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2011-06-25T23:13:50.729Z · LW(p) · GW(p)

This only makes sense with the second, and should probably be UDT rather than TDT - the original TDT didn't get the right answer on the counterfactual mugging.

Sorry, I meant something closer to UDT.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-26T03:17:25.203Z · LW(p) · GW(p)

Alright cool. So I think that's what's going on - we all agree but were using different definitions of counterfactuals.

comment by Will_Sawin · 2011-06-24T18:23:16.724Z · LW(p) · GW(p)

You need a proof-system to ensure that you never assume anything which you know to be false.

ADT and some related theories have achieved this. I don't think TDT has.

Replies from: benelliott
comment by benelliott · 2011-06-24T21:10:09.336Z · LW(p) · GW(p)

What I meant by that statement was the idea that CDT works by basing counterfactuals on your action, which seems a reasonable basis for counterfactuals since prior to making your decision you obviously don't know what your action will be. TDT similarly works by basing counterfactuals on your decision, which you also don't know prior to making it.

Kant, on the other hand, bases his counter-factuals on what would happen if everyone did that, and it is possible that his will involve assuming things I know to be false in a sense that CDT and TDT don't (e.g. when deciding whether to lie I evaluate possible worlds in which everyone lies and in which everyone tells the truth, both of which I know not to be the case).

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-24T21:18:03.447Z · LW(p) · GW(p)

Well here is the issue.

Let's say I have to decide what to do at 2 o'clock tomorrow. If I light a stick of dynamite, I will be exploded. If I don't, then I won't. I can predict that I will, in fact, not light a stick of dynamite tomorrow. I will then know that one of my counterfactuals is true and one is false.

This can mess up the logic of decision-making. There are proposed ways of handling this (see http://lesswrong.com/lw/2l2/what_a_reduction_of_could_could_look_like/). This ensures that you can never figure out a decision before making it, which makes things simpler.

I'm not sure if this contradicts what you've said.

And I would agree exactly with your analysis about what's wrong with Kant, and how that's different from CDT and TDT.

Replies from: benelliott
comment by benelliott · 2011-06-24T21:28:52.468Z · LW(p) · GW(p)

I'm not sure I agree with myself. I think my analysis makes sense for the way TDT handles Newcomb's problem or Prisoner's dilemma, but it breaks down for Transparent Newcomb or Parfit's Hitch-hiker. In those cases, owing to the assistance of a predictor, it seems like it is actually possible to know your decision in advance of making it.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-24T21:42:37.074Z · LW(p) · GW(p)

Well you always know that one of your counterfactuals is true.

And Transparent Newcomb is a bit weird because one of the four possible strategies just explodes it.

Replies from: Vladimir_Nesov, benelliott
comment by Vladimir_Nesov · 2011-06-25T01:22:15.801Z · LW(p) · GW(p)

Well you always know that one of your counterfactuals is true.

There is no need to make that assumption. The whole collection of possible decisions could be located on an impossible counterfactual. Incidentally, this is one way of making sense of Transparent Newcomb.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-25T02:32:05.948Z · LW(p) · GW(p)

Would you ever actually be in a situation where you chose an action tied to an impossible counterfactual? Wouldn't that represent a failure of Omega's prediction?

And since you always choose an action...

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-06-25T02:39:38.128Z · LW(p) · GW(p)

It matters what you do when you are in an actually impossible counterfactual, because when earlier you decide what decision theory you'd be using in that counterfactual, you might yet not know that it is impossible, and so you need to precommit to act sensibly even in the situation that doesn't actually exist (not that you would know that if you get in that situation). Seriously. And sometimes you take an action that determines the fact that you don't exist, which you can easily obtain in a variation on Transparent Newcomb.

When you make the precommitment-to-business-as-usual conversion, you get a principle that decision theory shouldn't care about whether the agent "actually exists", and focus on what it knows instead.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-25T02:45:34.836Z · LW(p) · GW(p)

Yes. The actually impossible counterfactuals matter. All I'm saying is that the possible counterfactuals exist.

If you took such an action, wouldn't you not exist? I request elaboration.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-06-25T03:00:37.736Z · LW(p) · GW(p)

(You've probably misunderstood, I edited for clarity; will probably reply later, if that is not an actually impossible event.)

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-25T03:03:55.005Z · LW(p) · GW(p)

New reply: Yes, I agree.

All I'm saying is that when you actually make choices in reality, the counterfactual you end up using will happen. When a real Kant-Decision-Theory user makes choices, his favorite counterfactual will fail to actually occur.

comment by benelliott · 2011-06-24T21:48:56.773Z · LW(p) · GW(p)

You could possibly fix that by saying Omega isn't perfect, but his predictions are correlated enough with your decision to make precommitment possible.

comment by Will_Sawin · 2011-06-24T04:46:04.999Z · LW(p) · GW(p)

Yes. However that decision theory is wrong and dumb so we can ignore it. In particular, it never produces factuals, only counterfactuals.

comment by wedrifid · 2011-06-24T03:51:15.565Z · LW(p) · GW(p)

Decision theories tell you what options you have: Pairs of actions and results.

You don't need decision theories for that. You can get that far with physics and undirected imagination.

Replies from: Normal_Anomaly, Will_Sawin
comment by Normal_Anomaly · 2011-06-24T15:25:48.818Z · LW(p) · GW(p)

How about this:

Physics tells you pairs of actions and results.

Ethical theories tell you what results to aim for.

Decision theories combine the two.
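A minimal sketch of that decomposition (the function names and the toy domain are mine, borrowed from the dynamite example elsewhere in this thread):

```python
# Toy decomposition: physics maps actions to results, ethics scores results,
# and the decision procedure combines the two. Domain invented for illustration.
def physics(action):
    return {"light_dynamite": "exploded", "refrain": "intact"}[action]

def ethics(result):
    return {"exploded": -100.0, "intact": 0.0}[result]

def decide(actions):
    # the "combine" step: pick the action whose physical result scores best
    return max(actions, key=lambda a: ethics(physics(a)))

assert decide(["light_dynamite", "refrain"]) == "refrain"
```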

comment by Will_Sawin · 2011-06-24T04:20:42.181Z · LW(p) · GW(p)

That's only true if you're a human being.

Replies from: wedrifid
comment by wedrifid · 2011-06-24T05:42:35.447Z · LW(p) · GW(p)

That's only true if you're a human being.

That is not my understanding. The only necessary addition to physics is "any possible mechanism of varying any element in your model of the universe". I.e. you need physics and a tiny amount of closely related mathematics. That will give you a function that gives you every possible action -> result pair.

I believe this only serves to strengthen your main point about the possibility of separating epistemic investigation from ethics entirely.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-24T15:07:04.995Z · LW(p) · GW(p)

"any possible mechanism of varying any element in your model of the universe".

That's a decision theory. For instance, if you perform causal surgery, that's CDT. If you change all computationally identical elements, that's TDT. And so on.
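The distinction can be sketched as two kinds of "surgery" on a toy model (the model, node names, and Newcomb-style payoff are my own illustration, not the formal theories):

```python
# Toy Newcomb-style model: "you" and a predictor's simulation of you are two
# nodes running the same computation. CDT-style surgery overwrites only your
# action node; TDT-style surgery overwrites every computationally identical node.
def run(model, overrides):
    # model: ordered {name: fn(values_so_far)}; overrides replace a node's output
    values = {}
    for name, fn in model.items():
        values[name] = overrides.get(name, fn(values))
    return values

model = {
    "my_decision": lambda v: "one-box",
    "predictor_sim": lambda v: "one-box",  # identical computation, different node
    "box_filled": lambda v: v["predictor_sim"] == "one-box",
}

# CDT-style causal surgery: intervene only on the physical action node.
cdt = run(model, {"my_decision": "two-box"})
assert cdt["box_filled"] is True   # the simulation is untouched by the surgery

# TDT-style surgery: intervene on all computationally identical nodes at once.
tdt = run(model, {"my_decision": "two-box", "predictor_sim": "two-box"})
assert tdt["box_filled"] is False  # the simulated copy changes too
```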

Replies from: wedrifid
comment by wedrifid · 2011-06-24T17:25:37.265Z · LW(p) · GW(p)

That's a decision theory. For instance, if you perform causal surgery, that's CDT. If you change all computationally identical elements, that's TDT. And so on.

I don't agree. A decision theory will sometimes require the production of action-result pairs, as is the case with CDT, TDT and any other decision algorithm with a consequentialist component. Yet not all production of such pairs is a 'decision theory'. A full mathematical model mapping every possible state to the outcomes produced is not a decision theory in any meaningful sense. It is just a solid understanding of all of physics.

On one hand we have (physics + the ability to consider counterfactuals) and on the other we have systems for choosing specific counterfactuals to consider and compare.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-24T18:20:54.336Z · LW(p) · GW(p)

If you don't have a system to choose specific counterfactuals, that leaves you with all counterfactuals, that is, all world-histories, theoretically possible and not. How do you use that list to make decisions?

Replies from: wedrifid
comment by wedrifid · 2011-06-24T18:23:25.229Z · LW(p) · GW(p)

If you don't have a system to choose specific counterfactuals, that leaves you with all counterfactuals, that is, all world-histories, theoretically possible and not. How do you use that list to make decisions?

That is my point. That is what the decision theory is for!

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-24T18:41:22.817Z · LW(p) · GW(p)

I reassert my claim that:

Decision theories tell you what options you have: Pairs of actions and results.

Your null-decision theory doesn't tell you what options you have. It tells you what options you would have, were you God.

Replies from: Vladimir_Nesov, wedrifid
comment by Vladimir_Nesov · 2011-06-24T19:33:42.404Z · LW(p) · GW(p)

This is a claim about definitions. You don't seem to disagree with wedrifid on any question of substance in this thread.

comment by wedrifid · 2011-06-24T18:52:04.282Z · LW(p) · GW(p)

I reassert my claim that:

Ok, and it is still a claim that doesn't refute anything I have previously said. This conversation is going nowhere. exit(5)

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-24T19:38:02.541Z · LW(p) · GW(p)

Exit totally reasonable. I just need to point out one thing:

It wasn't a claim in response to anything you said. It was a response to Eugene Nier.

Replies from: wedrifid
comment by wedrifid · 2011-06-24T20:24:44.638Z · LW(p) · GW(p)

It wasn't a claim in response to anything you said. It was a response to Eugene Nier.

It would have made more sense to me if it was made in reply to the relevant comment by Eugene.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-24T20:52:15.415Z · LW(p) · GW(p)

This conversation is kinda pointless. Therefore, my response comes in a short version and a long version.

Short:

Sorry, that was unclear. I did not make the mistake your last post implies I made. I'm pretty sure you've made some mistakes, but they're really minor. We have nothing left to discuss.

Long:

Sorry, that was unclear.

The first time I posted it, it was a response to Eugene. Then you responded, criticizing it. Then, finally, it appears like we agree, so I reassert my original claim to make sure. In that context, this response is strange:

Ok, and it is still a claim that doesn't refute anything I have previously said.

I wasn't trying to refute you with this claim, I was trying to refute Eugene, then you tried to refute the claim.

comment by Perplexed · 2011-06-22T00:45:24.874Z · LW(p) · GW(p)

Voted down vigorously. If you can't make the effort to make yourself understood, STFU.

Replies from: orthonormal, wedrifid
comment by orthonormal · 2011-06-27T21:57:29.111Z · LW(p) · GW(p)

According to the search tool, this was Less Wrong's first use of "STFU" directed at another contributor. I'm pretty proud of the site for having avoided this term, and I'm pretty chagrined at you for having broken the streak.

comment by wedrifid · 2011-06-22T01:10:13.096Z · LW(p) · GW(p)

Voted down vigorously. If you can't make the effort to make yourself understood, STFU.

It should be no surprise that this outburst made me far more inclined to view the grandparent in a positive light. In this case the actual content of Will's comment seems easy to understand. Given Peterdjones's aggressive use of his own incomprehension, Will was rather more patient than he could have been. He could have linked to a Wikipedia article on the subject so that Peterdjones could get a grasp of the basics.

Replies from: Perplexed
comment by Perplexed · 2011-06-22T01:45:39.483Z · LW(p) · GW(p)

Will was rather more patient than he could have been.

Rather less careful, I would say. He failed to notice the typo above until nsheperd pointed it out - the original source of the confusion. And then later he began a comment with:

No, this is not the case. You have to cleverly choose B.

I have no idea at all what "is not the case". And I also don't know when anyone was offered the opportunity to cleverly choose B.

Will's description of his own limited motivation to communicate is the only portion of this thread which is crystal clear.

Yes, by working pretty hard, I was able to ignore the initial typo and to anticipate the explanation of A, B, and C. As I point out elsewhere on this thread, I have some objections to the scenario (as leaving out some details important to deontologists). Perhaps PeterDJones had similar objections. Please notice that neither of us could object to Will's A-B-C story until it was actually spelled out. And Will resisted making the effort of spelling it out far too long.

My "STFU" was rude. But sometimes rudeness is appropriate.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-22T02:52:41.465Z · LW(p) · GW(p)

It seems to me the substance of Mr Savin's objection could have been expressed more briefly and clearly as "Deontologists would not steal under any circumstances". (Or even the familiar "Deontologists would not lie under any circumstances, even to save a life").

Replies from: wedrifid
comment by wedrifid · 2011-06-22T03:05:17.078Z · LW(p) · GW(p)

It seems to me the substance of Mr Savin's objection could have been expressed more briefly and clearly as "Deontologists would not steal under any circumstances".

That does not appear to be the case. Those are examples of other things that he could have said which would provide a more convenient target for your reply. Assuming you refer to Will_Sawin, that is.

comment by AdeleneDawner · 2011-06-14T04:24:29.504Z · LW(p) · GW(p)

*raises hand*

I haven't read any mainstream philosophy to speak of (I may have run into some somewhere and not noticed it as such, since I don't actually have a firm grasp on what it is and isn't), and wouldn't be especially surprised to find things in it that are at least as good as LW if I went looking for them.

comment by Thomas · 2011-06-18T08:46:31.041Z · LW(p) · GW(p)

Some problems are so complex that you have to be highly intelligent and well informed just to be undecided about them.

  • Laurence J. Peter
comment by advancedatheist · 2011-06-01T20:13:16.448Z · LW(p) · GW(p)

From Space Viking, by H. Beam Piper:

"Young man," Harkaman reproved, "the conversation was between Lord Trask and myself. And when somebody makes a statement you don't understand, don't tell him he's crazy. Ask him what he means. What do you mean, Lord Trask?"

Source:

http://www.gutenberg.org/files/20728/20728-h/20728-h.htm

Replies from: Eliezer_Yudkowsky, MatthewBaker, khafra
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-09-02T09:53:03.118Z · LW(p) · GW(p)

Space Viking has got to be one of the leading "way more rational than its title sounds like" books out there. I wonder if Piper actually named it that or if it was some bright-eyed publisher.

comment by MatthewBaker · 2011-09-04T11:22:51.161Z · LW(p) · GW(p)

Awesome book, made my night.

comment by khafra · 2011-06-03T18:12:03.855Z · LW(p) · GW(p)

This, plus the earlier positive mention of Space Viking on LW, has me reading it. Halfway through, I've just realized that it's basically a novelization of a .*craft style RTS game.

comment by MichaelGR · 2011-06-02T18:36:26.445Z · LW(p) · GW(p)

"The more you sweat in training, the less you bleed in war."

--WSJ article about Navy SEALs

Replies from: khafra, simplyeric
comment by khafra · 2011-06-03T13:12:07.179Z · LW(p) · GW(p)

I wonder how many other people on LW heard this quote first while in the process of sweating in training; and how many other military aphorisms could be repurposed this way.

comment by simplyeric · 2011-06-03T15:26:20.480Z · LW(p) · GW(p)

It's an interesting point but exceedingly simplistic, more so these days than ever before.
What about "the more you think in training", or "the more you learn in training"? Don't get me wrong, I'm not denying the value of sweat (exercise, fitness, etc.), I'm just saying it's not even close to the whole equation.

Replies from: MarkMk1, bcoburn
comment by MarkMk1 · 2011-06-03T17:30:14.901Z · LW(p) · GW(p)

Actually I think the full formula is "sweat saves blood, but brains save both". That's as relevant today as when it was first used, which was in the British Army, around the time of the Crimean War. I think. I wasn't there.

comment by bcoburn · 2011-06-03T17:34:10.603Z · LW(p) · GW(p)

"Sweat" here is a standin for generic effort, whether it's actual physical sweat or not depends on what exactly you're training for.

comment by phaedrus · 2011-06-02T00:26:26.246Z · LW(p) · GW(p)

‎"We apply fight-or-flight reflexes not only to predators, but to data itself." --Chris Mooney

Replies from: RobinZ
comment by RobinZ · 2011-07-01T17:32:10.151Z · LW(p) · GW(p)

I just got that one. It's a remark on bias, isn't it?

comment by Risto_Saarelma · 2011-06-12T17:30:47.764Z · LW(p) · GW(p)

The science fiction writer Arthur C. Clarke remarked that "any sufficiently advanced technology is indistinguishable from magic". Clarke was referring to the fantastic inventions we might discover in the future or in our travels to advanced civilizations. However, the insight also applies to self-perception. When we turn our attention to our own minds, we are faced with trying to understand an unimaginably advanced technology. We can't possibly know (let alone keep track of) the tremendous number of mechanical influences on our behavior because we inhabit an extraordinarily complicated machine. So we develop a shorthand, a belief in the causal efficacy of our conscious thoughts. We believe in the magic of our own causal agency.

  • Daniel M. Wegner, The Illusion of Conscious Will
Replies from: adamisom
comment by adamisom · 2011-06-23T18:04:08.628Z · LW(p) · GW(p)

Of course. But I wonder what the word "we" is referring to in this sentence: "So WE develop a shorthand...". Didn't that strike anybody else?

Replies from: Manfred
comment by Manfred · 2011-06-24T01:41:53.502Z · LW(p) · GW(p)

Nobody here but us brains.

comment by Jonathan_Graehl · 2011-06-05T02:40:05.844Z · LW(p) · GW(p)

No man has wit enough to reason with a fool.

Proyas (fictional character - author: R. Scott Bakker)

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-09-02T09:57:56.813Z · LW(p) · GW(p)

This strikes me as a nerdism. If you don't find less intelligent people easier to manipulate, you must be working on sympathetic models of them instead of causal ones. I expect that experience would cure this, and after a few months of empirical practice and updating on the task of reasoning with fools, you would find it was actually easier to get them to do whatever you wanted - if you could manage to actually try a lot of different things and notice what worked, instead of being incredulous and indignant at their apparent reasoning errors.

Replies from: Yvain, Jonathan_Graehl, lessdazed, Solvent
comment by Scott Alexander (Yvain) · 2011-09-02T10:36:34.806Z · LW(p) · GW(p)

Upvoted the original for reference to Prince of Nothing series. And upvoted this comment for the terms "sympathetic model" and "causal model", which is one of those times that having the right word for a concept you've been trying to understand is worth a month of trying to untangle things in your head.

...although now I'm not sure whether I should upvote Eliezer or Michael Vassar. It seems kind of unfair to deny Michael an upvote just because the specific instantiation of his algorithm that said this happened to be running on Eliezer's brain at the time.

Replies from: thomblake, Armok_GoB
comment by thomblake · 2011-09-02T13:37:13.499Z · LW(p) · GW(p)

having the right word for a concept you've been trying to understand is worth a month of trying to untangle things in your head

On a related note, it's a programming cliche that 90% of development time is trying to think up the right names for things.

Replies from: None
comment by [deleted] · 2011-09-04T14:38:48.908Z · LW(p) · GW(p)

"There are only two hard things in Computer Science: cache invalidation and naming things" - Phil Karlton

Replies from: Douglas_Knight
comment by Douglas_Knight · 2011-09-04T23:13:17.927Z · LW(p) · GW(p)

I read this out of context and interpreted "naming things" so that it generalized cache invalidation. So I wanted to complain that it's only one thing.

comment by Armok_GoB · 2011-09-02T13:02:53.483Z · LW(p) · GW(p)

I'd say both, although I'm actually too lazy to go find a random post by Michael and upvote it.

comment by Jonathan_Graehl · 2011-09-02T21:01:24.089Z · LW(p) · GW(p)

I agree with the Vassar-homunculus, but I took as the point that "reasoning with" may be the wrong tool - not that reasonable practice will fail to suggest the most effective hooks for manipulating the unreasonable fool.

Replies from: lessdazed
comment by lessdazed · 2011-09-02T21:39:41.836Z · LW(p) · GW(p)

I agree. The quote wasn't "No man has wit enough to manipulate a fool."

comment by lessdazed · 2011-09-02T10:13:59.917Z · LW(p) · GW(p)

do whatever you wanted

Not for the reasons wanted.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2011-09-02T21:04:12.787Z · LW(p) · GW(p)

Also a great addition to a psychological-thriller villain: he not only insists on compliance, but for the "right" reasons.

Replies from: lessdazed
comment by lessdazed · 2011-09-02T21:15:43.619Z · LW(p) · GW(p)

Which will be explained to the hero in due course while he is caught in the villain's trap, with escape impossible. Impossible I say!

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-09-02T21:37:06.224Z · LW(p) · GW(p)

But there is no independent existence of the hero's personality apart from their mind, so the hero doesn't just have the memes designed by the villain; the hero is the villain's memes.

comment by Solvent · 2011-09-02T10:17:24.029Z · LW(p) · GW(p)

My new goal in life is having Eliezer Yudkowsky respect me enough that he makes comments like this for me.

comment by Richard_Kennaway · 2011-06-01T11:08:54.897Z · LW(p) · GW(p)

If you have ten minutes unscheduled and the phone isn't ringing, what do you do? What do you start?

Seth Godin

comment by NancyLebovitz · 2011-06-08T22:00:01.758Z · LW(p) · GW(p)

"Three-fourths of philosophy and literature is the talk of people trying to convince themselves that they really like the cage they were tricked into entering."

-- Gary Snyder (bOING bOING #9, 1992)

I don't have a strong feeling about the accuracy of the percentage, but the general point sounds plausible.

comment by servumtuum · 2011-06-06T21:37:47.039Z · LW(p) · GW(p)

The essence of wisdom is to remain suspicious of what you want to be true.

-Jon K. Hart

Replies from: Document
comment by Document · 2011-06-07T12:18:38.251Z · LW(p) · GW(p)

Without wanting to start a debate: that belief kept me in Mormonism for about two unnecessary years.

Replies from: MixedNuts
comment by MixedNuts · 2011-06-07T12:26:32.789Z · LW(p) · GW(p)

Okay, so the essence of wisdom is to be exactly as suspicious of everything as you should be, the first-pass approximation of wisdom is to remain suspicious of what you want to be true, and the second-pass approximation of wisdom is to be also suspicious of current beliefs you want to be untrue.

Replies from: servumtuum
comment by servumtuum · 2011-06-08T05:09:40.532Z · LW(p) · GW(p)

MixedNuts, I take the quote as a mental "post-it note" reminder to be cognizant of the potential presence of confirmation bias, in both directions as you stated.

comment by loqi · 2011-06-01T18:54:15.180Z · LW(p) · GW(p)

If you can't think intuitively, you may be able to verify specific factual claims, but you certainly can't think about history.

Well, maybe we can't think about history. Intuition is unreliable. Just because you want to think intelligently about something doesn't mean it's possible to do so.

Jewish Atheist, in reply to Mencius Moldbug

Replies from: CuSithBell
comment by CuSithBell · 2011-06-02T19:51:48.233Z · LW(p) · GW(p)

I would think this an irrationality quote? "Fuzzy" thinking skills are ridiculously important. "Intuition" may be somewhat unreliable, but in certain domains and under certain conditions, it can be - verifiably - a very powerful method.

Replies from: shokwave, fburnaby, ChristianKl, loqi
comment by shokwave · 2011-06-06T15:42:50.131Z · LW(p) · GW(p)

I took it as rationality in the sense that it follows the form: "just because you want to do action x does not mean that action x is possible", which is always a good reminder.

Replies from: CuSithBell, Will_Sawin
comment by CuSithBell · 2011-06-09T15:02:43.031Z · LW(p) · GW(p)

That's so, but it's also true that just because you're personally not good at X, that does not mean that X is impossible or worthless.

comment by Will_Sawin · 2011-06-06T15:54:34.466Z · LW(p) · GW(p)

Maybe it would be best to shorten it?

comment by fburnaby · 2011-06-23T17:29:03.683Z · LW(p) · GW(p)

Yes. It depends whether we are in the context of discovery (or of "getting things done") or the context of justification.

comment by ChristianKl · 2011-06-19T22:54:01.484Z · LW(p) · GW(p)

You can't reasonably talk about darkness without talking about light. It's the same topic.

Replies from: CuSithBell
comment by CuSithBell · 2011-06-21T03:31:46.130Z · LW(p) · GW(p)

Do you mean to match up intuition and logic, or rationality quotes and irrationality quotes, or something else?

comment by loqi · 2011-06-10T01:47:35.874Z · LW(p) · GW(p)

Intuition is extremely powerful when correctly trained. Just because you want to have powerful intuitions about something doesn't mean it's possible to correctly train them.

comment by MarkusRamikin · 2011-06-08T15:26:07.714Z · LW(p) · GW(p)

"Attack and absorb the data that attack produces!"

-Tylwyth Waff in Heretics of Dune

(Hi. I'm new.)

Replies from: gwern
comment by gwern · 2011-06-12T20:52:58.008Z · LW(p) · GW(p)

I don't understand this one. Anyone want to explain it?

Replies from: Barry_Cotter, MarkusRamikin
comment by Barry_Cotter · 2011-06-12T22:17:49.707Z · LW(p) · GW(p)

Attack. Then, based upon the results of the attack, modify your behaviour. Or attack, then update your model of the enemy.

Replies from: simplyeric
comment by simplyeric · 2011-06-14T18:21:34.120Z · LW(p) · GW(p)

It seems (to me) to be analogous to a lot of fairly technical pursuits: seismic analysis from percussion events for finding oil. Tracking the impact of an object on the moon to detect water. Looking for the decay of particles produced and collided by accelerators. Pitching to a batter, over time, will reveal the best way to pitch to that batter (what are his/her strengths and weaknesses). Haggling.

Approaching its most distilled form: If a system is not giving you information, affect the system in some way [doesn't have to be an "attack" per se]. How the system changes based on your input is instructive, so absorb all of that data.

Replies from: MarkusRamikin
comment by MarkusRamikin · 2011-06-15T09:46:02.251Z · LW(p) · GW(p)

Exactly. Poke the confusing-thing, make it give up evidence about how it works. And pay attention to that evidence.

I like your last two examples because they involve situations where you don't have all the time in the world to approach a phenomenon, like we do (or at least feel like we do) when studying the fundamental and unchanging laws of Nature. You have to learn and adapt in real time.

Of course in a sports game you're already going to "attack" because it's part of the game. So the virtue lies in noticing the evidence it produces. It might seem like an obvious thing to say, but then you see people/teams repeating the same failed strategy over and over.

Haggling, negotiation, is pretty much the original context of the quote, and I think the immediate point was to avoid playing defensively and giving up initiative. Waff was trying to tell himself something like, "don't just sit there intimidated by this powerful and mysterious woman, letting her frame the conversation to her advantage. Look for a way to learn more. Probe. Evoke a response."

I'm also thinking of strategy games like, say, Starcraft. You want to commit some resources (units, time, your own attention) to scouting, in order to find out what types of units the enemy is relying on (so as to best counter them), which patches of valueable resource he has covered and how vulnerable they are to attack, how well he responds to raiding/harassment etc.

comment by MarkusRamikin · 2011-06-23T15:41:16.497Z · LW(p) · GW(p)

Another way of looking at it that (I think!) makes it obvious: it's advice for AI, in the AI-in-the-box experiment.

I imagine it's what Eliezer means when he says he won the experiment "the hard way". He just kept poking for a psychological mechanism to exploit until he found one that happened to work with the given person. And the intermediate, failed attempts, probably helped him model the person and narrow down on whatever actually might work.

comment by Patrick · 2011-06-07T03:45:19.154Z · LW(p) · GW(p)

If things are nice there is probably a good reason why they are nice: and if you do not know at least one reason for this good fortune, then you still have work to do.

Richard Askey

comment by NancyLebovitz · 2011-06-06T13:21:30.164Z · LW(p) · GW(p)

If we wait for the moment when everything, absolutely everything is ready, we shall never begin. - Ivan Turgenev

comment by MixedNuts · 2011-06-06T15:17:18.082Z · LW(p) · GW(p)

When shall we cross ourselves?

Whenever we are about to perform a good deed, or when we see or feel that we might commit a sin.

  • Carlos Gimenez, Barrio (Context: children in a religious institution are answering catechism questions)

This sounds like a great way to prime yourself. Crossing yourself has all the wrong connotations, but a gesture meaning "I choose good." should help in general. (I like the fist-over-heart Battlestar Galactica salute.)

Having a whole set of gestures, along with pithy quotes, should prove even more effective.

Replies from: Leonhart
comment by Leonhart · 2011-06-06T23:08:22.237Z · LW(p) · GW(p)

Their insignia was a hand poised with fingers ready to snap.

ETA: Or is that reserved for "I choose whatever they aren't expecting"?

comment by beoShaffer · 2011-06-12T20:53:58.885Z · LW(p) · GW(p)

Tom smiled. "Yes, Don't you like that idea?" "Liking it and having it be true aren't the same thing, Tom."

-Clive Barker, Abarat

comment by Pugovitz · 2011-06-07T19:18:02.141Z · LW(p) · GW(p)

"Try to learn something about everything and everything about something." ~Thomas H. Huxley

One of my favorite quotes; from the father of the word "agnostic."

comment by CharlesR · 2011-06-06T01:09:01.432Z · LW(p) · GW(p)

Smart people believe weird things because they are skilled at defending beliefs they arrived at for nonsmart reasons.

-- Michael Shermer

Replies from: Oscar_Cunningham, MixedNuts
comment by Oscar_Cunningham · 2011-06-06T22:23:16.431Z · LW(p) · GW(p)

Sometimes smart people believe weird things because they're actually, y'know, true.

comment by MixedNuts · 2011-06-06T15:42:28.533Z · LW(p) · GW(p)

The same Shermer who publicly recognizes that his widely-repeated "this is your brain on cryonics" is crap but won't even post a half-hearted correction? Yes. Yes they do.

comment by jscn · 2011-06-02T00:26:59.943Z · LW(p) · GW(p)

The intellect, as a means for the preservation of the individual, unfolds its chief powers in simulation; for this is the means by which the weaker, less robust individuals preserve themselves, since they are denied the chance of waging the struggle for existence with horns or the fangs of beasts of prey. In man this art of simulation reaches its peak: here deception, flattering, lying and cheating, talking behind the back, posing, living in borrowed splendor, being masked, the disguise of convention, acting a role before others and before oneself—in short, the constant fluttering around the single flame of vanity is so much the rule and the law that almost nothing is more incomprehensible than how an honest and pure urge for truth could make its appearance among men. They are deeply immersed in illusions and dream images; their eye glides only over the surface of things and sees "forms"; their feeling nowhere lead into truth, but contents itself with the reception of stimuli, playing, as it were, a game of blindman's buff on the backs of things.

Nietzsche, On Truth and Lie in an Extra-Moral Sense

comment by RobertLumley · 2011-06-02T00:19:35.406Z · LW(p) · GW(p)

"There always comes a time in history when the man who dares to say that two plus two equals four is punished with death … And the issue is not a matter of what reward or what punishment will be the outcome of that reasoning. The issue is simply whether or not two plus two equals four." – Albert Camus, The Plague

comment by wedrifid · 2011-06-01T09:57:43.352Z · LW(p) · GW(p)

A little knowledge that acts is worth infinitely more than much knowledge that is idle.

-- Kahlil Gibran

comment by Alicorn · 2011-06-24T01:22:42.261Z · LW(p) · GW(p)

"People argue against the existence of spirits and immaterial souls because they can't be explained by science. But if by definition these things are outside the scope of science, then you can't use science to prove or disprove them."

"Do these spirits and souls actually affect anything in the real world?"

"Sure."

"Then they're within the scope of science."

"Okay, let's say they don't interact at all with the world."

"Then why do we care?!?!"

--Calamities of Nature

comment by Richard_Kennaway · 2011-06-03T14:16:06.628Z · LW(p) · GW(p)

I see that I've quoted the following twice before within other comment threads, so I think it deserves a place here:

He who would be Pope must think of nothing else.

Usually cited as a Spanish proverb.

comment by beoShaffer · 2011-06-03T00:40:04.252Z · LW(p) · GW(p)

The art of concluding from experience and observation consists in evaluating probabilities, in estimating if they are high or numerous enough to constitute proof. This type of calculation is more complicated and more difficult than one might think. It demands a great sagacity generally above the power of common people. The success of charlatans, sorcerors, and alchemists — and all those who abuse public credulity — is founded on errors in this type of calculation.

Benjamin Franklin and Antoine Lavoisier, Rapport des commissaires chargés par le roi de l'examen du magnétisme animal (1784), as translated in "The Chain of Reason versus the Chain of Thumbs", Bully for Brontosaurus (1991) by Stephen Jay Gould, p. 195, http://en.wikiquote.org/wiki/Benjamin_Franklin

comment by RobertLumley · 2011-06-02T00:21:14.136Z · LW(p) · GW(p)

"If in other sciences we should arrive at certainty without doubt and truth without error, it behooves us to place the foundations of knowledge in mathematics." – Roger Bacon

Replies from: James_K
comment by James_K · 2011-06-02T19:04:56.518Z · LW(p) · GW(p)

It's even better when said by Leonard Nimoy.

Replies from: RobertLumley
comment by RobertLumley · 2011-06-02T22:13:40.698Z · LW(p) · GW(p)

Ah, so someone knows where I found this quote. :-)

comment by EvelynM · 2011-06-14T15:17:13.540Z · LW(p) · GW(p)

"Three passions, simple but overwhelmingly strong, have governed my life: the longing for love, the search for knowledge, and unbearable pity for the suffering of mankind." Bertrand Russell

comment by Will_Euler · 2011-06-12T20:47:50.841Z · LW(p) · GW(p)

"When he is confronted by the necessity for a decision, even one which may be trivial from a normal standpoint, the obsessive-compulsive person will typically attempt to reach a solution by invoking some rule, principle, or external requirement which might, with some degree of plausibility, provide a "right" answer....If he can find some principle or external requirement which plausibly applies to the situation at hand, the necessity for a decision disappears as such; that is, it becomes transformed into the purely technical problem of applying the correct principle. Thus, if he can remember that it is always sensible to go to the cheapest movie, or "logical" to go to the closest, or good to go to the most educational, the problem resolves to a technical one, simply finding which is the most educational, the closest, or such. In an effort to find such requirements and principles, he will invoke morality, "logic," social custom, and propriety, the rules of "normal" behavior (especially if he is a psychiatric patient), and so on. In short he will try to figure out what he "should" do.

-David Shapiro, Neurotic Styles

Replies from: MixedNuts
comment by MixedNuts · 2011-06-13T20:47:42.230Z · LW(p) · GW(p)

Please post anything there might be on how to deal with that. I'm exactly like that, and my rules often break down and then I'm unable to decide.

I've known someone else like that. She made rules about food because it made it easier to decide what to eat.

Could you also post the cites on why "obsessive-compulsive"? Neither I nor the other person have an OCD diagnosis or seem to match the criteria. Any OCD LWers want to chip in?

Replies from: MattFisher, Will_Euler
comment by MattFisher · 2011-06-23T14:16:44.201Z · LW(p) · GW(p)

I try to avoid over-optimising on considered principles. I am willing to accept less-than-optimal outcomes based on the criteria I actually consider because those deficits are more often than not compensated by reduced thinking time, reduced anxiety, and unexpected results (eg the movie turning out to be much better or worse than expected).

'Simple Heuristics That Make Us Smart' indicates most decisions are actually made by considering a single course of action, and taking it unless there is some unacceptable problem with it. What really surprised the researchers was that this often does better than linear regression and stacks up respectably against Bayesian reasoning.

So my answer is, "make random selections from the menu until you hit something you're willing to eat." :)

Replies from: MixedNuts
comment by MixedNuts · 2011-06-23T14:44:07.157Z · LW(p) · GW(p)

Once again, the problem isn't "How do I ignore rules and go with my gut?", it's "What do I do when my gut says 'Search me'?". So your answer isn't so much "random until satisficing by intuitive standards", and more like "random". Which is dominated by rules if rules exist, and the current best candidate if they don't.

Replies from: MattFisher
comment by MattFisher · 2011-06-23T15:33:41.568Z · LW(p) · GW(p)

Ah. So if I understand correctly, your intuition on what will satisfice sometimes returns zero information, which certainly happens to me sometimes and I would guess to most people. In that situation, I switch from optimising on the decision as presented, and optimise on (the outcome of the decision) + (the cost of the decision procedure).

In most cases, the variance in utility over the spread of outcomes of the decision is outweighed by the reduced cognitive effort and anxiety in the simplified decision procedure. Plus there's the chance of exposure to an unexpected benefit.

In other words, there may be a choice that is better than the current best candidate (however that was derived), and rules may exist that dominate "random", but it's not worth your time and effort to figure them out.

comment by Will_Euler · 2011-06-14T18:28:07.023Z · LW(p) · GW(p)

This quote was written in 1965 by a psychoanalyst, so I don't even know if they had the same diagnostic criteria for Obsessive-Compulsive Disorder that they do today. He's talking about "styles" of behavior. Based on a little searching, it seems to me that a preoccupation with rules is characteristic of what is called Obsessive-Compulsive Personality Disorder. As is so often the case, there's a broad spectrum from quirky behavior to personality disorder.

What makes it a disorder is if it is interfering with your enjoyment of life. It is irrational to choose according to arbitrary rules when doing so makes you miss out on outcomes that are preferable but require going outside of your rules.

A little searching on the Internet says the treatment for the disorder is talk therapy. It's possible that could work.

I would say first of all you have to recognize when living according to rules is making your life better and when living based on rules is boxing you in. Having rules can make decisions easier, but it can make you miss out on a lot of life. Seek feedback from friends and family members about areas in which you might be too rigid. Make sure you tell them you really want honest feedback. Then take baby steps to break out of routines. Doing so will also build your courage.

Accept that it's OK to make mistakes. Failure is a great source of learning. If you have an attitude that says, "I am going to make mistakes," then you might not feel so much anxiety about making a less-than-optimal choice. (I recommend the book The Pursuit of Perfect by Tal Ben-Shahar. I learned a lot about avoiding perfectionism from that book.)

You might find that something like an improv comedy class makes you more spontaneous and able to see how rules for behavior aren't as fixed as you might think they are. People get by and thrive by doing things totally differently from how you do, and you might like a different way better, if you gave yourself the chance.

Try something that you wouldn't have ever thought you'd do before. See how it doesn't feel that bad. (Again, you might start small: browse through the section of the bookstore where you would normally never be caught dead.)

Be courageous. Be spontaneous. Have fun.

Replies from: MixedNuts, roland
comment by MixedNuts · 2011-06-15T06:36:36.679Z · LW(p) · GW(p)

The problem is not to muster the courage to break rules, it's to decide what to do when you don't have relevant rules.

Replies from: MarkusRamikin, roland
comment by MarkusRamikin · 2011-06-15T10:20:28.096Z · LW(p) · GW(p)

"She made rules about food because it made it easier to decide what to eat" - This actually works for such a person? Interesting, I think a lot of people have the opposite problem. I wish I found it easy to follow my own rules.

Replies from: MixedNuts, NancyLebovitz, chatquitevoit
comment by MixedNuts · 2011-06-17T09:17:18.121Z · LW(p) · GW(p)

The rules were supposed to approximate her actual tastes, but more rigid and outright made up when she was unsure if she liked something. I don't think it would work if she suddenly decided she disliked peanut butter.

Replies from: MarkusRamikin
comment by MarkusRamikin · 2011-06-21T19:42:39.536Z · LW(p) · GW(p)

I see, that makes sense.

Nancy: probably not enough care. But hm, "want to" follow or "feel like" following? Because I may "want to" be conscientious and work hard towards my goals, but I "feel like" slacking off.

comment by NancyLebovitz · 2011-06-15T10:31:42.904Z · LW(p) · GW(p)

Tentative hypothesis: some people start with the intention of making rules they'd want to follow, and others don't. The first set might find themselves with a rule they don't follow, but the second assuredly will.

This goes beyond the temperamental difference between people who find rules a reassuring way of limiting choices and those who find rules an irritant at best.

How much care do you put into crafting your rules?

comment by chatquitevoit · 2011-06-24T17:02:55.488Z · LW(p) · GW(p)

This is a valid attempt to deal with conflicting stimuli from the world - to create standards to which you adhere consciously because you don't trust your intuitions to motivate you rationally in the environment with which you must interact. And really, such attention is partially what it means to be conscious/human - to audit your actions 'from the outside' instead of merely reacting. And with today's bizarre and skewed 'food environment', as it were, this becomes VERY necessary, especially for people with a predilection for analyzing their own behavior even in such supposedly mundane (but really fundamental) things as food consumption.

comment by roland · 2011-06-26T04:50:35.371Z · LW(p) · GW(p)

There seems to be an irrational underlying assumption here "I need rules to decide."

edit: see also my other reply above.

Replies from: MixedNuts, AdeleneDawner
comment by MixedNuts · 2011-06-26T10:21:32.783Z · LW(p) · GW(p)

I seem to have run into a strange inferential distance.

The word "obsessive-compulsive" seems to have suggested the wrong picture. I do not mean rules as impulses to perform stereotyped ritual behaviors.

What I'm trying to describe is a way to handle explicit choices. "Are you coming home this weekend?", "Do I want some chocolate?", "Am I enjoying this movie?". Much (most?) of the time, a simple "yes, good" or "no, bad" gut feeling somehow gets generated with no conscious input. That's a decision.

But some of the time, there's no such gut feeling. Introspection returns "I have no freaking clue what I want.". This is quite distressing, especially when there's pressure to decide immediately.

A known set of rules can thus be useful. "Which movie should I watch?" "ERROR: decision-making system unavailable." "Okay, then let's go with the most educational."

Problems with this approach are absence of relevant rules, bad rules, and inability to access the rule-based decision-making system as well as the emotional reaction-based one.

Someone coming up to me as I agonize and telling me "You don't need rules" is not helping. What do I use instead?

Replies from: Unnamed, roland, roland
comment by Unnamed · 2011-06-27T18:11:22.296Z · LW(p) · GW(p)

I wonder if it would help for you to try to satisfice. You're not trying to choose the best option, you just have some minimum acceptable standard, and you're trying to pick one of the options (any of the options) that meets that standard. You could pick more-or-less randomly, or you could look for any inclination in favor of one of the options and then go with that one.

For instance, if there are 4 movies that you're trying to choose between, and all 4 seem like movies that you're likely to enjoy, then it doesn't really matter which one you pick. You just need to pick one of them. There are a few ways to do this, some more random, some using more general rules or heuristics which can be used for many different kinds of decisions since they apply to the decision-making process, not the particular content.

  • Just pick one of the movies. Don't think or try to come up with reasons, just select one. Maybe your selection will be influenced by preferences of yours that you're not aware of, or maybe it'll just be random. Doesn't matter, you picked one.

  • Decide you're going to pick one, and then wait until something good about one of the options comes to mind and pick that one. It doesn't matter what it is - it could be some good feature that it has, or just a vague feeling.

  • Pick randomly. Label the options 1-4, look at a clock, and take the minutes mod 4.

  • Let someone else decide. If you're going to watch the movie with a friend, tell them "any of these 4, you pick." Even if the decision is only for you (e.g., you're going to watch the movie alone), you could still ask someone else to pick (or give a recommendation) if they're around when you're trying to decide. Just ask "what do you think?"

  • Try to guess what the other person would prefer. If you're going to watch the movie with a friend, ask yourself which of the 4 he would like. Try to do this quickly, with pattern-matching and associations ("this one seems like his kind of movie"), not reasoning. As soon as you have an inclination towards one of the options, go with it.

  • Try to guess what you would prefer, treating yourself as if you were another person. Ask "which of these would MixedNuts like?" and use pattern-matching and association.

  • Try to predict your decision. "If MixedNuts had to choose between these movies, I bet he'd go with that one." (Or, based on past behavior in terms of going home for the weekend, and what's scheduled for this weekend, I bet that he will go home this weekend.) Then just do that.

  • Variety-seeking. See which of the options is something that you haven't done much of in a while. e.g., I haven't seen many comedies lately, so I'll pick the comedy movie. (Or, I've been going home a lot lately, so I won't go home this weekend.)
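A couple of these heuristics (the "just pick one" and "minutes mod 4" tricks) are mechanical enough to sketch in code. A minimal illustration, assuming Python; the `pick_option` helper is hypothetical, not something from the thread:

```python
import random

def pick_option(options, minute=None):
    """Satisficing pick: any acceptable option will do.

    If a clock minute is given, use the "minutes mod N" trick from the
    list above; otherwise just pick uniformly at random.
    """
    if minute is not None:
        return options[minute % len(options)]
    return random.choice(options)

movies = ["comedy", "drama", "documentary", "thriller"]
# Clock reads 10:17 -> 17 % 4 == 1, so the second movie wins.
print(pick_option(movies, minute=17))
```

The point of both variants is the same: the procedure removes the decision from the (unavailable) gut-feeling system entirely.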

Replies from: MixedNuts
comment by MixedNuts · 2011-06-27T19:08:46.221Z · LW(p) · GW(p)

Ooo! Good advice is good! Thanks!

Pick one

Pick at random

I do that for decisions that don't matter much (e.g. picking a movie). It's more problematic when I know I will regret picking the wrong one badly.

Wait for a good feeling

Good idea, thanks. Does this work in reverse, with a bad feeling about all other options?

Ask someone else

I do that as much as possible, but it fails more often than not. Polite bastards.

Ask a model of someone else/myself

Predict myself

Seek variety

All good ideas, thanks a lot! (Though variety-seeking contradicts the others, and conflicting rules are Bad.)

Replies from: Unnamed, Strange7
comment by Unnamed · 2011-06-27T19:54:19.473Z · LW(p) · GW(p)

Does this work in reverse, with a bad feeling about all other options?

I don't know. If you're feeling stuck, and not particularly motivated to do anything, I generally think that it's good to try to find and feed positive motivations to do something. Avoidance motivations (for rejecting bad options) could just keep you stuck. And we want to keep things simple - one of the keys to all of these procedures is that you just need one option to stand out (positively) for whatever reason - you don't need to go through every option to rule all but one of them out. But if there are only 2 options and you do get a clear feeling against one of them, then maybe it is okay to base your decision on that - it's equally simple (with only two options) and it does get you through this decision. So I wouldn't aim to find a bad feeling, but if that was the first feeling that came up then you could go with it.

Ask someone else

I do that as much as possible, but it fails more often that not. Polite bastards.

You could try alternative ways of getting other people involved. e.g., have them list the pros and cons of each option, and you listen and see if one thing jumps out at you. Or you could describe the options to them, with instructions for them to try to guess which one you'd pick. Or maybe just talking about the decision can help you clarify which option you prefer.

Though variety-seeking contradicts the others, and conflicting rules are Bad.

True. You may want to forget about that one if it could interfere with the others. There are ways to integrate it with the others, if you determine ahead of time when it applies. For instance, there may be certain domains where you want variety/balance. Or you may sometimes feel like you're in a rut and want to mix things up, and then you can decide that starting now I'm going to make variety-seeking decisions (until I no longer feel this way). The important thing is that when you're faced with a particular decision, you don't need to decide whether or not to seek variety because you have already determined that.

But variety-seeking is more of an advanced technique which you may want to skip for now. It's probably best to pick one or two of the heuristics which seem like they could work for you and try them out. Over time you can expand your repertoire.

comment by Strange7 · 2011-07-02T16:16:30.145Z · LW(p) · GW(p)

Might be helpful to set up meta-rules. In any given situation, a hierarchy of which rules apply, or which to fall back on if the main ones are inconclusive. For example, variety-seeking could be one of the low-ranked options, seldom used but still significant for its tendency to shake up the results of other rules.

comment by roland · 2011-06-26T23:43:03.561Z · LW(p) · GW(p)

Do you know the book "How we decide" by Jonah Lehrer? In the first chapter, section 3, there is a case of a patient who lost his orbitofrontal cortex (OFC) due to cancer, and suddenly he couldn't make decisions anymore because all emotions were cut off and all choices suddenly became equal in value. So sometimes there are underlying neurological problems that can cause this.

Quote:

I suggested two alternative dates, both in the coming month and just a few days apart from each other. The patient pulled out his appointment book and began consulting the calendar. The behavior that ensued, which was witnessed by several investigators, was remarkable. For the better part of a half hour, the patient enumerated reasons for and against each of the two dates: previous engagements, proximity to other engagements, possible meteorological conditions, virtually anything that one could reasonably think about concerning a simple date... He was now walking us through a tiresome cost-benefit analysis, an endless outlining and fruitless comparison of options and possible consequences. It took enormous discipline to listen to all of this without pounding on the table and telling him to stop.

Replies from: MixedNuts
comment by MixedNuts · 2011-06-27T06:37:12.155Z · LW(p) · GW(p)

Yeah, that's what happens. For me it's intermittent, but it does remove all choice-related emotions, so all choices become hard at the same time. As Lehrer says, an explicit cost-benefit analysis is way too long - that's what simple rules are for.

It took enormous discipline to listen to all of this without pounding on the table and telling him to stop.

Oh yeah, cry me a fucking river. This guy has to do that for every single decision, but no, go ahead, whine about having to listen to him make just one.

I'll check the book out, thanks.

Replies from: Unnamed
comment by Unnamed · 2011-06-27T18:18:29.174Z · LW(p) · GW(p)

For me it's intermittent, but it does remove all choice-related emotions, so all choices become hard at the same time.

This sounds like a symptom of something. Have you noticed whether any other symptoms tend to co-occur with it (especially other things related to mood or emotions)? Have you noticed any other patterns in these episodes (how often it happens, how long it lasts, whether it has any particular triggers, and whether anything in particular tends to trigger its end)? Have you mentioned it to a medical/psychological professional?

Replies from: MixedNuts
comment by MixedNuts · 2011-06-27T20:18:22.536Z · LW(p) · GW(p)

This sounds like a symptom of something.

Agreed.

Have you noticed whether any other symptoms tend to co-occur with it (especially other things related to mood or emotions)?

Choice numbness is a special case of emotional numbness, which is a special case of disconnecting from the world. It comes with akrasia and not-akrasia (a thingy that prevents me from doing stuff I explicitly want to, but doesn't react when I throw willpower at it - like running into an invisible wall in a video game). My field of attention gets restricted (I can only focus on one thing and not very much, objects in the center of my visual field become more interesting, I bump into walls), and my thoughts become slow and confused. I crumble completely under any kind of pressure, even more than usual. I have to monitor motor control more finely (like moving a leg with my hand to remind the motor system how to move it, or focusing on a particular point and willing myself there). If there's emotional numbness, my emotions will feel sort of like detached objects.

Have you noticed any other patterns in these episodes (how often it happens, how long it lasts, whether it has any particular triggers, and whether anything in particular tends to trigger its end)?

It happens when I'm dehydrated, tired, lacking balanced amounts of sunshine, at the wrong level of socialization, or apparently at random. The general state tends to last; curing it usually involves a complete reboot (change of context, rest, then sleep), but it can go in and out of choice numbness. Pressure to answer cements it.

Have you mentioned it to a medical/psychological professional?

Nope. I've had horrible luck with psychiatrists and therapists so far, and I expect anything doctor-findable would have been found by now.

Thanks!

Replies from: Unnamed
comment by Unnamed · 2011-06-27T21:58:34.031Z · LW(p) · GW(p)

That suggests a couple more strategies that you could use for deciding-while-numb.

One is to choose the option that is most likely to improve your mental state (or at least avoid making it worse), e.g. the option that seems most likely to increase your energy rather than draining you.

The other is to try to make your decisions while non-numb, as much as possible (especially for more important decisions). You can do this in different ways, depending on the context. When you're numb you could put off making a decision until later (when you're non-numb), or you could guess what you would choose if you were non-numb, or you could try to remember which option you were leaning towards when you had been non-numb and choose that one. When you're non-numb, if you know of any upcoming decisions you could make the decisions right then (rather than waiting and potentially becoming numb), or you could at least note which option you're leaning towards (so that you can use that information if you happen to become numb).

I don't know the details of your experiences with doctors & therapists, but this really does sound like the sort of thing that they should be able to help with. Especially if you're in this state often, figuring out if you can prevent it should be a higher priority than figuring out how to cope with it. Maybe you just haven't taken the right test yet, or you haven't found the right doctor or therapist. If you've had horrible luck so far, does that mean that regression to the mean will be working in your favor as long as you keep trying?

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2011-07-02T00:36:23.076Z · LW(p) · GW(p)

I agree with nearly everything you said there and think it's good advice, so I upvoted your comment. However, I care about statistics and you made a common error here:

If you've had horrible luck so far, does that mean that regression to the mean will be working in your favor as long as you keep trying?

Regression to the mean doesn't work that way. The fact that a random event came out one way several times in a row doesn't make it more likely that it will come out the other way the next time. For instance, if you flip a fair coin and it comes up heads ten times in a row, the probability that it will come up tails on the next flip is still no greater than 50%. The only way that having found multiple bad psychiatrists appreciably improves MixedNuts's chances of finding a good one next time is if he starts crossing off a large proportion of the bad psychiatrists in his area, leaving the good ones to dominate the remaining population. This effect isn't regression to the mean. The error you made is known as the Gambler's Fallacy.
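The independence claim is easy to check numerically. A quick sketch (my own construction, assuming Python; a streak of five heads is used instead of ten so the simulation finds enough streaks): among simulated sequences that begin with a run of heads, the next flip still comes up tails about half the time.

```python
import random

def tails_after_streak(streak=5, trials=200_000, seed=0):
    """Estimate P(next flip is tails | previous `streak` flips were heads)."""
    rng = random.Random(seed)
    hits = tails = 0
    for _ in range(trials):
        # True = heads, False = tails; simulate streak+1 independent flips.
        flips = [rng.random() < 0.5 for _ in range(streak + 1)]
        if all(flips[:streak]):          # the heads streak occurred
            hits += 1
            tails += not flips[streak]   # ...and the next flip was tails
    return tails / hits

# Independence: the estimate hovers around 0.5 regardless of the streak.
print(tails_after_streak())
```

Conditioning on the streak changes nothing, which is exactly the point: the coin has no memory.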

Replies from: Unnamed
comment by Unnamed · 2011-07-02T02:01:03.992Z · LW(p) · GW(p)

I'm familiar with the gambler's fallacy. I wasn't being very clear about what I had in mind when I referenced regression to the mean, so here's my model. If someone has been to a few good therapists (better than the typical therapist) and they haven't been able to help, then "find a better therapist" would be tough advice to follow. But if they've been to a few lousy therapists (worse than the typical therapist) then finding a better therapist should be doable, since they'll have a good shot at doing that even if they just pick a new one at random.

Alternatively, if they've been to a few good therapists but haven't found the right one, that's evidence that "the right therapist for me" is a small category, but if they've been to a few bad therapists and haven't found the right one, "the right therapist for me" could still include a fairly large subset of therapists.

Edit: The more general point is that being unlucky is a reason for optimism, since it means that things are likely to get better just from your luck returning to normal.
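This model can also be simulated directly. A rough sketch (my construction, not Unnamed's; it assumes Python and therapist quality drawn from a standard normal distribution): if the first therapist was unusually bad, a fresh independent draw beats them most of the time.

```python
import random

def next_beats_bad_first(cutoff=-1.0, trials=200_000, seed=0):
    """P(new random therapist > old one | old one's quality was below `cutoff`)."""
    rng = random.Random(seed)
    better = total = 0
    for _ in range(trials):
        first = rng.gauss(0, 1)
        if first < cutoff:                    # the "horrible luck" case
            total += 1
            better += rng.gauss(0, 1) > first  # fresh draw is an improvement
    return better / total

# Bad luck so far means a random retry is very likely an improvement.
print(next_beats_bad_first())
```

Note that this is not the gambler's fallacy: the new draw isn't more likely to be good because of the past, it's just very likely to be better than a draw we already know was bad.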

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2011-07-02T15:24:51.923Z · LW(p) · GW(p)

That makes sense. Thanks for clearing it up.

comment by roland · 2011-06-26T17:27:45.351Z · LW(p) · GW(p)

A known set of rules can thus be useful. "Which movie should I watch?" "ERROR: decision-making system unavailable." "Okay, then let's go with the most educational."

Or maybe if you don't feel like going to any movie, why not do something else? Is there something that you would like to do? I don't watch movies either, but I love reading good books. I suppose when you read LW and write comments here you don't apply a decision procedure in order to do it, but instead you simply enjoy it somehow. Is that correct?

Someone coming up to me as I agonize and telling me "You don't need rules" is not helping. What do I use instead?

Sorry, I think the internet is not the right medium, personal conversation with a knowledgeable individual would probably help more.

Replies from: MixedNuts
comment by MixedNuts · 2011-06-27T06:58:13.100Z · LW(p) · GW(p)

It's intermittent, but covers all choices when it happens.

Or maybe if you don't feel like going to any movie why not do something else? Is there something that you would like to do?

Yeah, go to sleep and never have to make a decision again. Oddly enough, this is rarely available.

You seem to be confusing a lack of emotional reaction with a neutral reaction. If I can't choose a movie, it doesn't mean I'm reluctant to see one, or that I won't enjoy it.

"Do something else, then" is rarely applicable. You aren't going to cancel an appointment because you can't decide on a date, or answer "Do you want to go to the park?" with "I don't know. Let's talk about the history of cheesemaking."

comment by AdeleneDawner · 2011-06-26T10:16:49.298Z · LW(p) · GW(p)

For a definition of 'rules' that includes heuristics, and with the further qualification of either 'quickly' or 'without putting a lot of effort into every single decision', that assumption seems pretty accurate to me - it's more that most people are more comfortable making rules of thumb for themselves, or doing semi-arbitrary things in instances where their rules don't provide the answer (and then usually making a new rule of thumb out of the arbitrary decision, if it turns out well). Am I missing something?

Replies from: roland
comment by roland · 2011-06-26T17:20:16.706Z · LW(p) · GW(p)

Sorry, I meant "verbalizable/conscious rules". When I type these words on the keyboard I don't use any conscious rules to decide how fast to move each finger. The problem is when you have to apply rules/perform conscious rituals all the time, even for decisions that shouldn't matter that much.

comment by roland · 2011-06-26T04:49:32.030Z · LW(p) · GW(p)

A little searching on the Internet says the treatment for the disorder is talk therapy. It's possible that could work.

Talk therapy doesn't work; see "House of Cards" by Robyn Dawes. Instead use CBT (cognitive behavioral therapy).

The cognitive part is understanding that your rituals/behavior are irrational and the behavioral part is actually acting on that understanding against the subjective pull to do otherwise. Good advice on the latter can be found in the link below, with video:

http://www.ocduk.org/2/foursteps.htm

comment by Thomas · 2011-06-04T10:59:24.824Z · LW(p) · GW(p)

We sleep safely at night because rough men stand ready to visit violence on those who would harm us.

  • George Orwell / saw on Discovery Channel
Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2011-06-04T20:21:10.696Z · LW(p) · GW(p)

From wikiquote

"People sleep peaceably in their beds at night only because rough men stand ready to do violence on their behalf."

Alternative: "We sleep safely at night because rough men stand ready to visit violence on those who would harm us."

In his 1945 "Notes on Nationalism", Orwell claimed that the statement, "Those who ‘abjure’ violence can only do so because others are committing violence on their behalf" was a "grossly obvious" fact.

Notes: allegedly said by George Orwell although there is no evidence that Orwell ever wrote or uttered either of these versions of this idea. They do bear some similarity to comments made in an essay that Orwell wrote on Rudyard Kipling, when quoting from one of his poems. Orwell did write, in his essay on Kipling, that the latter's "grasp of function, of who protects whom, is very sound. He sees clearly that men can only be highly civilized while other men, inevitably less civilized, are there to guard and feed them." (1942)

comment by MichaelGR · 2011-06-02T18:35:34.103Z · LW(p) · GW(p)

The way to maximize outcome is to concentrate on the process.

-Seth Klarman, letter to shareholders

comment by NancyLebovitz · 2011-06-09T18:38:57.875Z · LW(p) · GW(p)

"There is a principle which is a bar against all information, which is proof against all arguments and which cannot fail to keep a man in everlasting ignorance — that principle is contempt prior to investigation."

attribution unknown

I may have posted a little too fast-- I picked up the quote from a site which says it's a misquotation, and apt to be used to support dubious ideas.

On the other hand, contempt can come into play too quickly and reflexively, so I'm not deleting the quote.

Replies from: MixedNuts
comment by MixedNuts · 2011-06-11T21:18:09.742Z · LW(p) · GW(p)

Is this the absurdity heuristic, or a superset? If the latter, what else is in the set? Maybe moral absurdity, and affiliation with outgroups (in particular, first encountering the idea during a heated debate or from someone lower-status than you).

comment by Patrick · 2011-06-04T12:43:39.489Z · LW(p) · GW(p)

I fear not the man who has practiced 10,000 kicks once, but I fear the man who has practiced one kick 10,000 times.

Bruce Lee

Replies from: MixedNuts
comment by MixedNuts · 2011-06-06T13:39:24.893Z · LW(p) · GW(p)

Myth busted.

Replies from: MarkusRamikin
comment by MarkusRamikin · 2011-06-15T15:47:25.010Z · LW(p) · GW(p)

Noooot the same thing.

comment by homunq · 2011-06-28T23:16:17.635Z · LW(p) · GW(p)

I am quite prepared to be told, with regard to the cases I have here proposed, as I have already been told with regard to others, "Oh, that is an extreme case, it would never really happen!" Now I have observed that the answer is always given instantly, with perfect confidence, and without examination of the details of the proposed case. It must therefore rest on some general principle: the mental process being probably something like this — 'I have formed a theory. This case contradicts my theory. Therefore this is an extreme case, and would never occur in practice'.

-Charles Dodgson (Lewis Carroll), relating to the possibility of strategically-induced Condorcet cycles in elections.

comment by NancyLebovitz · 2011-06-26T13:26:06.039Z · LW(p) · GW(p)

"Everyone thinks himself the master pattern of human nature; and by this, as on a touchstone, he tests all others. Behavior that does not square with his is false and artificial. What brutish stupidity!” -- Montaigne

Replies from: Document
comment by Document · 2011-07-10T21:34:13.556Z · LW(p) · GW(p)

"Man, what a pretentious quote. I'm filing it under typical mind fallac-- oh, yeah."

comment by chatquitevoit · 2011-06-24T17:29:30.052Z · LW(p) · GW(p)

"Freedom is the freedom to say that two plus two make four. If that is granted, all else follows."

  • 1984, George Orwell (although I really shouldn't have to attribute this one)

Probably my favorite statement on rationality, it's so practical for launching off into every other sphere of thought - politics, ethics, theology, maths/physics, and, well, all else that follows.

Replies from: MichaelHoward
comment by MichaelHoward · 2012-04-13T06:42:51.837Z · LW(p) · GW(p)

+1. Pyongyang just admitted to their own people that their rocket launch failed. Could this be a sign of the start of something significant?

comment by dvasya · 2011-06-01T17:42:14.873Z · LW(p) · GW(p)

Future possibilities will often resemble today's fiction, just as robots, spaceships, and computers resemble yesterday's fiction. How could it be otherwise? Dramatic new technologies sound like science fiction because science fiction authors, despite their frequent fantasies, aren't blind and have a professional interest in the area.

...

This may seem too good to be true, but nature (as usual) has not set her limits based on human feelings.

K. Eric Drexler, Engines of Creation Chapter 6

Replies from: gjm
comment by gjm · 2011-06-01T17:54:54.905Z · LW(p) · GW(p)

When I read the first sentence quoted above, I assumed it was intended ironically.

Robots, spaceships and computers resemble yesterday's fiction? Hardly, except in so far as their fictional counterparts resemble the robots, spaceships and computers already in existence when the fiction was written. (And, to a lesser extent, in so far as people making new robots, spaceships and computers are inspired by the science fiction they've read.)

Replies from: fubarobfusco, advancedatheist
comment by fubarobfusco · 2011-06-02T02:27:12.197Z · LW(p) · GW(p)

Heinlein's spaceships relied on human beings doing orbital calculations on slide-rules and calling out orders to one another to direct the ship — "Brennschluss!" — to avoid disaster. Today, there are serious projects to move ground automobiles out of direct human control as a safety measure.

Asimov's robots, and the renegade computers on Star Trek, dealt with conflicting evidence so poorly that they could be permanently broken by receiving malicious data. Today's software engineers would call that a denial-of-service attack or "query of death", and fix it.

The real world has much higher standards for safety and reliability than fiction does!

Replies from: orthonormal
comment by orthonormal · 2011-06-02T22:14:07.153Z · LW(p) · GW(p)

Or, to look at it less derisively, fiction needs its computer bugs and exploits to have simple narrative explanations.

comment by advancedatheist · 2011-06-01T20:29:02.316Z · LW(p) · GW(p)

Murray Leinster, who had a few patents to his name, anticipated the web and increasingly capable search engines in his visionary story, published in 1946, titled "A Logic Named Joe."

Replies from: Pavitra
comment by Pavitra · 2011-06-01T22:42:10.780Z · LW(p) · GW(p)

You have to cherry-pick examples to make seeming correlations like that work. If, say, a particular author had several such coincidences, then that might be different, but as far as I can tell, for the most part science fiction predicts science in about the same way that fortune cookies predict lottery numbers.

Replies from: taryneast
comment by taryneast · 2011-06-02T12:44:13.712Z · LW(p) · GW(p)

If, say, a particular author had several such coincidences

Jules Verne

Replies from: Pavitra, taryneast
comment by Pavitra · 2011-06-03T00:41:23.490Z · LW(p) · GW(p)

Verne is a fairly strong example. He was reasonably popular in his time, so he didn't become famous only because of correlation-in-hindsight. We might ask whether he's an outlier or just the tail of the bell curve, but I strongly suspect that what actually happened was that later engineers were consciously influenced by his fictional designs, much like with William Gibson and the modern Internet.

This suggests that fiction-future correlation is largely determined by technical plausibility, which in turn suggests that we may be able to predict in advance which science-fiction predictions are most likely to come true. However, it occurs to me that we do a lot of this last already on this website (cryonics, uploading, nanotech, AI, computronium, ...) so I'm not sure if this quite counts as an "advance" strength of the theory.

(Other predicted technologies I don't remember seeing so much around here: Dyson spheres, space elevators, ... hm. Not that many, actually. Augmented reality and bionic implants are likely only transitional, but will probably have at least a few years of massive popularity at some point.)

Replies from: taryneast, dvasya
comment by taryneast · 2011-06-04T08:07:30.389Z · LW(p) · GW(p)

Dyson spheres are unlikely. They use too much physical matter to create. Ringworlds are slightly more likely.

Space elevators are awesome though. We should do that :)

Still waiting on the ability to grow nanotubes long enough, though... we're getting there. We can build them long enough to turn into thread - but proper long-filament nanotubes are the only thing (that we know of so far) that will be strong enough for the elevator ribbon.

Replies from: dvasya, chatquitevoit
comment by dvasya · 2011-06-09T17:10:38.309Z · LW(p) · GW(p)

'Dyson sphere' is a very broad term encompassing several distinct types of design, including very light ones.

Space elevator is awesome, but there exist much more clever alternative designs that have substantially lower requirements for material strength, as well as geographical positioning - this is also a huge issue with the original space elevator design. It is a beautiful idea, but that doesn't mean we should cling to it and ignore all other proposals :)

Replies from: taryneast
comment by taryneast · 2011-06-10T08:17:55.514Z · LW(p) · GW(p)

Any links to the research? I'd be interested in having a look :)

Replies from: dvasya
comment by dvasya · 2011-06-11T03:12:13.091Z · LW(p) · GW(p)

I started assembling links but then realized that Wikipedia is a good starting point; it provides a nice summary of all the most notable designs: tethers, bolas, orbital rings, pneumatic towers, the Lofstrom Loop... Each has its own drawbacks, but the important thing is that they do not require nonexistent (even if theoretically possible) materials.

Clever ways to get to space are often covered at Next Big Future, including the author's own nuclear cannon proposal - this one actually literally follows Jules Verne :-)

Replies from: taryneast
comment by taryneast · 2011-06-12T07:55:16.482Z · LW(p) · GW(p)

Cool, thanks :)

comment by chatquitevoit · 2011-06-24T17:24:42.343Z · LW(p) · GW(p)

The most feasible iteration of a Dyson sphere would probably be the least dense, which would have great influence on the ways they could be used, and that makes them less likely because they are less commercially useful. Still, it could happen.

Replies from: taryneast
comment by taryneast · 2011-06-27T09:56:16.890Z · LW(p) · GW(p)

Ok - I hadn't seen any info on that kind. Yes I agree, it could happen - though I suspect that by the time we get to the stage where we could - we'll probably have invented something even cooler/useful :)

comment by dvasya · 2011-06-09T16:56:47.928Z · LW(p) · GW(p)

Actually, +1 for William Gibson!

comment by taryneast · 2011-06-02T12:48:03.772Z · LW(p) · GW(p)

Ok... and just to head off the replies... I know there's always likely to be one in such a large field, and he also got tons of stuff wrong (centre of the earth et al) but still... as an example of one author that consistently got a lot of stuff right (mainly through thoroughly understanding science and extrapolating from there) - he's brilliant.

Replies from: dvasya
comment by dvasya · 2011-06-02T15:42:58.141Z · LW(p) · GW(p)

Arthur C. Clarke?

comment by soreff · 2011-06-17T00:28:39.103Z · LW(p) · GW(p)

"Does it work?" is actually a much more important question than "Should it work?"

  • Druin Burch, "Blood Sports: Does a popular performance-enhancing subterfuge actually work" in Natural History 6/11/2011
comment by TrE · 2011-06-03T06:40:00.890Z · LW(p) · GW(p)

Teach a man to reason, and he'll think for a lifetime.

-- Phil Plait

Replies from: lessdazed
comment by lessdazed · 2011-09-02T13:27:46.067Z · LW(p) · GW(p)

Probably not.

comment by dvasya · 2011-06-01T17:28:39.106Z · LW(p) · GW(p)

Sorry for length, but this is a nice sketch of the role of rationality in science :)

A glossary for research reports

Scientific term (Actual meaning)

It has long been known that. . . . (I haven’t bothered to look up the original reference)

. . . of great theoretical and practical importance (. . . interesting to me)

While it has not been possible to provide definite answers to these questions . . . (The experiments didn’t work out, but I figured I could at least get a publication out of it)

The W-Pb system was chosen as especially suitable to show the predicted behaviour. . . . (The fellow in the next lab had some already made up)

High-purity || Very high purity || Extremely high purity || Super-purity || Spectroscopically pure . . . (Composition unknown except for the exaggerated claims of the supplier)

A fiducial reference line . . . (A scratch)

Three of the samples were chosen for detailed study . . . (The results on the others didn’t make sense and were ignored)

. . . accidentally strained during mounting (. . . dropped on the floor)

. . . handled with extreme care throughout the experiments (. . . not dropped on the floor)

Typical results are shown . . . (The best results are shown)

Although some detail has been lost in reproduction, it is clear from the original micrograph that . . . (It is impossible to tell from the micrograph)

Presumably at longer times . . . (I didn’t take time to find out)

The agreement with the predicted curve is excellent (fair) || good (poor) || satisfactory (doubtful) || fair (imaginary) || . . as good as could be expected (non-existent)

These results will be reported at a later date (I might possibly get around to this sometime)

The most reliable values are those of Jones (He was a student of mine)

It is suggested that || It is believed that || It may be that . . . (I think)

It is generally believed that . . . (A couple of other guys think so too)

It might be argued that . . . (I have such a good answer to this objection that I shall now raise it)

It is clear that much additional work will be required before a complete understanding . . . (I don’t understand it)

Unfortunately, a quantitative theory to account for these effects has not been formulated (Neither does anybody else)

Correct within an order of magnitude (Wrong)

It is to be hoped that this work will stimulate further work in the field (This paper isn’t very good, but neither are any of the others in this miserable subject)

Thanks are due to Joe Glotz for assistance with the experiments and to John Doe for valuable discussions (Glotz did the work and Doe explained what it meant)

C. D. Graham, Jr., Metal. Progress 71, 75 (1957) (actual source)

Replies from: MixedNuts, Apprentice
comment by MixedNuts · 2011-06-01T20:47:06.250Z · LW(p) · GW(p)

Funny, but I don't think it is the criticism of science it seems to be. Some items just point out that papers are formal, like

It is suggested that || It is believed that || It may be that . . . (I think)

Yeah, that's what it means. What's your point? (Well, it is useful at face value for people who don't understand formal language, but it's not trying to be.)

Others look like criticism but aren't, like

. . . accidentally strained during mounting (. . . dropped on the floor)

Yes, it's an amusing way of phrasing it, but there's nothing wrong with the fact or with the phrasing - the meaning gets across!

Some do show scientists obfuscating problems, like

Typical results are shown . . . (The best results are shown)

but none of them are new. It has long been known that scientists tend to ignore negative results and the like. The most reliable values are those of Ben Goldacre.

Also,

Correct within an order of magnitude (Wrong)

Is just plain correct within an order of magnitude. If I compute the mass of the sun from the weight of a rock in my hands and the shadows of two sticks, being correct within an order of magnitude is incredibly precise.

Replies from: dvasya
comment by dvasya · 2011-06-02T15:44:58.721Z · LW(p) · GW(p)

I don't think this was intended as a criticism of science... ;)

Replies from: MixedNuts
comment by MixedNuts · 2011-06-06T06:42:41.532Z · LW(p) · GW(p)

Small-s science the process that's in fact implemented, not big-S Science the ideal. Though admittedly formality and obfuscation in journal papers isn't a necessary part of current science (as opposed to publish-or-perish in general).

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-12T18:52:24.061Z · LW(p) · GW(p)

Or perhaps to make fun of scientists as, for instance, people who drop things on the floor.

comment by Apprentice · 2011-06-03T14:07:41.676Z · LW(p) · GW(p)

It might be argued that . . . (I have such a good answer to this objection that I shall now raise it)

My favorite - I do this all the time.

It is clear that much additional work will be required before a complete understanding . . . (I don’t understand it)

Yeah, this one's familiar too. There's a long-ass section in my thesis that basically ends with this. So much data - so little sense.

comment by [deleted] · 2011-06-07T12:49:07.932Z · LW(p) · GW(p)

Math is cumulative. Even Wiles and Perelman had to stand on the lemma-encrusted shoulders of giants.

Scott Aaronson, "Ten Signs a Claimed Mathematical Breakthrough is Wrong", which is worth reading in its own right.

comment by RobertLumley · 2011-06-02T00:19:11.656Z · LW(p) · GW(p)

I'm new to LW (Well, I've been reading Eliezer's posts in order, and am somewhere in 2008 right now, but I haven't read many of the recent posts) so these may have been posted before. But quote collecting is a hobby of mine and I couldn't pass it up.

"There are only two ways to live your life. One is as though nothing is a miracle. The other is as though everything is a miracle." – Albert Einstein

Replies from: Jayson_Virissimo, NancyLebovitz
comment by Jayson_Virissimo · 2011-06-02T00:56:14.235Z · LW(p) · GW(p)

"There are only two ways to live your life. One is as though nothing is a miracle. The other is as though everything is a miracle."

I very much dislike Einstein quotes that have nothing to do with physics or mathematics. You don't have to be an Einstein to know a false dichotomy when you see one. What about living your life as if some things are miracles and some things aren't like most people who have ever lived? Surely, if most people have done it, then it is possible.

Also, welcome to Less Wrong.

Replies from: RobertLumley
comment by RobertLumley · 2011-06-02T01:07:21.990Z · LW(p) · GW(p)

Well for what it's worth, I don't think he means it literally. Or at least so exactly. My interpretation is that he is saying that you must accept a rational basis and explanation for everything, or believe that nothing can be explained - you must accept that the laws of physics apply to everyone and everything, and that there are no mysterious phenomena, or you must deny the laws of physics and believe everything is mystical.

And thanks, it's a great blog. I've learned so much reading Eliezer's work. Well, perhaps learned isn't the best word. Realized may be more appropriate.

Replies from: Normal_Anomaly, Desrtopa, summerstay
comment by Normal_Anomaly · 2011-06-03T00:41:43.689Z · LW(p) · GW(p)

Thank you for laying out that interpretation. I thought for years (perhaps because of the first context I saw it in) that it presented a choice between seeing the beauty in everything or not seeing it anywhere. Your interpretation makes much more sense.

comment by Desrtopa · 2011-06-02T15:01:04.050Z · LW(p) · GW(p)

Many, perhaps most, people appear to believe in separate magisteria of ordinary, explainable things, and unassailable supernatural mysteries.

Replies from: RobertLumley
comment by RobertLumley · 2011-06-02T22:14:20.794Z · LW(p) · GW(p)

But that doesn't make it rational to live that way...

Replies from: Desrtopa
comment by Desrtopa · 2011-06-02T23:07:58.018Z · LW(p) · GW(p)

True, but he didn't say there were only two rational ways of looking at the world.

I don't think the interpretation you gave is what he meant, anyway. Based on his writings about his own religious beliefs, Einstein would almost certainly have categorized himself as being one who saw everything as miraculous. Just because we accept that something is real and follows the same rules as all other known real things doesn't mean we can't have a sense of wonder over it.

comment by summerstay · 2011-06-15T22:26:02.238Z · LW(p) · GW(p)

I think he's saying that there are only two ways to live consistent with the world as it is, and they are identical except that the second includes the sense of awe or wonder. It's a miracle (a wonder, unexplained) that anything exists at all. Religion that believes only some things are miracles is not either of the ways he supports.

comment by NancyLebovitz · 2011-06-06T13:20:41.087Z · LW(p) · GW(p)

You can check on whether quotes have been posted already by using search for the site.

comment by Morendil · 2011-09-02T11:00:00.086Z · LW(p) · GW(p)

It’s not exactly a 'he said she said' argument. It’s a 'top experts on the subject predict significant methane emissions from melting permafrost, but some guy on my blog says they must be wrong' argument.

-- John Baez on Melting Permafrost

comment by [deleted] · 2011-07-02T06:36:57.016Z · LW(p) · GW(p)

“It seems that those who legislate and administer and write about social policy can tolerate any increase in actual suffering so long as the system does not explicitly permit it.”

-Charles Murray

Replies from: Nornagest
comment by Nornagest · 2011-07-02T07:47:46.007Z · LW(p) · GW(p)

This reads as a little applause-lighty for my taste, to be honest. It's really easy to claim that the arbiters of social policy are blind to actual suffering, and not much harder to spin that into an appeal for your particular ideology, which by virtue of its construction or unusual purity or definition of "actual suffering" of course doesn't have these problems.

If a quote on policy would be equally at home heading a libertarian or a socialist or an anarcho-primitivist blog, does it really constrain our anticipations about policy to any meaningful extent?

Replies from: None, gjm
comment by [deleted] · 2011-07-02T14:50:47.828Z · LW(p) · GW(p)

If a quote on policy would be equally at home heading a libertarian or a socialist or an anarcho-primitivist blog, does it really constrain our anticipations about policy to any meaningful extent?

I read it as cautioning us to resist the temptation to unquestioningly accept nice sounding policy as good policy.

Also, any quotes that couldn't be read as potentially applicable by a large swath of the political spectrum might trigger blue-green tribalism feelings and kind of defeat the spirit of the no-mind-killers rule.

comment by gjm · 2011-09-02T10:46:33.233Z · LW(p) · GW(p)

The point isn't to constrain our anticipations about policy; it's to constrain our anticipations about policy-makers. To get actual policy anticipation-control, you need to apply it in a specific context where you know more about the sort of policy the people in question would favour if they (openly) didn't care about actual suffering.

comment by gwern · 2011-06-14T01:51:08.573Z · LW(p) · GW(p)
"...I thank my fortune for it,
My ventures are not in one bottom trusted,
Nor to one place; nor is my whole estate
Upon the fortune of this present year:
Therefore my merchandise makes me not sad."

--Antonio, The Merchant of Venice, Act 1 Scene 1. I have found this quote coming to mind recently apropos the recent Bitcoin price swings.

comment by Owen_Richardson · 2011-06-10T23:37:49.454Z · LW(p) · GW(p)

"Ahh, there's no such thing as mysterious."

~Strong Bad, from sbemail 140 (Probably not originally intended in a rationalist sense.)

comment by RobertLumley · 2011-06-02T00:20:54.881Z · LW(p) · GW(p)

“Life is a tragedy for those who feel, and a comedy for those who think.” – Jean de la Bruyère

Replies from: chatquitevoit
comment by chatquitevoit · 2011-06-24T17:18:09.921Z · LW(p) · GW(p)

And what does this make it for those of us who do both?

Replies from: hamnox
comment by hamnox · 2011-07-02T16:05:37.734Z · LW(p) · GW(p)

It's a tragicomedy, of course.

Replies from: MixedNuts
comment by MixedNuts · 2011-07-02T16:08:50.911Z · LW(p) · GW(p)

And tragicomedies normally have happy endings. Should this raise our probability estimate of a positive Singularity? (No.)

comment by Richard_Kennaway · 2011-06-01T11:16:43.555Z · LW(p) · GW(p)

One should not pursue goals that are easily achieved. One must develop an instinct for what one can just barely achieve through one's greatest efforts.

Unsourced; attributed to Albert Einstein.

Replies from: wedrifid, cousin_it, simplyeric
comment by wedrifid · 2011-06-01T11:35:51.508Z · LW(p) · GW(p)

Or, I could work out what I want and achieve that? There is even a time to focus on a goal over another purely because it is easier.

comment by cousin_it · 2011-06-25T22:31:35.620Z · LW(p) · GW(p)

When I try to learn stuff, I sometimes get good results from the opposite approach: instead of doing the hardest thing, do the easiest thing that counts as progress. In other words, instead of grabbing the highest rung I can reach, I grab the rung I can reach comfortably. Then I take my time to absolutely conquer that rung with perfect technique and control, doing many many repetitions. Then move on to the next.

Advantages of this approach: it's easier, less jerky and more methodical, I can spare attention for ironing out any mistakes in the basics... And most importantly, it feels like I have more "momentum". When my workouts or training sessions look like this, random events are much less likely to derail my schedule of leveling up.

comment by simplyeric · 2011-06-06T17:57:50.681Z · LW(p) · GW(p)

This doesn't seem rational. One must develop an instinct for what one really needs to/wants to/should achieve, and judge whether maximum effort (which I assume would be required to achieve the barely-achievable) is worth the return on that investment.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2011-06-06T18:47:42.840Z · LW(p) · GW(p)

If you're not putting in maximum effort, you're leaving utility on the table.

Replies from: simplyeric
comment by simplyeric · 2011-06-07T17:46:12.779Z · LW(p) · GW(p)

But if you put out maximum effort, you can leave longevity and/or quality on the table. Silverbacks, pitchers, office workers, day-to-day life, running, eating... Short-term maximum effort might detract from long-term maximum utility. The cost/benefit analysis is at times subjective. "Utility" can mean different things to different people. "Utility", as I interpret it in a Rationalist context, has a very specific almost "economic" meaning. But you can choose to reduce effort and not push the envelope, and go home, have dinner, relax, and enjoy your life. Some people might refer to that as utility, others as low-hanging fruit, still others as a healthy balance.

comment by bogus · 2011-06-01T10:54:01.315Z · LW(p) · GW(p)

Politics are, as it were, the market place and the price mechanism of all social demands - though there is no guarantee that a just price will be struck; and there is nothing spontaneous about politics- it depends on deliberate and continuous activity.

--Bernard Crick

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2011-06-02T00:51:59.044Z · LW(p) · GW(p)

What is a "social demand"? By what method could we determine how much of a good is "socially demanded"?

Replies from: fubarobfusco
comment by fubarobfusco · 2011-06-02T02:37:57.087Z · LW(p) · GW(p)

Justice, for instance. Can one person be reliably counted upon to measure how much justice he or she has received? Probably not. But political processes do work out various means for delivering more or less justice. These means appear to have something to do with the demands of various people. The market analogy is of course not perfect.

Replies from: Aryn
comment by Aryn · 2011-06-02T09:00:02.139Z · LW(p) · GW(p)

Justice, at least the way I've heard it used, is very much revenge without the stigma.

Replies from: brazzy, Theist
comment by brazzy · 2011-06-03T10:17:55.722Z · LW(p) · GW(p)

Criminal justice only if you tune out the rehabilitation aspect. Civil justice only if you tune out everything except punitive damages (which don't exist in many jurisdictions).

comment by Theist · 2011-06-04T04:44:49.057Z · LW(p) · GW(p)

There is a lot to be gained by delegating to a central authority the responsibility of maintaining a credible threat of retaliation.

comment by wedrifid · 2011-06-01T10:15:24.567Z · LW(p) · GW(p)

Verberationes continuabunt dum animus melior fit. ("The beatings will continue until morale improves")

(My presentation of the quote constitutes an assertion that there is an insight here useful for navigating the world of tribal politics, hence the relevance.)

comment by darius · 2011-06-01T08:59:19.502Z · LW(p) · GW(p)

O shame to men! Devil with devil damned / Firm concord holds; men only disagree / Of creatures rational

-- Milton, Paradise Lost: not on Aumann agreement, alas

Replies from: MixedNuts, Document
comment by MixedNuts · 2011-06-01T09:25:23.373Z · LW(p) · GW(p)

Yeah, but humans only exist of creatures rational.

Replies from: sgeek
comment by sgeek · 2011-06-02T21:56:48.345Z · LW(p) · GW(p)

We're working on that.

comment by Document · 2011-07-10T20:59:22.806Z · LW(p) · GW(p)

Are you posting it for the Aumann-agreement meaning or the intended one?

comment by Risto_Saarelma · 2011-06-15T07:15:38.734Z · LW(p) · GW(p)

Now that's the general solution to a problem that all the programmers in the world are out there inventing for you, the general solution, and nobody has the general problem.

comment by Document · 2011-06-03T05:27:47.822Z · LW(p) · GW(p)

children are scum

can't we figure out a way to get rid of kids but keep the human race alive

we need to get on that shit

-- QDB, on immortalism

Replies from: sketerpot
comment by sketerpot · 2011-06-03T20:02:11.568Z · LW(p) · GW(p)

Man, that site is a funny time sink. Not the best source of rationality quotes, but there are a few that sort of count.

Greatgreen: I'm going to fail :(

NumberGuy: think positively

Greatgreen: I'm going to fail :)

comment by Endovior · 2011-06-24T22:24:38.076Z · LW(p) · GW(p)

Faith is not enough, for faith is blind by nature. Life needs insight. It is the dead, and the dying, that allow themselves to be led.

--Eve Online: Chronicles

comment by rlsmith · 2011-06-02T14:52:41.784Z · LW(p) · GW(p)

"Skepticism is the chastity of the intellect, and it is shameful to surrender it too soon or to the first comer: there is nobility in preserving it coolly and proudly through long youth, until at last, in the ripeness of instinct and discretion, it can be safely exchanged for fidelity and happiness."

--George Santayana, Quoted by Carl Sagan in Contact, Chapter 14 "Harmonic Oscillator", page 231

Replies from: Richard_Kennaway, Eneasz
comment by Richard_Kennaway · 2011-06-03T13:00:21.699Z · LW(p) · GW(p)

Ick.

What, in this metaphor, corresponds to fidelity and happiness in the way that skepticism corresponds to chastity? Is Santayana's idea that we should search long for The Answer, but having found it, we should turn off our skepticism, stop thinking, and sink into the warm fuzzies of faith? It reminds me of the sea squirt that eats its own brain when it has found a comfortable spot to live and no longer needs it.

Replies from: CuSithBell
comment by CuSithBell · 2011-06-03T21:25:16.604Z · LW(p) · GW(p)

For that matter, what corresponds to sexual dysfunction due to 'saving yourself' for someone you're incompatible with?

Replies from: CuSithBell
comment by CuSithBell · 2011-06-05T16:07:10.445Z · LW(p) · GW(p)

Woah! Whole bunch of downvotes. Do you think the subject is inappropriate, the positioning is inappropriate, that my implicit assertion is incorrect? Something like that?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2011-06-05T16:37:11.130Z · LW(p) · GW(p)

I'm not one of the downvoters (although I certainly wouldn't upvote it), but it's a hypothetical question that makes no sense to me. Whatever your implicit assertion was, it hasn't come through to me. Why is this hypothetical person saving themselves for someone they're incompatible with? How is this eventuality going to cause "sexual dysfunction"? Why is it interesting to imagine this happening?

Replies from: CuSithBell
comment by CuSithBell · 2011-06-05T17:23:40.213Z · LW(p) · GW(p)

The point of the question was to criticize the Santayana quote from the other direction - the sexual-politics position it uses to arrive at the skepticism position.

My understanding is that, often, this notion of chastity as a cardinal virtue causes people to marry people they're sexually incompatible with (because they miss the highest-bandwidth way to check) and have unsatisfying sex lives, partly because late loss of virginity is linked to sexual dysfunctions (though I'm not sure that significant bidirectional causality is clear yet).

I was asserting that the sexual-politics position assumed in the top comment wasn't obviously true or universally held here.

comment by Eneasz · 2011-06-02T22:41:03.736Z · LW(p) · GW(p)

Interestingly, I really like this quote about skepticism, even though I strongly dislike its fetishization of sexual inexperience.

Replies from: knb
comment by knb · 2011-06-05T09:52:00.282Z · LW(p) · GW(p)

How is it a fetish and not a legitimate personal value? And the part relevant to skepticism seems totally off to me. We should never sacrifice skepticism for "fidelity" to an idea.

Replies from: Eneasz
comment by Eneasz · 2011-06-06T01:59:28.221Z · LW(p) · GW(p)

To reply in reverse order - I see how it's relevant to skepticism because people are quick to believe any old thing that feels truthy to them. But there are some things you actually can put your trust in. Things that have been borne out over centuries to get us closer and closer to true knowledge, things that produce real results. Things such as empiricism. You can eventually come to trust empiricism, rationality, the experimental method. You don't have to remain forever entirely skeptical of everything. In that sense it's a decent metaphor to compare it to a (good) long-term relationship - that building of trust by experience until it is simply natural and implicit.

I would consider it a fetish because it will make anyone out of their teens seek age-inappropriate partners. If you're in your thirties or later and you are seeking sexual congress with a virgin then you are looking for someone with the sexual maturity of someone a lot younger than you, regardless of that person's chronological age. Fetishes aren't inherently bad of course, there's lots of great ones out there. :) But this one often serves to either A) degrade non-teen women who aren't emotionally stunted, or B) cause people to severely stunt their own emotional growth to fulfill some future partner's fetish. Both of those seem to be bad things to me, and thus my disapproving words.

Replies from: knb
comment by knb · 2011-06-06T02:29:04.992Z · LW(p) · GW(p)

But he isn't recommending preferring chastity in others, but rather being chaste ourselves until we have a "ripeness of instinct and discretion" (i.e. have attained maturity).

This definition of chastity includes not just virgins but everyone who shows discretion in choosing sex partners, and doesn't accept the "first comer".

Replies from: Eneasz
comment by Eneasz · 2011-06-07T00:11:06.062Z · LW(p) · GW(p)

I wouldn't have any problem with the quote if that's the case, discretion is good. However I've never seen chastity used in a way that didn't mean virgin. Actually, come to think of it, the English language could really use a word for "discerning but liberated person".

comment by sirciny · 2011-07-01T23:52:55.729Z · LW(p) · GW(p)

"For the thinker, as for the artist, what counts in life is not the number of rare and exciting adventures he encounters, but the inner depth in that life, by which something great may be made out of even the paltriest and most banal of occurrences." -- William Barrett

comment by Patrick · 2011-06-03T14:13:16.021Z · LW(p) · GW(p)

If a process is potentially good, but 90+% of the time smart and well-intentioned people screw it up, then it's a bad process. So they can only say it's the team's fault so many times before it's not really the team's fault.

comment by MattFisher · 2011-06-23T15:05:22.055Z · LW(p) · GW(p)

I argued earlier that the only circumstances under which it should be morally acceptable to impose a particular way of thinking on children, is when the result will be that later in life they come to hold beliefs that they would have chosen anyway, no matter what alternative beliefs they were exposed to. And what I am now saying is that science is the one way of thinking — maybe the only one — that passes this test. There is a fundamental asymmetry between science and everything else.

comment by EvelynM · 2011-06-14T15:18:49.602Z · LW(p) · GW(p)

"I smile and start to count on my fingers: One, people are good. Two, every conflict can be removed. Three, every situation, no matter how complex it initially looks, is exceedingly simple. Four, every situation can be substantially improved; even the sky is not the limit. Five, every person can reach a full life. Six, there is always a win-win solution. Shall I continue to count?" Dr. Eliyahu M. Goldratt 1947- 2011

Replies from: gjm, lessdazed
comment by gjm · 2011-09-02T10:49:25.436Z · LW(p) · GW(p)

It might be easier to tell whether this is really a rationality quote or an anti-rationality quote if we had a bit more context. For instance, is Goldratt (or whatever character he's put those words into the mouth of) endorsing those propositions, or listing them as false assumptions someone else is making, or what?

comment by lessdazed · 2011-09-02T13:16:50.043Z · LW(p) · GW(p)

Shall I continue to count?

That's not how it goes. I'm pretty sure he finishes by saying "Six! Six fingers. AH AH AH AH AH!"

comment by advancedatheist · 2011-06-01T20:43:01.664Z · LW(p) · GW(p)

From Countdown to Immortality, by FM-2030:

TIME REENTRY AND ADAPTATION

How will an individual suspended today adjust to life upon reentry in the future?

Time-reentry adjustment will not be a serious problem for the following reasons:

Anyone suspended in these years will probably not have to wait long for reanimation. In fact the time will come when long-term suspension will make no sense. Deathcorrection will be quick and therefore catch-up will not be a problem.

People are living longer and longer. Therefore many of the reanimate’s friends and acquaintances will be around.

More and more people are signing up for cryonic suspension. When they are eventually brought back, they will find other reanimates from their original time zones.

What if you do not find any familiar faces upon reentry? What of it? You will make new friends. Why not start afresh? Isn’t this precisely what tens of millions of people now do when they voluntarily move from one part of the planet to another? In our fluid times many of our friendships and associations are not lifelong and continuous anyway.

We humans are remarkably adaptable. In recent decades we have seen entire populations switch eons - from Stone Age to Electronic Age - from the feudal/agrarian world to the industrial and the telespheral. There is no limit to our adaptability.

Entire generations are now born into worlds of real-time acceleration. To them and to all of us rapid realignment is the norm. We are not even aware we are continually desynchronizing.

In the coming decades reanimates may not be the only ones having to readapt. Increasing numbers of people will drop out of our world and start new lives elsewhere in the solar system. Some of these extraterrestrials will come back and may also have to zone in.

In the new century we will learn about Time and Space reentry and devise catch-up skills. For example: rapid updates via onbody computers and audio/visuals - rapid playbacks and overviews via touch-and-enter holospheres - body-attached or brain-implanted decision-assists - automatic information-transfer procedures and so on. We may also have rapid genetic fine-tuning to help returnees improve their concentration - memory - adaptability - learn/unlearn.

Finally in the coming years and decades the world will grow more and more open and friendly. This very day we are outgrowing age-old adversarial barriers: tribalism - racism - classism - sexism - nationalism. The freeflow of people across the planet is speeding up. My projection is that a person suspended in the coming years and reentering decades later will at first have more problems with the relative friendliness and openness of the new century than anything else.

If Simon Baron-Cohen gets his way, a future society could make a certain high level of empathy the norm, with potentially interesting consequences for revived cryonauts.

Replies from: wedrifid
comment by wedrifid · 2011-06-01T23:04:08.935Z · LW(p) · GW(p)

(Perhaps a little large for a quote? If it's an inspiring excerpt consider a discussion post including a bit on why you like it!)

comment by ugquestions · 2011-06-11T12:12:31.560Z · LW(p) · GW(p)

The questioner is nothing but the answer. We are not ready to accept this because it will put an end to all the answers which we have accepted as being the real answers. u.g. Krishnamurti

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2011-06-12T00:35:10.095Z · LW(p) · GW(p)

What?

comment by advancedatheist · 2011-06-01T20:21:23.255Z · LW(p) · GW(p)

More from Space Viking:

He was crucified, and crowned with a crown of thorns. Who had they done that to? Somebody long ago, on Terra.

Replies from: gwern
comment by gwern · 2011-06-07T15:56:18.544Z · LW(p) · GW(p)

Why - I don't see how this is a good quote at all.

Replies from: gjm
comment by gjm · 2011-09-02T11:29:07.393Z · LW(p) · GW(p)

Maybe the idea is that it expresses the expectation that eventually religion (or at least Christianity) will be nothing but a historical curiosity, and advancedatheist likes that idea. I can't see that it's in any useful sense a rationality quote, though.