Rationality Quotes November 2013

post by MalcolmOcean (malcolmocean) · 2013-11-02T20:35:55.780Z · LW · GW · Legacy · 390 comments

Another month has passed and here is a new rationality quotes thread. The usual rules apply.

Comments sorted by top scores.

comment by JQuinton · 2013-11-06T18:35:10.160Z · LW(p) · GW(p)

A newspaper is better than a magazine. A seashore is a better place than the street. At first it is better to run than to walk. You may have to try several times. It takes some skill, but it is easy to learn. Even young children can enjoy it. Once successful, complications are minimal. Birds seldom get too close. Rain, however, soaks in very fast. Too many people doing the same thing can also cause problems. One needs lots of room. If there are no complications, it can be very peaceful. A rock will serve as an anchor. If things break loose from it, however, you will not get a second chance.

Is this paragraph comprehensible or meaningless? Feel your mind sort through potential explanations. Now watch what happens with the presentation of a single word: kite. As you reread the paragraph, feel the prior discomfort of something amiss shifting to a pleasing sense of rightness. Everything fits; every sentence works and has meaning. Reread the paragraph again; it is impossible to regain the sense of not understanding. In an instant, without due conscious deliberation, the paragraph has been irreversibly infused with a feeling of knowing.

Try to imagine other interpretations for the paragraph. Suppose I tell you that this is a collaborative poem written by a third-grade class, or a collage of strung-together fortune cookie quotes. Your mind balks. The presence of this feeling of knowing makes contemplating alternatives physically difficult.

Robert Burton, from On Being Certain: Believing You’re Right Even When You’re Not, reminding me of Epiphany Addictions

Replies from: NancyLebovitz, None
comment by NancyLebovitz · 2013-11-07T23:23:32.862Z · LW(p) · GW(p)

It looked like nonsense to me. I stopped reading after a few sentences.

I'm not saying I'm immune to epiphany addiction, but I want the good stuff.

Replies from: pjeby, JQuinton
comment by pjeby · 2013-11-08T00:00:12.514Z · LW(p) · GW(p)

It looked like nonsense to me. I stopped reading after a few sentences.

I thought it was a puzzle or riddle, so I went back and looked at it again. My first guess was that it was something to do with running, then paper airplanes (which can be made from newspaper, but not a magazine). The rock as anchor made me realize there needed to be something attached, which made me realize it was a kite.

On the other hand, I don't have any trouble seeing alternative interpretations; perhaps it's because I already tried several and came to the conclusion myself. (Or maybe it's just that I'm more used to looking at things with multiple interpretations; it's a pretty core skill to changing one's self.)

Then again, I also don't see the paragraph as infused with irreversible knowing. I read the words literally every time, and have to add words like, "for flying a kite" to the sentences in order to make the link. I could just as easily add "in bed", though, at which point the paragraph actually becomes pretty hilarious -- much like a strung-together collage of fortune cookie quotes... in bed. ;-)

Replies from: army1987
comment by A1987dM (army1987) · 2013-11-12T19:09:31.281Z · LW(p) · GW(p)

I could just as easily add "in bed", though, at which point the paragraph actually becomes pretty hilarious

:-D

comment by JQuinton · 2013-11-08T14:53:38.139Z · LW(p) · GW(p)

The reason I posted the link to epiphany addiction was that this quote is an example of how confusion doesn't feel good (it prompted you to stop reading...), and that "sense of knowing" feels pleasant. The danger being that we have very little control over when we feel either, so the feeling of knowing is no substitute for rationality.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-11-08T15:39:43.318Z · LW(p) · GW(p)

Thanks. I had no idea that was what you had in mind.

"The feeling of knowing" is probably worth examining in detail.

Sorry no cite, but I heard about a prisoner whose jailers talked nonsense to him for a week. When they finally asked him a straight question it was such a relief he blurted out the answer.

comment by [deleted] · 2014-12-30T19:46:59.843Z · LW(p) · GW(p)

I tried to come up with a different 'magic word' and thought about bombs. The DIY kind (like sulfuric acid + KMnO4 + ...), with newspaper being better because it is easier to tear... Does anybody have other ideas?

ETA: another possibility is a herbarium press, though the running part becomes confusing. Still, it might be better to run if you follow an expert on a survey, and to walk afterwards, trying to recall what you learned on the trip.

What makes it hard to think of alternatives is an automatic expectation of parsimony and of the classical unities of time and space. Bias?

comment by Kaj_Sotala · 2013-11-04T13:15:49.211Z · LW(p) · GW(p)

But there’s a big difference between “impossible” and “hard to imagine.” The first is about it; the second is about you!

-- Marvin Minsky

Replies from: Randy_M
comment by Randy_M · 2013-11-08T20:58:13.851Z · LW(p) · GW(p)

And your experiences to date, which is also a thing about reality.

Replies from: glomerulus
comment by glomerulus · 2013-11-11T13:27:54.091Z · LW(p) · GW(p)

True, the availability heuristic, which the quote condemns, often does give results that correspond to reality - otherwise it wouldn't be a very useful heuristic, now would it! But there's a big difference between a heuristic and a rational evaluation.

Optimally, the latter should screen out the former, and you'd think things along the lines of "this happened in the past and therefore things like it might happen in the future," or "this easily-imaginable failure mode actually seems quite possible."

"This is an easily-imaginable failure mode therefore this idea is bad," and its converse, are not as useful, unless you're dealing with an intelligent opponent under time constraints.

comment by James_Miller · 2013-11-01T15:19:40.259Z · LW(p) · GW(p)

"For my own part,” Ms. Yellen said, “I did not see and did not appreciate what the risks were with securitization, the credit ratings agencies, the shadow banking system, the S.I.V.’s — I didn’t see any of that coming until it happened.” Her startled interviewers noted that almost none of the officials who testified had offered a similar acknowledgment of an almost universal failure.

Economist and likely future chairperson of the Federal Reserve Board Janet Yellen shows the key rationality trait of being able to admit you were wrong.

Replies from: hyporational
comment by hyporational · 2013-11-07T05:47:22.405Z · LW(p) · GW(p)

Alternatively, she thought that kind of a lie would be well received. It's a widely used social skill to admit you were wrong even though you think you weren't.

Replies from: Stabilizer
comment by Stabilizer · 2013-11-08T08:03:58.423Z · LW(p) · GW(p)

Why would she claim she hadn't seen it coming, when it would have been much more to her benefit if she had claimed that she had seen the crisis coming?

Replies from: zslastman, hyporational
comment by zslastman · 2013-11-18T09:14:38.065Z · LW(p) · GW(p)

That claim a) raises the question of why she didn't say something at the time, or short the stock market, and b) is somewhat clichéd anyway.

Replies from: Stabilizer
comment by Stabilizer · 2013-11-19T04:57:37.953Z · LW(p) · GW(p)

I think you nailed it.

comment by hyporational · 2013-11-08T08:08:43.098Z · LW(p) · GW(p)

Well, I know nothing of her role in what happened, but what you're suggesting is much harder to sell if her past actions and statements contradict it, which I assume is the case here.

Lots of people benefitted from the crisis and the events that preceded it.

comment by Stabilizer · 2013-11-02T01:41:02.737Z · LW(p) · GW(p)

A good stack of examples, as large as possible, is indispensable for a thorough understanding of any concept, and when I want to learn something new, I make it my first job to build one.

-Paul Halmos

comment by Alejandro1 · 2013-11-01T13:48:51.152Z · LW(p) · GW(p)

Is time real? …In one sense, it’s a silly question. The “reality” of something is only an interesting issue if it’s a well-defined concept whose actual existence is in question, like Bigfoot or supersymmetry. For concepts like “time,” which are unambiguously part of a useful vocabulary we have for describing the world, talking about “reality” is just a bit of harmless gassing. They may be emergent or fundamental, but they’re definitely there.

Sean Carroll

Replies from: Strilanc, snafoo
comment by Strilanc · 2013-11-03T18:26:02.815Z · LW(p) · GW(p)

Sometimes it's disturbing how good Sean Carroll is at articulating my thoughts. Especially when it pertains to, as above, the philosophy of science. Here's another:

We should not think of the big bang as the beginning of the universe. We should think of it as the end [of] our [current] understanding of what is happening.

comment by snafoo · 2013-11-04T02:33:51.543Z · LW(p) · GW(p)

"I don't understand why a question is interesting, so clearly it's meaningless."

comment by Alejandro1 · 2013-11-01T13:46:41.972Z · LW(p) · GW(p)

“What else [have you learned]?”

“Never make a decision blindfolded.”

The teacher laughed. “An impossible wish. We’re all wearing blindfolds, every moment of our lives, and they come off far less easily than this cheap piece of cloth.”

“Then what should we do, when we can’t take the blindfold off?”

“Do the best you can,” the teacher said, “and never forget that you’re wearing it.”

Math with Bad Drawings

comment by pjeby · 2013-11-06T00:46:38.486Z · LW(p) · GW(p)

Realistically, most people have poor filters for sorting truth from fiction, and there’s no objective way to know if you’re particularly good at it or not. Consider the people who routinely disagree with you. See how confident they look while being dead wrong? That’s exactly how you look to them.

Scott Adams, in How to Fail at Almost Everything and Still Win Big

comment by lukeprog · 2013-11-14T00:55:08.255Z · LW(p) · GW(p)

God give me the serenity to accept the things I cannot predict, the courage to predict the things I can, and the wisdom to buy index funds.

Nate Silver

(h/t Rob Wiblin)

comment by Vaniver · 2013-11-13T03:51:46.496Z · LW(p) · GW(p)

When the tech geeks raised concerns about their ability to deliver the website on time, they are reported to have been told “Failure is not an option.” Unfortunately, this is what happens when you say “failure is not an option”: You don’t develop backup plans, which means that your failure may turn into a disaster.

From an article about Obamacare.

comment by Bundle_Gerbe · 2013-11-01T11:26:39.800Z · LW(p) · GW(p)

The theme of this book, then, must be the coming to consciousness of uncertain inference. The topic may be compared to, say, the history of visual perspective. Everyone can see in perspective, but it has been a difficult and long-drawn-out effort of humankind to become aware of the principles of perspective in order to take advantage of them and imitate nature. So it is with probability. Everyone can act so as to take a rough account of risk, but understanding the principles of probability and using them to improve performance is an immense task.

James Franklin, The Science of Conjecture: Evidence and Probability before Pascal

comment by MalcolmOcean (malcolmocean) · 2013-11-01T11:01:56.775Z · LW(p) · GW(p)

"Next time you’re in a debate, ask yourself if someone is on offense or defense. If they’re neither, then you know you have someone you can learn from"

Julien Smith

Replies from: somervta, Eliezer_Yudkowsky, FiftyTwo, Benito
comment by somervta · 2013-11-01T11:18:40.278Z · LW(p) · GW(p)

Corollary 1: Always try to be that person.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-11-01T20:59:59.921Z · LW(p) · GW(p)

Disputed. Some people are naturally on the defensive even when debating true propositions. Defensiveness, though, is more often a bad sign, since somebody defending a proposition that they know on some level to be false is more likely to try to hold territory and block an opponent's progress. Many people advocating true propositions go on the offensive, and it is not clear to me that this is always wrong in human practice.

Replies from: somervta, John_Maxwell_IV, Strange7, brazil84
comment by somervta · 2013-11-01T23:39:19.353Z · LW(p) · GW(p)

Nitpicking, but the quote stated that people who are on neither offense nor defense are people you can learn from - it didn't say that people who are on the offensive or defensive are necessarily wrong to do so.

Replies from: NancyLebovitz, Randaly
comment by NancyLebovitz · 2013-11-04T13:14:19.875Z · LW(p) · GW(p)

I'm not sure that's just a nitpick. It's a mistake so common that it should probably be listed under biases. It might be a variation on availability bias-- what's actually mentioned fills in the mental space so that the cases which aren't mentioned get ignored.

Replies from: Nate_Gabriel
comment by Nate_Gabriel · 2013-11-16T09:38:26.519Z · LW(p) · GW(p)

And I'm not sure it's a mistake. If you're getting your information in a context where you know it's meant completely literally and nothing else (e.g., Omega, lawyers, Spock), then yes, it would be wrong. In normal conversation, people may (sometimes but not always; it's infuriating) use "if" to mean "if and only if." As for this particular case, somervta is probably completely right. But I don't think it's conducive to communication to accuse people of bias for following Grice's maxims.

comment by Randaly · 2013-11-03T08:36:56.676Z · LW(p) · GW(p)

I also dispute this- obvious cases include partial disagreement and partial agreement between parties, somebody who is simply silent or who says nothing of substance, and someone who is themself trying to learn from you/the other side.

(In particular, consider a debate between a biologist and the Pope on evolution. I would expect the Pope to be neither on the offensive nor on the defensive - though I'm not totally clear on the distinction here, and on how a debater can be neither - but I would expect to learn much more from the biologist than from the Pope.)

comment by John_Maxwell (John_Maxwell_IV) · 2013-11-04T07:09:40.970Z · LW(p) · GW(p)

One reading: "offense" as "trying to lower another's status" and "defense" as "trying to preserve one's own status". The people you can learn from are the ones whose brains focus on facts rather than status.

Some people are naturally on the defensive even when debating true propositions.

I'm not sure if this is relevant.

In a technical sense, of course you can learn from people on offense/defense, since they are giving you information.

comment by Strange7 · 2013-11-13T02:19:37.024Z · LW(p) · GW(p)

The set of people who hold and argue for accurate beliefs about a given issue, and the set of people from whom it is possible to learn about a given issue, may often overlap but are not at all the same.

comment by brazil84 · 2013-12-24T17:55:59.512Z · LW(p) · GW(p)

I agree, besides which, if there is a popular rule, sign, or proxy for determining who is on the better side of a debate, you can bet that the intellectually dishonest fellow will start waving the "I'm right" flag.

comment by FiftyTwo · 2013-11-04T22:17:24.222Z · LW(p) · GW(p)

I didn't kill my wife!

You're sounding awfully defensive there...

Replies from: lmm
comment by lmm · 2013-11-17T23:33:27.748Z · LW(p) · GW(p)

That doesn't sound like a debate I could learn anything from listening to.

comment by Ben Pace (Benito) · 2013-11-01T16:14:03.903Z · LW(p) · GW(p)

Corollary 2: If they are on offense or defense, check with yourself what you expect to gain from continuing with the debate.

comment by roland · 2013-11-05T20:22:03.198Z · LW(p) · GW(p)

Efficiency is doing things right; effectiveness is doing the right things.

-- Peter Drucker

Replies from: Kaj_Sotala, snafoo
comment by Kaj_Sotala · 2013-11-09T07:40:36.953Z · LW(p) · GW(p)

"It is far better to improve the [quality] of testing first than to improve the efficiency of poor testing. Automating chaos just gives faster chaos." -- Mark Fewster & Dorothy Graham, Software Test Automation

comment by snafoo · 2013-11-05T22:02:17.311Z · LW(p) · GW(p)

See also: the distinction between verification and validation, or between quality control and quality assurance.

Replies from: roland
comment by roland · 2013-11-06T19:51:50.813Z · LW(p) · GW(p)

I don't get the distinction between verification and validation.

Replies from: passive_fist
comment by passive_fist · 2013-11-07T19:02:18.079Z · LW(p) · GW(p)

I really like this sentence (from wikipedia):

It is sometimes said that validation can be expressed by the query "Are you building the right thing?" and verification by "Are you building it right?"

comment by Gvaerg · 2013-11-01T21:57:14.334Z · LW(p) · GW(p)

"I spread the map out on the dining room table, and I held down the corners with cans of V8. The dots from where I'd found things looked like the stars in the universe. I connected them, like an astrologer, and if you squinted your eyes like a Chinese person, it kind of looked like the word 'fragile'. [...] I erased and connected the dots to make 'porte'. I had the revelation that I could connect the dots to make 'cyborg', and 'platypus', and 'boobs', and even 'Oskar', if you were extremely Chinese. I could connect them to make almost anything I wanted, which meant I wasn't getting closer to anything. And now I'll never know what I was supposed to find. And that's another reason I can't sleep."

Jonathan Safran Foer, Extremely Loud and Incredibly Close (emphasis mine)

Replies from: NancyLebovitz, malcolmocean
comment by NancyLebovitz · 2013-11-12T11:54:22.065Z · LW(p) · GW(p)

The "23 Enigma" is the Discordian belief that all events are connected to the number 23, given enough ingenuity on the part of the interpreter.

comment by MalcolmOcean (malcolmocean) · 2013-11-07T23:49:15.858Z · LW(p) · GW(p)

Apophenia.

Replies from: Gvaerg
comment by Gvaerg · 2013-11-08T15:40:28.037Z · LW(p) · GW(p)

Well, sort of - the protagonist is a child who tries to decipher a clue for a treasure hunt, and so he realizes that a model that can predict anything is useless.

comment by NancyLebovitz · 2013-11-04T13:36:55.293Z · LW(p) · GW(p)

Any man can learn to learn from the wise once he can find them: but learn to learn from a fool and all the world’s your faculty.

--John Ciardi

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-11-12T13:03:34.422Z · LW(p) · GW(p)

I guess the most difficult part is learning to extract signal from noise. Then all you have to do is keep the fools talking, and they will gladly do so.

comment by hairyfigment · 2013-11-04T06:34:46.618Z · LW(p) · GW(p)

That's why it's so important to understand how unworried I was. I wasn't $400 worth of worried, or $100 worth of worried, or even $20 worth. I wouldn't have gone to the dermatologist if I didn't have health insurance. I probably wouldn't have gone if I had insurance but it had a big deductible, or even any real co-pay. The only reason I went to have my life saved is because it cost me zero dollars.

Jon Schwarz, A Tiny Revolution

Replies from: army1987, nshepperd, rule_and_line
comment by A1987dM (army1987) · 2013-11-04T19:59:09.876Z · LW(p) · GW(p)

Opportunity costs of time?

comment by rule_and_line · 2013-11-08T15:36:37.193Z · LW(p) · GW(p)

To what nugget of rationality does this point?

Replies from: lmm
comment by lmm · 2013-11-08T23:58:27.197Z · LW(p) · GW(p)

That behaviourally people treat free very differently from even $1, and that effective policymaking requires removing even trivial-seeming barriers to desired actions.

Replies from: hairyfigment
comment by hairyfigment · 2013-11-18T02:31:01.924Z · LW(p) · GW(p)

Yes. I also take it as a warning against hyperbolic discounting.

I'm a little bemused by the fact that my karma dropped, shortly after this, by exactly twice the then-current score of the great-grandparent. The precision confuses me at least as much as the magnitude.

Replies from: Gurkenglas
comment by Gurkenglas · 2013-11-25T17:56:33.768Z · LW(p) · GW(p)

Out of all the possible things that might have happened this month for which you would have been surprised to notice that they had only a 1% chance of happening, how many actually happened?

Replies from: Jiro
comment by Jiro · 2013-11-25T18:52:02.772Z · LW(p) · GW(p)

Like most human beings, I'm not very good at estimating the exact probability of low probability events, so would have no idea if some event had a probability of 1% as opposed to 10% or 0.0001% in a situation where the distinction is actually important.

Replies from: Gurkenglas
comment by Gurkenglas · 2013-11-25T21:12:32.191Z · LW(p) · GW(p)

1% is a very conservative lower bound for the probability of the change of your karma being twice the then-current score of its great-grandparent. The italicized words are those whose many possible modifications already yield enough possibilities that noting one of them occurring is completely uninteresting, except for this resulting free lesson in combinatorics ;)
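
A minimal sketch of the arithmetic behind this point (Python; the candidate-event counts are purely illustrative):

```python
# If each specific coincidence has probability p = 0.01, the chance
# that at least one of N candidate coincidences occurs, assuming
# independence, grows quickly with N.
p = 0.01
for n in (1, 10, 100, 500):
    print(f"N = {n:3d}: P(at least one) = {1 - (1 - p) ** n:.3f}")
# N =   1: 0.010,  N =  10: 0.096,  N = 100: 0.634,  N = 500: 0.993
```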

comment by undermind · 2013-11-07T00:50:59.022Z · LW(p) · GW(p)

Oh, Death was never enemy of ours!

We laughed at him, we leagued with him, old chum.

No soldier's paid to kick against His powers.

We laughed, -knowing that better men would come,

And greater wars: when each proud fighter brags

He wars on Death, for lives; not men, for flags.

-Wilfred Owen

comment by NancyLebovitz · 2013-11-06T17:55:38.917Z · LW(p) · GW(p)

"One of the miseries of life is that everybody names things a little bit wrong, and so it makes everything a little harder to understand in the world than it would be if it were named differently."

--Richard Feynman

Replies from: Daniel_Burfoot, shminux
comment by Daniel_Burfoot · 2013-11-10T17:41:41.435Z · LW(p) · GW(p)

Tsze-lu said, "The ruler of Wei has been waiting for you, in order with you to administer the government. What will you consider the first thing to be done?"

The Master replied, "What is necessary is to rectify names."

"So! indeed!" said Tsze-lu. "You are wide of the mark! Why must there be such rectification?"

The Master said, "How uncultivated you are, Yu! A superior man, in regard to what he does not know, shows a cautious reserve.

"If names be not correct, language is not in accordance with the truth of things. If language be not in accordance with the truth of things, affairs cannot be carried on to success.

Analects of Confucius

comment by Shmi (shminux) · 2013-11-07T20:57:02.087Z · LW(p) · GW(p)

That's because words are in general a grossly inadequate way to express thoughts. I wonder if telepathy would help with that.

Replies from: ChristianKl, TheOtherDave, NancyLebovitz
comment by ChristianKl · 2013-11-12T13:40:31.972Z · LW(p) · GW(p)

What do you mean by telepathy? You can't just transfer data from one neural net into another without some sort of common data protocol that makes classifications. You still need some sort of language.

comment by TheOtherDave · 2013-11-07T21:32:26.710Z · LW(p) · GW(p)

I suspect shared access to the conceptual referents of words would help with that specific problem, yes.
I suspect (with Feynman) that it would make only a small difference.
In particular, I suspect that if we did that we'd run more often into the currently-masked problem that everybody thinks about things a little bit wrong.

(People seem to differ in their interpretations of "telepathy," so I've started trying to develop the habit of Tabooing the word. The irony of this in the current context does not escape me.)

comment by NancyLebovitz · 2013-11-12T12:41:53.907Z · LW(p) · GW(p)

I wonder if telepathy would help with that.

That would depend partly on the specific problems with the words, and partly on the rationality of the people.

If everyone is making the same mistake, telepathy will just amplify the problem. There will be a chorus of agreement which is amplifying the mistake.

If people are using the wrong word, but with different shadings, then perhaps people will look into how well the word fits the concept it is supposed to indicate. However, the odds favor people yelling at each other about who's right.

comment by Eugine_Nier · 2013-11-02T22:58:12.846Z · LW(p) · GW(p)

Someone I know at TAC opined that everyone knows this stuff, and talking about it is just mean. I think he is mistaken: you have to state important facts every so often, or nobody knows them anymore.

West Hunter

Replies from: ChristianKl
comment by ChristianKl · 2013-11-03T03:49:07.902Z · LW(p) · GW(p)

The article contains the line:

Average cranial capacity in Europeans is about 1362; 1380 in Asians, 1276 in Africans. It’s about 1270 in New Guinea.

What's wrong here? Four significant figures for brain size and no error bars? That's a sign of someone being either intentionally or unintentionally dishonest.

Quick Googling shows that there's a paper published that states that Europeans' average cranial capacity is 1347.

Rather than describing the facts as they are, he paints things as more certain than they are. I think that people who do that in an area where false beliefs lead to people being discriminated against are in no position to complain when they receive some social scorn.

Replies from: TheAncientGeek, Eugine_Nier, Randaly, army1987
comment by TheAncientGeek · 2013-11-04T12:09:23.981Z · LW(p) · GW(p)

How meaningful are figures on brain size without figures on overall body size?

comment by Eugine_Nier · 2013-11-03T04:15:35.656Z · LW(p) · GW(p)

Quick Googling shows that there's a paper published that states that Europeans' average cranial capacity is 1347.

That's close enough to not affect his point, or even the order. I think you're engaging in motivated continuing to avoid having to acknowledge conclusions you find uncomfortable.

Rather than describing the facts as they are, he paints things as more certain than they are. I think that people who do that in an area where false beliefs lead to people being discriminated against are in no position to complain when they receive some social scorn.

Do you also apply the same criticism to the (much larger number of) people who make (much larger) errors in the direction of no difference? Also, could you taboo what you mean by "discriminated against"? Steelmanning suggests you mean "judged according to inaccurate priors", yet you also seem to be implying that inaccurately egalitarian priors aren't a problem.

Replies from: TheAncientGeek, ChristianKl
comment by TheAncientGeek · 2013-11-04T12:19:59.384Z · LW(p) · GW(p)

Whatever the problem with non-factually-based equality may be, it is not a problem of discrimination, so the same criticism does not apply.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-11-05T01:37:09.065Z · LW(p) · GW(p)

This gets back to the issue that neither you nor Christian have defined what you mean by "discrimination". I gave one definition: "judged according to inaccurate priors", according to which your comment is false. If you want to use some other definition, please state it.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2013-11-05T09:09:52.345Z · LW(p) · GW(p)

Why would you think we are not using it in the standard sense? "Discrimination is the prejudicial and/or distinguishing treatment of an individual based on their actual or perceived membership in a certain group or category"

Replies from: Jiro
comment by Jiro · 2013-11-05T16:57:22.745Z · LW(p) · GW(p)

By that reasoning, refusing to hire someone who doesn't have good recommendations is discrimination, because you're giving him distinguishing treatment (refusing to hire him) based on membership in a category (people who lack good recommendations).

I think you have some assumptions that you need to make explicit, after thinking them through first. (For instance, one obvious change is to replace "category" with "irrelevant category", but that won't work.)

Replies from: army1987, TheAncientGeek
comment by A1987dM (army1987) · 2013-11-11T19:20:24.951Z · LW(p) · GW(p)

Well, the recommendations you have are to some extent the result of your choices and actions, but whether your name sounds black hardly is. So, regret-of-rationality considerations apply more to the former than to the latter.

Replies from: Jiro, Eugine_Nier
comment by Jiro · 2013-11-12T16:19:46.375Z · LW(p) · GW(p)

So you are saying that I should modify the definition of "discrimination" supplied by TAG, to include a qualifier "only as a result of your choices and actions"?

That seems to say that some forms of religious discrimination don't count (choosing not to convert to Christianity is a result of your own choices and actions). It also ignores the fact that it is possible for someone to fall into a group where some of the group's members got there by their own choices and actions but some didn't--not every person who can't get good recommendations is in that situation because of his own choices and actions. In fact, there's a continuum; what if, say, 10% of the people in a category got there by their own choices and actions but the other 90% had no choice?

Replies from: army1987
comment by A1987dM (army1987) · 2013-11-12T16:49:54.447Z · LW(p) · GW(p)

Yes, it's a continuum. That's why I said “to some extent”, “hardly”, and “more ... than”.

Replies from: Jiro
comment by Jiro · 2013-11-14T16:43:47.015Z · LW(p) · GW(p)

That would still mean that if I say "convert to Christianity or I won't hire you" and you refuse, that wouldn't count as discrimination. It would also mean that refusing to hire gay people would not be discrimination, as long as you only refused to hire people who participated in some activity, whether having gay sex, wearing rainbow flags, having a gay wedding, etc.

Replies from: army1987
comment by A1987dM (army1987) · 2013-11-15T00:58:09.074Z · LW(p) · GW(p)

That would still mean that if I say "convert to Christianity or I won't hire you" and you refuse, that wouldn't count as discrimination.

So what? I'm not particularly outraged that atheists aren't allowed into the Swiss Guards.

It would also mean that refusing to hire gay people would not be discrimination, as long as you only refused to hire people who participated in some activity, whether having gay sex, wearing rainbow flags, having a gay wedding, etc.

For some reason, this sounds more problematic to me; not sure why.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-11-17T03:16:54.183Z · LW(p) · GW(p)

For some reason, this sounds more problematic to me; not sure why.

Probably because you've been more exposed to rhetoric saying how horrible it is to be "anti-gay" than rhetoric saying how horrible it is to be "anti-atheist".

Replies from: army1987, JoshuaZ
comment by A1987dM (army1987) · 2013-11-17T03:20:29.426Z · LW(p) · GW(p)

Actually I think it's the other way round, at least in the last few years. (There are many more atheists than gay people in my social circles.)

comment by JoshuaZ · 2013-11-18T05:08:11.834Z · LW(p) · GW(p)

Alternate hypothesis: one has a choice and the other one does not.

Replies from: Vaniver, Eugine_Nier
comment by Vaniver · 2013-11-21T05:49:30.483Z · LW(p) · GW(p)

You realize this is circling back around, right? The issue is that the two are both defined by choices, because the second group is not gay men but men who have consensual sex with men, and the second group's membership is voluntary regardless of whether or not the first's is.

Eugine_Nier responded, in effect, that the most likely reason to see those restrictions as being in different classes is the "who--whom?" of religious hiring restrictions being 'okay' and sexual-behavior-based hiring restrictions being 'not okay' because of status alignments of religion and sexual behavior.

Replies from: JoshuaZ
comment by JoshuaZ · 2013-11-21T05:51:52.768Z · LW(p) · GW(p)

That's a good point.

comment by Eugine_Nier · 2013-11-18T07:08:00.968Z · LW(p) · GW(p)

What definition of "choice" are you using that makes that statement true?

Replies from: JoshuaZ
comment by JoshuaZ · 2013-11-18T14:56:01.080Z · LW(p) · GW(p)

I'm not sure there's a coherent one here. But for issues such as this, what people, or society, are comfortable with is to a large extent a function of what intuitions they have. In this sort of context, the distinction is likely connected to intuitions of free will, whether or not that makes any sense.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-11-19T00:39:53.805Z · LW(p) · GW(p)

I don't see any a priori reason why those intuitions should apply to the case of religion but not homosexuality.

Replies from: JoshuaZ
comment by JoshuaZ · 2013-11-19T01:46:14.728Z · LW(p) · GW(p)

How a priori do you want the reason to be? There's at this point a general consensus that sexual orientation, while culturally mediated, has a large genetic component. I don't think that anyone thinks there's, say, a gene for Christianity or the like. And while self-identified sexual orientation can change, that's a substantially less common occurrence than changes in one's religious self-identification. For example, in the US about half of all adults have changed their religion at least once in their lives. See here. Some of those people are people changing from one version of Protestantism to another, but even when you take them out, one is still talking about around a third of the population. And this is fairly well known - people understand that religious beliefs can change: indeed, many major religions have specific rules about how to handle conversions, and many forms of major religions (especially in Christianity and Islam) have active commandments to go and convert people.

Replies from: army1987, passive_fist, Eugine_Nier
comment by A1987dM (army1987) · 2013-11-20T21:46:27.387Z · LW(p) · GW(p)

I don't think that anyone thinks there's, say, a gene for Christianity or the like.

Well, atheism is correlated with IQ and (ISTM) Openness to Experience, which are both somewhere around 50% heritable.

Look at what I found by googling for atheism twin studies!

Replies from: Lumifer
comment by Lumifer · 2013-11-20T22:10:25.385Z · LW(p) · GW(p)

Well, atheism is correlated with IQ and (ISTM) Openness to Experience, which are both somewhere around 50% heritable.

Yeah, well, Judaism is correlated with IQ as well :-D and I bet any non-mainstream religious beliefs will be correlated with Openness to Experience.

Replies from: army1987
comment by A1987dM (army1987) · 2013-12-15T10:03:16.213Z · LW(p) · GW(p)

Well, I guess Jews and Wiccans would be as reluctant to get baptised in order to get a job as atheists are -- possibly more so, insofar as I can easily imagine the latter thinking ‘if a lie is what you want, a lie is what you'll get’ (there was a comment somewhere on LW to that effect, but I can't find it).

Replies from: TheOtherDave
comment by TheOtherDave · 2013-12-15T15:43:13.620Z · LW(p) · GW(p)

Such a requirement would bother (to put it mildly) me much more as a Jew than it would as an atheist, FWIW.

Replies from: army1987
comment by A1987dM (army1987) · 2013-12-15T16:45:41.996Z · LW(p) · GW(p)

Yes, on re-reading my comment it sounds like I was trying to win an award for the Understatement of the Year.

comment by passive_fist · 2013-11-20T22:24:25.692Z · LW(p) · GW(p)

I'm not aware of any 'consensus' that sexual orientation has a 'large' genetic component. Some research suggests there is a genetic component, and it is known that there are developmental factors as well (i.e. what happens in the womb and afterwards) and also environmental factors. It is widely accepted that homosexuality is 'not a (conscious) choice' but that's most definitely not the same as saying "it's genetic".

Replies from: JoshuaZ
comment by JoshuaZ · 2013-11-20T23:38:17.826Z · LW(p) · GW(p)

See other discussion in this subthread for evidence that there's a large genetic component and that most papers in this context matter. I agree that there's likely developmental and environmental issues at play, as well as stochastic effects.

comment by Eugine_Nier · 2013-11-20T02:26:14.142Z · LW(p) · GW(p)

There's at this point a general consensus that sexual orientation, while culturally mediated, has a large genetic component.

And near as I can tell, the only things underlying this consensus are the "we can get it to show up on brain scans, therefore it must be genetic" fallacy and the accusation of being an EVIL HOMOPHOBE!!1!! hurled against anyone who questions it.

The fact that the kind of people responsible for these kinds of consensuses believe it is perfectly acceptable to promote falsehoods under the name of science for the "greater good" doesn't exactly increase my confidence in it.

Replies from: JoshuaZ, TheAncientGeek, army1987
comment by JoshuaZ · 2013-11-20T02:35:54.061Z · LW(p) · GW(p)

And near as I can tell the only things underlying this consensus are the "we can get it to show up on brain scans, therefore it must be genetic" fallacy and the accusation of being an EVIL HOMOPHOBE!!1!! hurled against anyone who questions it.

Twin studies show a strong genetic component. There's also (weak) evidence for epigenetic effects. See here. There's also circumstantial evidence of a link to female fertility.

The brain evidence is interesting, but it goes more to a lack of choice, rather than a genetic aspect. Of course, as with any complicated behavioral trait with some variation, it is likely that at least part of the trait is due to environmental effects at a young age, possibly including womb environment.

The fact that the kind of people responsible for these kinds of consensuses believe it is perfectly acceptable to promote falsehoods under the name of science for the "greater good" doesn't exactly increase my confidence in it.

And you are A) Linking to a discussion where you and I already discussed exactly this and why your response didn't make sense, and B) making a massive leap from some political groups presenting selective evidence to the conclusion that one should therefore doubt scientific studies. This does not follow.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-11-20T03:03:10.923Z · LW(p) · GW(p)

Twin studies show a strong genetic component.

The page you linked to doesn't exactly present evidence of a strong genetic component (or much of a consensus for that matter).

The brain evidence is interesting, but it goes more to a lack of choice, rather than a genetic aspect.

Except it's not evidence of lack of choice (unless you adopt a very specific kind of Cartesian dualism) either.

And you are A) Linking to a discussion where you and I already discussed exactly this and why your response didn't make sense,

Your reasoning was basically "everyone lies, thus some political group lying isn't evidence against their other claims". Not a particularly strong argument at the best of times, certainly not here where the situation is extremely analogous to the lie at the link (just substitute "transgender" for "homosexuality").

B) making a massive leap from some political groups presenting selective evidence to the conclusion that one should therefore doubt scientific studies.

It wasn't just presenting selective evidence, it was getting BS or outright lies declared part of the scientific consensus.

Replies from: gwern, JoshuaZ
comment by gwern · 2013-11-20T03:09:43.202Z · LW(p) · GW(p)

The page you linked to doesn't exactly present evidence of a strong genetic component (or much of a consensus for that matter).

Could you elaborate? For example, the estimate from the largest study on the page (3,826 pairs) seems pretty nontrivial: 39% of variance! That's not small or trivial by any means - that's getting up there with some twin estimates of intelligence. And especially since the shared-environment estimate is 0.

comment by JoshuaZ · 2013-11-20T03:16:11.593Z · LW(p) · GW(p)

The page you linked to doesn't exactly present evidence of a strong genetic component (or much of a consensus for that matter).

Of the studies in question, the only one listed which showed a weak correlation is Bearman and Brückner, which still showed a 7.7% concordance rate for male homosexuality. The female rate is lower, which shouldn't be that surprising, given that the other studies included also showed lower concordance rates for females, and that there's other evidence for female sexuality being more malleable than male sexuality. Moreover, the largest study in question, the Swedish study, used literally every single pair of twins born in the country, which makes for a much larger study and handles the selection bias problems. And that study agreed with the results of the other studies excepting Bearman and Brückner.

The brain evidence is interesting, but it goes more to a lack of choice, rather than a genetic aspect.

Except it's not evidence of lack of choice (unless you adopt a very specific kind of Cartesian dualism) either.

What does this have to do with Cartesian dualism at all? For that matter, how is this at all relevant, since as I stated earlier, the matter under discussion is what is descriptively thought of as a choice or not by members of society. The entire point was discussing why someone would have a different reaction to the homosexuality discrimination issue than the atheism discrimination issue. In that context, colloquial intuitive notions of choice are relevant.

Your reasoning was basically "everyone lies, thus some political group lying isn't evidence against their other claims". Not a particularly strong argument at the best of times, certainly not here where the situation is extremely analogous to the lie under discussion (just substitute "transgender" for "homosexuality").

That isn't what the argument is. Please reread that discussion.

It wasn't just presenting selective evidence, it was getting BS or outright lies declared part of the scientific consensus.

Really? Did you see any evidence in that discussion that any aspect of this has had any influence on the scientific studies in question? Have they somehow managed to fake data and get it through? Do you think they've had papers get rejected? The only plausible sort of situation is selective granting, which requires specific aspects of the LGBTQE movement (not even generically but people who would agree with this lying idea, which the vast majority would not) to have somehow gotten positions in grant giving institutions. Do you have any evidence for that?

Replies from: Douglas_Knight, Eugine_Nier
comment by Douglas_Knight · 2013-11-20T22:55:46.227Z · LW(p) · GW(p)

which still showed a 7.7% concordance rate for male homosexuality

In isolation, this number tells us nothing. What is important is the gap between concordance rates for MZ, DZ, and siblings. There are several popular hypotheses that predict a large effect of prenatal environment. In fact, the particular paper gives 7.7% MZ, 4.2% DZ, and 4.5% siblings, so it detects no effect of prenatal environment and does suggest genetics.
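
For readers wondering why the MZ/DZ gap is the quantity of interest, a standard rough decomposition is Falconer's formula; note it applies to twin correlations, not raw concordance rates, which would first have to be converted (e.g. to tetrachoric correlations):

```latex
% Falconer's approximation: heritability and shared environment from
% MZ and DZ twin correlations.
h^2 \approx 2\,(r_{MZ} - r_{DZ}), \qquad c^2 \approx 2\,r_{DZ} - r_{MZ}
```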

comment by Eugine_Nier · 2013-11-21T02:34:30.031Z · LW(p) · GW(p)

The brain evidence is interesting, but it goes more to a lack of choice, rather than a genetic aspect.

Except it's not evidence of lack of choice (unless you adopt a very specific kind of Cartesian dualism) either.

What does this have to do with Cartesian dualism at all?

Unless you accept some form of Cartesian dualism you should expect every aspect of the mind to show up on sufficiently advanced brain scans, whether or not it is a choice. Thus, the brain evidence isn't particularly interesting.

For that matter, how is this at all relevant, since as I stated earlier, the matter under discussion is what is descriptively thought of as a choice or not by members of society.

And my point is that this perception isn't grounded in anything objective and thus itself needs an explanation.

Your reasoning was basically "everyone lies, thus some political group lying isn't evidence against their other claims". Not a particularly strong argument at the best of times, certainly not here where the situation is extremely analogous to the lie under discussion (just substitute "transgender" for "homosexuality").

That isn't what the argument is. Please reread that discussion.

I have, twice in the last few days in fact. Why don't you try rereading it?

I'm going to give you the benefit of the doubt and assume I misunderstood your argument there; if so, could you state it more explicitly?

Really? Did you see any evidence in that discussion that any aspect of this has had any influence on the scientific studies in question?

Well Julian practically admitted it.

Have they somehow managed to fake data and get it through? Do you think they've had papers get rejected?

Sometimes; more generally, pressure is applied to people who produce politically incorrect results. Look at what happened to Mark Regnerus or Jason Richwine, or, to use a more famous example, James Watson.

Replies from: JoshuaZ
comment by JoshuaZ · 2013-11-21T02:45:02.679Z · LW(p) · GW(p)

Eugine, within 2 minutes of your comment above, I received a block downvote to all my recent comments regardless of subject. Given this thread, which I presume you must have already seen, my interest in whether this was your action should be clear. If you are attempting to simply ignore the issues, you may want to be aware that this is causing serious concerns and problems within the community, and at least one LW user has considered implementing a response of block-downvoting all your comments and those of people with views similar to your own. This situation is creating a mindkilling political minefield and it is in your and everyone else's best interest for it to be headed off before the situation deteriorates. Therefore a straight answer from you as to whether you are involved in this would be both appreciated and in the best interest of the community.

If you think you can get away with ignoring ialdabaoth's requests because of the user's relatively low status, you should be aware that my own status on LW, while not very high, is likely not so low that simply ignoring me will be a remotely productive response.

You have made some valid points above, but I cannot given recent events discuss them or any other issue with you, until you address the community's concerns here.

Replies from: Vaniver, Dorikka
comment by Vaniver · 2013-11-21T05:53:28.241Z · LW(p) · GW(p)

I would like to loudly add that I am very interested in the norms of polite discourse being followed on LW, and am throwing whatever status I have behind this request. Furthermore, if this turns out to be the sort of thing that is best resolved in PMs rather than comments, I am willing to facilitate conversations that way.

comment by Dorikka · 2013-11-21T06:00:45.315Z · LW(p) · GW(p)

Attempted to reverse the effects of the block downvote. Note that any permanent solution to this problem cannot rely on the co-operation of possible defectors. If there is not a method for detecting such defections, and preferably determining the source, it continues to be a viable method of asymmetric warfare.

Also might be a decent idea for someone to take formal responsibility for moderating the website. Several people have moderator powers, but I do not know of anyone who is actually responsible for such moderation. (It seems that this would fall to Eliezer by default, but this does not seem to be Eliezer's comparative advantage.)

ETA: A designated moderator is necessary even with a system such as I described in place, since the automated detection can screw up. A human overseer spending, say, an hour on this each week may be greatly productive if they have the appropriate tools available.

comment by TheAncientGeek · 2013-11-20T10:48:47.685Z · LW(p) · GW(p)

I didn't realise that "falsehood" meant "not necessarily true". I also remain unclear why a statement as guarded as "believed to have a genetic component" would need to be qualified with a further "maybe".

comment by Eugine_Nier · 2013-11-13T04:07:33.955Z · LW(p) · GW(p)

So would you oppose discrimination against wheelchair-bound construction workers?

comment by TheAncientGeek · 2013-11-05T17:49:13.034Z · LW(p) · GW(p)

Oh dear. Whoever wrote the WP article I was quoting didn't steelman their definition.

Replies from: Jiro
comment by Jiro · 2013-11-05T20:50:21.145Z · LW(p) · GW(p)

Wikipedia is supposed to use what's in the sources. They're not allowed to steelman.

It may just mean "group or category which I like", but I wouldn't count that as steelmanning.

The best I can come up with is "group or category which has, in the past, often been subject to inaccurately negative judgment based on inaccurate priors". In fact, let's try that one.

comment by ChristianKl · 2013-11-03T05:15:06.124Z · LW(p) · GW(p)

First, sorry for the typo.

That's close enough to not affect his point, or even the order. I think you're engaging in motivated continuing to avoid having to acknowledge conclusions you find uncomfortable.

Claiming four significant figures means claiming that the uncertainty about the difference is more than ten times smaller than it actually is.

Understanding the uncertainty that exists is vital for reasoning effectively about what's true.
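
A rough sketch of the arithmetic here (Python; the standard deviation and sample size are assumed, illustrative values, not figures from the article):

```python
# With a population SD of ~90 cc and a sample of n = 100 skulls, the
# standard error of the mean is ~9 cc, so reporting a mean of "1362"
# to four significant figures implies roughly ten times more precision
# than such data would support.
sd, n = 90.0, 100
sem = sd / n ** 0.5  # standard error of the mean
print(f"standard error of the mean: {sem:.1f} cc")  # 9.0 cc
```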

Do you also apply the same criticism to the (much larger number of) people who make (much larger) errors in the direction of no difference?

Different people have different goals. If your goal is the search for truth, then it matters greatly whether what you are saying is true.

If your goal is to spread memes that produce social change, then it makes sense to use different criteria.

What does discrimination mean? If a job application with a name that's common among black people gets rejected while an identical one with a name that's common among white people gets accepted, that would be an example of bad discrimination.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-11-03T05:19:34.330Z · LW(p) · GW(p)

If a job application with a name that's common among black people gets rejected while an identical one with a name that's common among white people gets accepted, that would be an example of bad discrimination.

Does it matter if having said name is in fact correlated with job performance?

Replies from: army1987, ChristianKl
comment by A1987dM (army1987) · 2013-11-04T09:46:12.300Z · LW(p) · GW(p)

Only if it's still correlated when you control for anything else on the CV and cover letter, incl. the fact that the candidate is not currently employed by anyone else.

comment by ChristianKl · 2013-11-03T08:26:44.296Z · LW(p) · GW(p)

Does it matter if having said name is in fact correlated with job performance?

Being correlated isn't very valuable in itself. Even if you do believe that blacks on average have a lower IQ, scores on standardized tests tell you a lot more about someone's IQ.

The question would be whether the name is a better predictor of job performance than grades for distinguishing people in the population of people who apply, or whether the information that comes from the names adds additional predictive value.

But even if various proxies of social status would perform well as predictors, I still value high social mobility. Policies that increase it might not be in the interest of the particular employer but of interest to society as a whole.

Replies from: Vaniver
comment by Vaniver · 2013-11-03T15:56:35.978Z · LW(p) · GW(p)

The question would be whether the name is a better predictor of job performance than grades for distinguishing people in the population of people who apply, or whether the information that comes from the names adds additional predictive value.

Emphasis mine. I don't think this is the question at all, because you also have the grade information; the only question is if grades screen off evidence from names, which is your second option. It seems to me that the odds that the name provides no additional information are very low.

To the best of my knowledge, no studies have been done which submit applications where the obviously black names have higher qualifications in an attempt to determine how many GPA points an obviously black name costs an applicant. (Such an experiment seems much more difficult to carry out, and doesn't have the same media appeal.)

Replies from: TheOtherDave, ChristianKl
comment by TheOtherDave · 2013-11-03T17:00:36.382Z · LW(p) · GW(p)

So, this "only question" formulation is a little awkward and I'm not really sure what it means. For my part I endorse correctly using (grades + name) as evidence, and I doubt that doing so is at all common when it comes to socially marked names... that is, I expect that most people evaluate each source of information in isolation, failing to consider to what extent they actually overlap (aka, screen one another off).

Replies from: Vaniver
comment by Vaniver · 2013-11-03T17:57:00.195Z · LW(p) · GW(p)

So, this "only question" formulation is a little awkward and I'm not really sure what it means.

ChristianKl brought up the proposition "(name)>(grades)" where > means that the prediction accuracy is higher, but the truth or falsity of that proposition is irrelevant to whether or not it's epistemically legitimate to include name in a decision, which is determined by "(name+grades)>(grades)".
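
A toy sketch of that screening-off test (Python; the joint counts are entirely made up for illustration): grades screen off names only if conditioning on the name no longer shifts the estimate once grades are known.

```python
# Hypothetical joint counts over (performance, grades, name group).
counts = {
    ("good", "high", "A"): 40, ("good", "high", "B"): 20,
    ("poor", "high", "A"): 10, ("poor", "high", "B"): 10,
    ("good", "low",  "A"): 10, ("good", "low",  "B"): 5,
    ("poor", "low",  "A"): 20, ("poor", "low",  "B"): 25,
}

def p_good(grades=None, name=None):
    """P(performance = 'good' | the given evidence), by enumeration."""
    def ok(key):
        _, g, n = key
        return (grades is None or g == grades) and (name is None or n == name)
    total = sum(c for k, c in counts.items() if ok(k))
    good = sum(c for k, c in counts.items() if ok(k) and k[0] == "good")
    return good / total

print(p_good(grades="high"))            # 0.75
print(p_good(grades="high", name="A"))  # 0.80 -- the name still shifts the
print(p_good(grades="high", name="B"))  # 0.67 -- estimate, so no screening off
```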

I doubt that doing so is at all common when it comes to socially marked names

Doing things correctly is, in general, uncommon. But the shift implied by moving from 'current' to 'correct' is not always obvious. For example, both nonsmokers and smokers overestimate the health costs of smoking, which suggests that if their estimates became more accurate, we might see more smokers, not less. It's possible that hiring departments are actually less biased against people with obviously black names than they should be.

Replies from: TheOtherDave, NancyLebovitz, ChristianKl
comment by TheOtherDave · 2013-11-03T19:53:52.406Z · LW(p) · GW(p)

if their estimates became more accurate, we might see more smokers, not less

...insofar as their current and future estimates of health costs are well calibrated with their actual smoking behavior, at least. Sure.

It's possible that hiring departments are actually less biased against people with obviously black names than they should be.

Well, it's odd to use "bias" to describe using observations as evidence in ways that reliably allow more accurate predictions, but leaving the language aside, yes, I agree that it's possible that hiring departments are not weighting names as much as they should be for maximum accuracy in isolation... in other words, that names are more reliable evidence than they are given credit for being.

That said, if I'm right that there is a significant overlap between the actual information provided by grades and by names, then evaluating each source of information in isolation without considering the overlap is nevertheless a significant error.

Now, it might be that the evidential weight of names is so great that the error due to not granting it enough weight overshadows the error due to double-counting, and it may be that the signs are such that double-counting leads to more accurate results than not double-couting. Here again, I agree that this is possible.

But even if that's true, continuing to erroneously double-count in the hopes that our errors keep cancelling each other out isn't as reliable a long-term strategy as starting to correctly use all the evidence we have.

Replies from: Vaniver
comment by Vaniver · 2013-11-04T01:15:03.458Z · LW(p) · GW(p)

That said, if I'm right that there is a significant overlap between the actual information provided by grades and by names, then evaluating each source of information in isolation without considering the overlap is nevertheless a significant error.

Agreed. Any sort of decision process which uses multiple pieces of information should be calibrated on all of those pieces of information together whenever possible.

comment by NancyLebovitz · 2013-11-04T13:32:46.794Z · LW(p) · GW(p)

It's even possible that if the costs of smoking are overestimated, more people should be smoking-- part of the campaign against smoking is to underestimate the pleasures and social benefits of smoking.

comment by ChristianKl · 2013-11-03T23:41:52.052Z · LW(p) · GW(p)

For example, both nonsmokers and smokers overestimate the health costs of smoking, which suggests that if their estimates became more accurate, we might see more smokers, not less.

That in no way implies that it would be a good choice for people to smoke more. People don't make those decisions through rational analysis.

comment by ChristianKl · 2013-11-03T22:56:47.225Z · LW(p) · GW(p)

Emphasis mine. I don't think this is the question at all, because you also have the grade information; the only question is if grades screen off evidence from names, which is your second option. It seems to me that the odds that the name provides no additional information are very low.

If you combine a low-noise signal with a high-noise signal, the combined signal can be of medium noise. Combining information isn't always useful if you want to use both signals as proxies for the same thing.

For combining information in such a way you would have to believe that the average black person with an IQ of 120 will get a higher GPA than the average white person of the same IQ.

I think there is little reason to believe that's true.

Without actually running a factor analysis on the outcomes of hiring decisions, it will be very difficult to know in which direction it would correct the decision.

Even if you do run a factor analysis, integrating additional variables costs you degrees of freedom, so it is not always a good choice to integrate as many variables as possible into your model. Simple models often outperform more complicated ones.

Humans are also not good at combining multiple sources of information.
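A toy illustration of the degrees-of-freedom point, with entirely made-up data: adding pure-noise variables to a small-sample regression keeps improving the in-sample fit while the out-of-sample error gets worse.

```python
# Sketch: extra variables eat degrees of freedom (synthetic data)
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_train, n_test = 50, 1000
x = rng.normal(size=(n_train + n_test, 1))
y = 2 * x[:, 0] + rng.normal(size=n_train + n_test)  # true model uses one variable

for k_noise in [0, 10, 40]:
    junk = rng.normal(size=(n_train + n_test, k_noise))  # pure-noise predictors
    X = np.hstack([x, junk])
    model = LinearRegression().fit(X[:n_train], y[:n_train])
    train_mse = np.mean((model.predict(X[:n_train]) - y[:n_train]) ** 2)
    test_mse = np.mean((model.predict(X[n_train:]) - y[n_train:]) ** 2)
    print(f"{k_noise:2d} junk variables: train MSE {train_mse:.2f}, test MSE {test_mse:.2f}")
# Train MSE falls as junk variables are added; test MSE rises.
```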

Replies from: Vaniver, Moss_Piglet
comment by Vaniver · 2013-11-04T02:26:17.982Z · LW(p) · GW(p)

If you combine a low-noise signal with a high-noise signal, the combined signal can be of medium noise. Combining information isn't always useful if you want to use both signals as proxies for the same thing.

Agreed that if you have P(A|B) and P(A|C), then you don't have enough to get P(A|BC).

But if you have the right objects and they're well-calibrated, then adding in a new measurement always improves your estimate. (You might not be sure that they're well-calibrated, in which case it might make sense to not include them, and that can obviously include trying to estimate P(A|BC) from P(A|C) and P(A|B).)
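A numerical sketch of both halves of this exchange (the noise levels below are invented): naively averaging a low-noise and a high-noise measurement does give "medium" noise, while weighting each measurement by its precision does at least as well as the best single measurement.

```python
# Sketch: combining a low-noise and a high-noise measurement (synthetic)
import numpy as np

rng = np.random.default_rng(2)
truth = 10.0
sd_a, sd_b = 1.0, 5.0  # hypothetical noise levels of the two signals
a = truth + sd_a * rng.normal(size=100_000)
b = truth + sd_b * rng.normal(size=100_000)

naive = (a + b) / 2                     # ignores calibration
w_a, w_b = 1 / sd_a**2, 1 / sd_b**2     # precision weights
weighted = (w_a * a + w_b * b) / (w_a + w_b)

for name, est in [("a alone", a), ("naive mean", naive), ("precision-weighted", weighted)]:
    print(f"{name:20s} RMSE {np.sqrt(np.mean((est - truth) ** 2)):.3f}")
# a alone ~1.00, naive mean ~2.55, precision-weighted ~0.98
```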

For combining information in such a way you would have to believe that the average black person with an IQ of 120 will get a higher GPA than the average white person of the same IQ.

Not quite. Regression to the mean implies that you should apply shrinkage which is as specific as possible, but this shrinkage should obviously be applied to all applicants. (Regressing black scores to the mean, and not regressing white scores, for example, is obviously epistemic malfeasance, but regressing black scores to the black mean and white scores to the white mean makes sense, even if the IQ-grades relationship is the same for blacks and whites.)
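A minimal sketch of group-specific shrinkage (all numbers below are hypothetical): each observed score is pulled toward its own group's mean, with the amount of pull set by the measurement's reliability, and the same rule is applied to every applicant.

```python
# Sketch: shrink each observed score toward its own group mean (invented numbers)
def shrink(observed, group_mean, reliability):
    """Classical shrinkage estimate; reliability = var(true) / var(observed)."""
    return group_mean + reliability * (observed - group_mean)

# Same observed score, same reliability, different group means:
print(shrink(120, group_mean=100, reliability=0.8))  # 116.0
print(shrink(120, group_mean=105, reliability=0.8))  # 117.0
# The best guesses differ because the priors differ, but nobody is
# singled out: the identical rule runs for every applicant.
```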

It could also be that the GPA-job performance link is different for whites and blacks, even if the IQ-GPA link is the same for whites and blacks. (And, of course, race could impact job performance directly, but it seems likely the effects should be indirect for almost all jobs.)

I think there is little reason to believe that's true.

If you're just comparing GPAs, rather than GPAs weighted by course difficulty, there could be a systematic difference in the difficulty of classes that applicants take by race. I've had a hard time getting numerical data on this, for obvious reasons, but there are rumors that some institutions may have a grade bias in favor of blacks. (Obviously, you can't fit a parameter to a rumor, but this is reason to not discount an effect that you do see in your data.)

Simple models often outperform more complicated ones.

Yes, but... motivated cognition alert. If you're building models correctly, you take this into account by default, and so there's no point in bringing it up for any particular input because you should already be checking it for every input.

comment by Moss_Piglet · 2013-11-03T23:48:44.493Z · LW(p) · GW(p)

For combining information in such a way you would have to believe that the average black person with an IQ of 120 will get a higher GPA than the average white person.

I think there is little reason to believe that's true.

Could you explain your reasoning here?

IQ is a strong predictor of academic performance, and a 1.5 sd gap is a fairly significant difference. The only thing I could think of to counterbalance it so that the average white would get a higher GPA would be through fairly severe racial biases in grading policies in their favor, which seems at odds with the legally-enforced racial biases in admissions / graduation operating in the opposite direction. Not to mention that black African immigrants, legal ones anyway, seem to be the prototype of high-IQ blacks who outperform average whites.

I am a little puzzled by the claim, which leads me to believe I've misunderstood you somehow or overlooked something fairly important.

Replies from: ChristianKl
comment by ChristianKl · 2013-11-03T23:53:41.495Z · LW(p) · GW(p)

I missed the qualification of speaking of whites with the same IQ. I added it via an edit.

Replies from: Moss_Piglet
comment by Moss_Piglet · 2013-11-03T23:58:20.775Z · LW(p) · GW(p)

Right, okay. I did misunderstand you. I'll correct my comment as soon as I figure out the strikethrough function here.

Replies from: Vaniver, Eugine_Nier
comment by Vaniver · 2013-11-04T01:16:55.445Z · LW(p) · GW(p)

I believe the primary way to get strikethrough is to strikethrough the entire comment, by retracting it.

comment by Eugine_Nier · 2013-11-04T03:55:40.834Z · LW(p) · GW(p)

You can use unicode.

Replies from: army1987
comment by A1987dM (army1987) · 2013-11-04T09:53:40.849Z · LW(p) · GW(p)

I'd recommend Vaniver's solution instead -- IME Android phones don't like yours.

comment by Randaly · 2013-11-03T08:47:44.453Z · LW(p) · GW(p)

Source is here. SD for Asians and Europeans is 35; SD for Africans is 85. N=20,000.

What's wrong here? 4 degrees of accuracy for brain size and no error bars? That's a sign of someone being either intentionally or unintentionally dishonest.

...no? Why in the world would he present error bars? The numbers are in line with other studies, without massive uncertainty, and irrelevant to his actual, stated and quoted, point.

Replies from: ChristianKl
comment by ChristianKl · 2013-11-03T10:52:08.530Z · LW(p) · GW(p)

His stated point is about telling things that everybody is supposed to know.

If you have an SD of 35 for an average of 1362 you have no idea about whether the last digit should be a 2. That means you either state an error interval or you round to 1360.

Human height changed quite a bit over the last century: http://www.voxeu.org/article/reaching-new-heights-how-have-europeans-grown-so-tall . Taking data about human brain size with four-digit accuracy and assuming that it hasn't changed over the last 30 years is wrong.

Europeans gained a lot of body mass over the last 100 years due to better nutrition. Claiming the figure is static to four digits, in a way that lets you use 30-year-old data to describe today's situation, gives the impression that human brain size is relatively fixed.

The difference in brain size between Africans and Europeans in that study is roughly comparable, in relative terms, to the difference in height between today's Europeans and Europeans 100 years ago.

Given that background, taking a three-decade-old average from one sample population and presenting it as today's average with four-digit accuracy is wrong.

Replies from: Randaly, dspeyer
comment by Randaly · 2013-11-03T11:09:46.411Z · LW(p) · GW(p)

His stated point is about telling things that everybody is supposed to know.

No, that was absolutely not his point. I don't understand how you could have come away thinking that: literally the entire next paragraph directly stated the exact opposite:

Graduate students in anthropology generally don’t know those facts about average brain volume in different populations. Some of those students stumbled onto claims about such differences and emailed a physical anthropologist I know, asking if those differences really exist. He tells them ‘yep’ – I’m not sure what happens next. Most likely they keep their mouths shut. Ain’t it great, living in a free country?

More generally, that was not a tightly reasoned book/paper about brain size. That line was a throwaway point in support of a minor example ("For example, average brain size is not the same in all human populations") on a short blog post. Arguments about the number of significant figures presented, when you don't even disagree about the overall example or the conclusion, are about as good an example of bad disagreement as I can imagine.

Replies from: ChristianKl
comment by ChristianKl · 2013-11-03T21:51:30.327Z · LW(p) · GW(p)

No, that was absolutely not his point. I don't understand how you could have come away thinking that: literally the entire next paragraph directly stated the exact opposite:

I don't think that the following classes are the same:
(1) Facts everyone should know.
(2) Facts everyone knows.

I think the author claims that this is a (1) fact but not a (2) fact.

Replies from: Randaly
comment by Randaly · 2013-11-04T02:38:33.555Z · LW(p) · GW(p)

His claim was:

(a) Everybody knew that different ethnicities had different brain sizes.
(b) It was an uncomfortable fact, so nobody talked about it.
(c) Now nobody knows that different ethnicities have different brain sizes.

comment by dspeyer · 2013-11-04T05:43:41.121Z · LW(p) · GW(p)

If you have an SD of 35 for an average of 1362 you have no idea about whether the last digit should be a 2. That means you either state an error interval or you round to 1360.

If individual datapoints have an SD of 35, and you have 20000 datapoints, then the standard error of the mean in studies like this is 35/sqrt(20000) ≈ 0.25. So giving a ones digit for the average is perfectly reasonable.

Replies from: ChristianKl, army1987
comment by ChristianKl · 2013-11-04T09:10:16.202Z · LW(p) · GW(p)

According to the paper, the total mean brain size for males is 1,427 while for females it's 1,272. Given around half women and half men, the per-datapoint SD should be higher than 35.
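Spelling that arithmetic out (assuming the quoted SD of 35 is within-sex and the sexes are equally represented, which the paper may or may not satisfy): the SD of the 50/50 mixture comes out near 85, yet even then the standard error of the mean stays well under 1, so a ones digit in the average remains defensible.

```python
# Sketch: SD of a 50/50 mixture of two groups (means from the quoted paper)
import math

mean_m, mean_f, within_sd, n = 1427, 1272, 35, 20000
between_var = ((mean_m - mean_f) / 2) ** 2           # 77.5**2
mixture_sd = math.sqrt(within_sd**2 + between_var)   # ~85.0
sem = mixture_sd / math.sqrt(n)                      # ~0.60
print(f"mixture SD ~ {mixture_sd:.1f}, standard error of the mean ~ {sem:.2f}")
```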

comment by A1987dM (army1987) · 2013-11-04T09:41:29.318Z · LW(p) · GW(p)

(Assuming the sample is unbiased.)

comment by A1987dM (army1987) · 2013-11-03T15:23:10.147Z · LW(p) · GW(p)

4 degrees of accuracy for brain size and no error bars?

Well, he did say “about”.

comment by elharo · 2013-11-11T19:41:03.799Z · LW(p) · GW(p)

Most people think that the negotiation is about substance: I’m a financial expert, I’m a medical doctor, I’m an environmental lawyer, I’m an energy expert, I’m a mechanic. But studies show that less than 10 percent of the reason why people reach agreement has anything to do with the substance. More than 50 percent has to do with the people—do they like each other, do they trust each other, will they hear what each other has to say? Just over a third has to do with the process they use. That is, do they decide to explore each other’s needs (rational and emotional)? Do they agree on an agenda? Do they make genuine commitments to each other?

If you believe that negotiations are about the substantive issues, sadly, you will be right more than you are persuasive. That means that the truth, the facts, are only one argument in a negotiation. The people and the process are much more important. This is particularly hard for people who are focused on the substance—doctors, engineers, financial experts—to accept. But, based on research, it is true. You can’t even use substantive issues to persuade effectively unless and until the other party is ready to hear about them.

--Stuart Diamond, Getting More, 2010, pp. 51-52

Replies from: Mestroyer, NancyLebovitz, Bugmaster, Eugine_Nier
comment by Mestroyer · 2013-11-11T21:43:38.034Z · LW(p) · GW(p)

I was once at a meetup, and there were some people there new to LessWrong. After listening to a philosophical argument between two long-time meetup group members, where they agreed on a conclusion that was somewhere between their original positions, a newcomer said "sounds like a good compromise," to which one of the old-comers (?) said "but that has nothing to do with whether it's true... in fact now that you point that out I'm suspicious of it."

Later in the meetup, an argument ended with another conclusion that sounded like a compromise. I pointed it out. One of the arguers was horrified to agree with me that compromising was exactly what he was doing.

Is this actually a failure mode though, if you only "compromise" with people you respect intellectually? In retrospect, this sounds kind of like an approximation to Aumann agreement.

Replies from: DanArmak, ChristianKl, Lumifer
comment by DanArmak · 2013-11-15T15:57:29.092Z · LW(p) · GW(p)

Is this actually a failure mode though, if you only "compromise" with people you respect intellectually? In retrospect, this sounds kind of like an approximation to Aumann agreement.

Each side should update on the other's arguments and data, and on the fact that the other side believes what it does (insofar as we can't perfectly trust our own reasoning process). This often means they update towards the other's position. But it certainly doesn't mean they're going to update so much as to agree on a common position.

You don't need to try to approximate Aumann agreement, because you don't believe that either you or the other party is perfectly rational, so you can't treat your own or the other's beliefs as having that kind of weight.

Also, people who start out looking for a compromise might be led to compromise in a bad way: A's theory predicts the ball will fall down, B's theory predicts the ball will fall up, and the compromise theory predicts it will stay in place, even though both A and B have evidence against that.

comment by ChristianKl · 2013-11-12T15:21:17.210Z · LW(p) · GW(p)

Is this actually a failure mode though, if you only "compromise" with people you respect intellectually?

Part of intellectual debate is that you judge arguments on their merits instead of negotiating what's true. Compromising suggests that you are involved in a negotiation over what's true instead of searching for the real truth.

Replies from: Mestroyer
comment by Mestroyer · 2013-11-12T18:17:01.470Z · LW(p) · GW(p)

It doesn't matter what it sounds like you are doing or what you think you are doing. One thing matters: how good is your actual answer?

Replies from: ChristianKl
comment by ChristianKl · 2013-11-12T19:38:24.160Z · LW(p) · GW(p)

Yes, but if you follow a crappy reasoning process, you're less likely to end up with a high-quality answer than if you had followed a good one.

comment by Lumifer · 2013-11-11T22:05:21.823Z · LW(p) · GW(p)

Theoretically, if you treat your own previous position as a prior and the other guy's arguments as some evidence, the standard updating will lead you to have a new position somewhere in between which will look like a compromise.

Obviously there are a lot of caveats -- e.g. the assumption that an intermediate position makes sense (that is, the two positions are on some kind of continuous axis), etc.
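A minimal Gaussian version of this (the numbers below are invented): treat your position as a normal prior and the other side's case as a noisy observation; the posterior mean is a precision-weighted average, so it lands between the two.

```python
# Sketch: Bayesian updating lands "between" prior and evidence (Gaussian case)
def posterior(prior_mean, prior_sd, obs, obs_sd):
    w_prior, w_obs = 1 / prior_sd**2, 1 / obs_sd**2
    mean = (w_prior * prior_mean + w_obs * obs) / (w_prior + w_obs)
    sd = (w_prior + w_obs) ** -0.5
    return mean, sd

# My position: 20. Their argument supports 60 but is noisier than my prior.
print(posterior(prior_mean=20, prior_sd=5, obs=60, obs_sd=10))  # (28.0, ~4.47)
```

As the parenthetical caveat says, this picture only applies when the positions sit on a continuous axis.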

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2013-11-12T00:17:49.074Z · LW(p) · GW(p)

Theoretically, if you treat your own previous position as a prior and the other guy's arguments as some evidence, the standard updating will lead you to have a new position somewhere in between

Not to the extent your current position already takes those arguments into account (in which case the arguments fail to address any disagreement). More than that, by conservation of expected evidence some arguments should change your mind in the opposite direction from what they are intended to argue.

comment by NancyLebovitz · 2013-11-12T12:49:09.632Z · LW(p) · GW(p)

You can’t even use substantive issues to persuade effectively unless and until the other party is ready to hear about them.

Just underlining a bit I especially like.

comment by Bugmaster · 2013-11-14T02:24:23.299Z · LW(p) · GW(p)

In my experience, bringing substance into the conversation greatly reduces your chances of convincing anyone of anything. If your goal is not to seek the truth, but to actually convince someone, it's best to stay away from substance and to reach directly for their emotional levers.

comment by Eugine_Nier · 2013-11-12T01:25:53.995Z · LW(p) · GW(p)

do they like each other, do they trust each other, will they hear what each other has to say? Just over a third has to do with the process they use. That is, do they decide to explore each other’s needs (rational and emotional)? Do they agree on an agenda? Do they make genuine commitments to each other?

I don't think this is a failure of rationality: in disagreements about facts you have to trust the other person to not lie, in a negotiation you have to trust the other person to keep his end of the bargain.

comment by Ben Pace (Benito) · 2013-11-02T10:03:40.853Z · LW(p) · GW(p)

On not doing the impossible:

Ferrucci says. “We constantly underestimate—we did in the ’50s about AI, and we’re still doing it—what is really going on in the human brain.”

The question that [Douglas] Hofstadter wants to ask Ferrucci, and everybody else in mainstream AI, is this: Then why don’t you come study it?

...

Peter Norvig, one of Google’s directors of research, echoes Ferrucci almost exactly. “I thought he was tackling a really hard problem,” he told me about Hofstadter’s work. “And I guess I wanted to do an easier problem.”

-Article at The Atlantic

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-11-02T21:11:58.362Z · LW(p) · GW(p)

Think big. Then think bigger than that. Don’t stop dreaming until you change the entire world. If you’re thinking about things that already exist, you’re doing it wrong. Never settle for good enough.

Will Burns

comment by roland · 2013-11-10T18:12:07.915Z · LW(p) · GW(p)

If you want something new, you have to stop doing something old.

-- Peter Drucker

comment by Jayson_Virissimo · 2013-11-06T06:04:44.647Z · LW(p) · GW(p)

I know that what I see through the microscope is veridical because we made the grid to be just that way. I know that the process of manufacture is reliable, because we can check the results with the microscope. Moreover we can check the results with any kind of microscope, using any of a dozen unrelated physical processes to produce an image. Can we entertain the possibility that, all the same, this is some gigantic coincidence?

-- Ian Hacking, Images of Science: Essays on Realism and Empiricism

Replies from: AlexSchell
comment by AlexSchell · 2013-11-23T19:51:16.104Z · LW(p) · GW(p)

That is (ETA: also) from Hacking's Representing and Intervening.

comment by rule_and_line · 2013-11-06T02:39:09.048Z · LW(p) · GW(p)

The idea that a self-imposed external constraint on action can actually enhance our freedom by releasing us from predictable and undesirable internal constraints is not an obvious one. It is hard to be Ulysses.

-- Reid Hastie & Robyn Dawes (Rational Choice in an Uncertain World)

The "Ulysses" reference is to the famous Ulysses pact in the Odyssey.

comment by [deleted] · 2013-11-13T14:30:23.820Z · LW(p) · GW(p)

I confess that there are several parts of this constitution which I do not at present approve, but I am not sure I shall never approve them: For having lived long, I have experienced many instances of being obliged by better information or fuller consideration, to change opinions even on important subjects, which I once thought right, but found to be otherwise. It is therefore that the older I grow, the more apt I am to doubt my own judgment, and to pay more respect to the judgment of others. Most men indeed as well as most sects in Religion, think themselves in possession of all truth, and that whereever others differ from them it is so far error. Steele, a Protestant in a Dedication tells the Pope, that the only difference between our Churches in their opinions of the certainty of their doctrines is, the Church of Rome is infallible and the Church of England is never in the wrong. But though many private persons think almost as highly of their own infallibility as of that of their sect, few express it so naturally as a certain french lady, who in a dispute with her sister, said "I don't know how it happens, Sister but I meet with no body but myself, that's always in the right"--"Il n'y a que moi qui a toujours raison."

--Benjamin Franklin

comment by Shmi (shminux) · 2013-11-08T15:56:07.455Z · LW(p) · GW(p)

Natural selection is a tinkerer, not an idiot!

SMBC comics on the relative proximity of excretory and reproductive outlets in humans.

Replies from: Cyan, Roxolan, Swimmer963, joaolkf
comment by Cyan · 2013-11-08T18:01:44.526Z · LW(p) · GW(p)

Evo-devo (that is to say, actual real science) gives an even better account of that accident of evolutionary history. For simple sessile animals, reproduction often involves dumping quantities of spores or gametes into the environment. And what other system already dumps quantities of stuff into the environment...?

comment by Roxolan · 2013-11-09T09:26:57.410Z · LW(p) · GW(p)

Who puts sanitation next to recreation? Well here's why your excretory organs should be separate from your other limbs and near the bottom of your body.

Okay, but why should the reproductive outlets be there too?

I agree connotationally, but the comic only answers half of the question.

Replies from: IlyaShpitser, hyporational
comment by IlyaShpitser · 2013-11-12T13:16:42.385Z · LW(p) · GW(p)

I am a fan of SMBC, but the entire explanation is wrong. The events that led to the integration of reproductive and digestive systems happened long before the terrestrial existence of vertebrates, and certainly long before hands. To get a start on a real explanation you have to go back to early bilaterians:

http://www.leeds.ac.uk/chb/lectures/anatomy9.html

As near as I can tell it was about pipe reuse. But you can't make a funny comic about that (or maybe you can?). Zach is a "bard", not a "wizard." He entertains.

comment by hyporational · 2013-11-12T03:36:18.460Z · LW(p) · GW(p)

Try carrying the fetus and giving birth from any other location. I suppose having the fun parts somewhere else than the reproductive dumping tube could be nice, but wouldn't make any sense.

Replies from: Articulator
comment by Articulator · 2013-11-12T05:06:19.981Z · LW(p) · GW(p)

Consider the chicken, with its ingenious production line of eggs. Constant fertilization from a different orifice seems ideal, as (the source I just Googled suggests that) chickens have very short fertilization cycles. (They don't have separate orifices. Poor cloacas.)

Since fertilization occurs at one end of a long tube, and birth occurs at the other, I wouldn't be surprised if the optimal arrangement involved separate organs.

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-11-09T02:33:04.602Z · LW(p) · GW(p)

Natural selection also led us to breathe and eat through the same hole. Seriously???? This causes so many problems. Well, not enough problems for natural selection to change it, I guess.

Replies from: None, hyporational, DanielLC
comment by [deleted] · 2013-11-09T06:21:44.877Z · LW(p) · GW(p)

Having two (three, technically) holes you can breathe through has its advantages. Ever had a nasty head cold that clogs your sinuses so bad you can't breathe?

Replies from: hyporational
comment by hyporational · 2013-11-09T11:59:09.207Z · LW(p) · GW(p)

You still have just one pharynx, though.

comment by hyporational · 2013-11-09T12:02:33.215Z · LW(p) · GW(p)

Being able to smell what you're chewing is a huge advantage. I suppose achieving that some other way could get pretty convoluted.

comment by DanielLC · 2013-11-12T02:51:43.215Z · LW(p) · GW(p)

I've read horses can only breathe through their noses.

Replies from: Desrtopa
comment by Desrtopa · 2013-11-12T04:03:59.554Z · LW(p) · GW(p)

I've never heard this, but I have read and just re-checked, and apparently whales and dolphins have separate passages for breathing (connected to their blowholes) and eating, and thus cannot choke on food.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-11-12T10:27:24.243Z · LW(p) · GW(p)

It's all a matter of which fish we evolved from and what solution evolution came up with, in the development from that water-breathing creature to air breathers. It could have produced separate air and foodways, as it did for whales and dolphins. But the blind idiot has no foresight and it can't get there from here any more.

Replies from: Desrtopa
comment by Desrtopa · 2013-11-12T23:22:56.932Z · LW(p) · GW(p)

Dolphins and whales have the same fish ancestors we do; they're former terrestrial mammals who returned to the sea and share the same common ancestors of all mammals.

comment by joaolkf · 2013-11-08T19:28:20.132Z · LW(p) · GW(p)

" a morally blind, fickle, and tightly shackled tinkerer" (1) who "should be in jail for child abuse and murder"(2)

(1) POWELL, Russell & BUCHANAN, Allen. "Breaking evolution's chains: the prospect of deliberate genetic modification in humans." In: SAVULESCU, J. & MEULEN, Rudd ter (orgs.) “Enhancing Human Capacities”. Wiley-Blackwell. 2011.

(2) BOSTROM, Nick. “In defense of posthuman dignity.” Bioethics, v. 19, n. 3, p. 202-214, 2005.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-11-12T12:45:44.521Z · LW(p) · GW(p)

There is no escape from evolution (variation and selection).

Deliberate genetic selection is just more complicated evolution.

Replies from: DanArmak, joaolkf
comment by DanArmak · 2013-11-15T16:08:53.804Z · LW(p) · GW(p)

There is no escape from evolution (variation and selection).

Sure there is. Organisms could, in theory, create perfect replicas without variation for selection to act on. Contrariwise, they could create new organisms depending on what they needed that would bear no relation to themselves and would not reproduce in kind (or at all).

If I could write an AI, the last thing I'd want is to make it reproduce with random variations. If I could genetically engineer myself or my children, I'd want to introduce deliberate changes and eliminate random ones. (Apart from some temporary exceptions like the random element in our current immune systems.)

I think you're overusing the term "evolution". If you let it include any kind of variation (deliberate design) and any kind of selection (deliberate intelligent selection), you can't make any predictions that would hold for all "evolving" systems.

Replies from: Vaniver
comment by Vaniver · 2013-11-15T19:58:28.829Z · LW(p) · GW(p)

Organisms could, in theory, create perfect replicas without variation

In which theory? I don't think this is true if temperatures are above absolute zero, for example.

I think you're overusing the term "evolution". If you let it include any kind of variation (deliberate design) and any kind of selection (deliberate intelligent selection), you can't make any predictions that would hold for all "evolving" systems.

I suspect that you're being too restrictive: it doesn't seem like variation has to be blind, and selection done by replication, for 'evolution' to be meaningful. Now, blind biological evolution and engineering design evolution will look different, but it seems reasonable to see an underlying connection between them.

Replies from: DanArmak
comment by DanArmak · 2013-11-15T22:02:49.954Z · LW(p) · GW(p)

In which theory? I don't think this is true if temperatures are above absolute zero, for example.

True, you can't create perfect physical copies or even keep a single object perfectly unchanged for long. But macro-scale systems designed to eliminate variance and not to let microscopic deviations affect their macro-scale behavior can, for practical purposes, be made unchanging. Especially given an intelligent self-repairing agent that fixes unavoidable damage over time.

it doesn't seem like variation has to be blind, and selection done by replication, for 'evolution' to be meaningful. Now, blind biological evolution and engineering design evolution will look different, but it seems reasonable to see an underlying connection between them.

So, what kind of statements are valid for all kinds of evolution?

Replies from: Vaniver
comment by Vaniver · 2013-11-15T22:28:12.136Z · LW(p) · GW(p)

So, what kind of statements are valid for all kinds of evolution?

The direction (and often magnitude) of expected change over time is generally predictable, for example.

Replies from: DanArmak
comment by DanArmak · 2013-11-16T01:29:34.259Z · LW(p) · GW(p)

Can you be more specific? What is the expected direction of change for all evolutionary processes?

Replies from: Vaniver
comment by Vaniver · 2013-11-17T18:53:13.311Z · LW(p) · GW(p)

Can you be more specific?

In general, the entities undergoing evolution will look more like the complement of their environments as time goes on.

Replies from: DanArmak
comment by DanArmak · 2013-11-17T19:16:56.806Z · LW(p) · GW(p)

I'm sorry, I don't understand. What is the "complement of the environment"?

Replies from: Vaniver
comment by Vaniver · 2013-11-17T20:36:00.462Z · LW(p) · GW(p)

Suppose a gazelle lives in a savannah; we should expect the gazelles to digest savannah grass, flee from cheetahs, be sexy to other gazelles, etc., and to become that way if not already so. I think Dawkins has a good explanation of this somewhere (I was unable to find it quickly): that genes are in some sense records of the ancestral environment.

Similarly, internet memes are in some sense a record of the interests of internet users, and car designs a record of the interests of car buyers and designers, and so on. Is that a clearer presentation?

Replies from: DanArmak
comment by DanArmak · 2013-11-18T19:27:21.325Z · LW(p) · GW(p)

It seems clear what you mean (though not why you called it the complement of the environment). But I still don't see what's common to all kinds of evolutions, so maybe I'm still misunderstanding.

It's certainly true that any evolved object is a function of its environment and we can deduce features of the environment from looking at the object. But this is also true for any object that has a history of being influenced by its environment. A geologist looks at a stone and tells you how it was shaped by rain. An astronomer looks at a nebula and tells you how it was created by a supernova. "Being able to learn about a thing's past environment from looking at its present shape" is so general that you must have meant something more than that, but what?

Replies from: Vaniver
comment by Vaniver · 2013-11-18T21:23:56.101Z · LW(p) · GW(p)

"Being able to learn about a thing's past environment from looking at its present shape" is so general that you must have meant something more than that, but what?

That's basically what I meant, actually, with the inclusion of "looking at a thing's present environment tells you about its likely future shapes." I chose "complement" because it seemed like a better word than "mirror," but I'm not sure it was the best choice, and think "record" might have been better.

comment by joaolkf · 2013-11-13T00:02:57.667Z · LW(p) · GW(p)

What about AGI? Radical human enhancement? Computronium? Post-biological civilizations? All impossible?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-11-13T03:00:36.489Z · LW(p) · GW(p)

Offhand, I think they'd all include variation and selection.

Replies from: lmm
comment by lmm · 2013-11-13T12:31:06.411Z · LW(p) · GW(p)

I've seen an argument that a nanotech organism with a reasonable level of error-correction could with high probability make error-free clones of itself until the heat death of the universe.
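The back-of-the-envelope form of that argument (both rates below are invented placeholders, not figures from the argument itself): if error correction pushes the per-replication error probability low enough, the chance of any error across an astronomical number of replications stays negligible.

```python
# Sketch: chance of zero copying errors over many replications (invented rates)
import math

p_error_per_copy = 1e-40   # hypothetical post-error-correction failure rate
n_copies = 1e30            # hypothetical replications before heat death
# P(no errors) = (1 - p)**n ~ exp(-p * n) for small p
p_clean = math.exp(-p_error_per_copy * n_copies)
print(f"P(no errors, ever) ~ {p_clean:.12f}")  # ~0.9999999999
```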

Replies from: Lumifer, NancyLebovitz, Eugine_Nier
comment by Lumifer · 2013-11-13T16:23:28.877Z · LW(p) · GW(p)

That assumes the lack of black swans. Not a very good assumption when we extrapolate things until the heat death of the universe.

Replies from: lmm
comment by lmm · 2013-11-13T21:50:03.948Z · LW(p) · GW(p)

True, but it would have to be an exceedingly black swan to result in evolutionary-like mutations rather than simple annihilation.

Replies from: Lumifer
comment by Lumifer · 2013-11-13T21:56:55.193Z · LW(p) · GW(p)

First, annihilation is good enough -- a destroyed nanobot fails at making "error-free clones of itself until the heat death of the universe".

Second, all you need to do is to screw up the error-correction mechanism, the rest will take care of itself naturally.

comment by NancyLebovitz · 2013-11-13T12:37:22.481Z · LW(p) · GW(p)

That seems plausible to me, but it's still likely to be subject to selection because of competition for resources. Depending on its intelligence level and ethical structure, it might also be affected by arguments that it should limit its reproduction.

Replies from: lmm
comment by lmm · 2013-11-13T21:56:54.170Z · LW(p) · GW(p)

The point is there'd be no variation for evolution to select on.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-11-14T05:02:08.302Z · LW(p) · GW(p)

Why do you think there'd only be one sort of nanotech organism to select on and/or that perfect self-replication is the best or only strategy?

comment by Eugine_Nier · 2013-11-14T06:15:50.684Z · LW(p) · GW(p)

Well, the organism would need to be preprogrammed to survive in whatever environment it might find itself in until then.

comment by Stabilizer · 2013-11-08T00:51:47.208Z · LW(p) · GW(p)

He was not a very careful person as a mathematician. He made a lot of mistakes. But he made mistakes in a good direction. I tried to emulate him. But I've realized that it's very difficult to make good mistakes.

-Goro Shimura on Yutaka Taniyama

comment by roland · 2013-11-17T11:56:23.290Z · LW(p) · GW(p)

The only rigorous method, the only one that enables us to test an opinion against reality, is based on the clear recognition that opinions come first [as opposed to facts]—and that this is the way it should be. Then no one can fail to see that we start out with untested hypotheses—in decision-making as in science the only starting point. We know what to do with hypotheses—one does not argue them; one tests them. One finds out which hypotheses are tenable, and therefore worthy of serious consideration, and which are eliminated by the first test against observable experience.

-- Peter Drucker The Effective Executive

Replies from: None
comment by [deleted] · 2013-12-23T00:12:24.672Z · LW(p) · GW(p)

So what about MWI?

comment by DSimon · 2013-11-11T17:17:50.367Z · LW(p) · GW(p)

The next best thing to have after a reliable ally is a predictable enemy.

-- Sam Starfall, FreeFall #1516

comment by arundelo · 2013-11-09T16:27:04.049Z · LW(p) · GW(p)

What is the experience of eating a chocolate brownie like? Can you describe it?

I believe it is ineffable. There is nothing you can say about chocolate that would mean anything to someone who has not tasted it.

Chocolate brownies are one of my favorite things -- but I don't think their ineffability is a big deal.

All experiences are ineffable. The best we can ever do is say "it's like this other thing."

-- David Chapman

Replies from: AndHisHorse
comment by AndHisHorse · 2013-11-09T18:07:02.763Z · LW(p) · GW(p)

Saying that something is ineffable and saying that nothing we can say is meaningful without the exact same shared experience are rather different things. To use your own example, comparison is possible - so we can imperfectly describe chocolate in terms of sugar and (depending on the type) bitterness, even if our audience has never heard of chocolate.

Conveniently, this allows us to roughly fathom experiences that nobody has ever had. Playwrights, for example, set out to create an experience that does not yet exist and prompt actors to react to situations they have never lived through, and through their capability to generalize they can imperfectly communicate their ideas.

comment by Shmi (shminux) · 2013-11-07T20:47:50.058Z · LW(p) · GW(p)

”I don’t believe in shouldn’t, like there’s some universal rules about the way things should be, the way people should act.”

“So there’s no right or wrong? People and animals should do whatever?”

“No, there’s always going to be consequences. Believe me when I say I know about that. But I do think there’s always going to be extenuating circumstances, where a lot of things we normally assume are wrong become excusable.”

Skitter the bug girl on morality, consequentialism and metaethics in Worm, the online serial recommended by Eliezer for HPMoR withdrawal symptoms.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-11-08T00:04:33.616Z · LW(p) · GW(p)

But I do think there’s always going to be extenuating circumstances, where a lot of things we normally assume are wrong become excusable.

We're also biased toward believing we're in one of those circumstances when we're not.

Replies from: shminux
comment by Shmi (shminux) · 2013-11-08T01:12:39.494Z · LW(p) · GW(p)

Yep, and the part after the quote alludes to that.

comment by jsbennett86 · 2013-11-11T23:21:12.446Z · LW(p) · GW(p)

snip

Replies from: Vaniver
comment by Vaniver · 2013-11-12T03:19:22.469Z · LW(p) · GW(p)

Dupe.

Replies from: jsbennett86
comment by jsbennett86 · 2013-11-12T06:46:10.226Z · LW(p) · GW(p)

Thanks. I just read the article, so I guess I was assuming it was new and wouldn't have been quoted.

Replies from: army1987
comment by A1987dM (army1987) · 2013-11-12T10:38:29.898Z · LW(p) · GW(p)

Why did you edit the text of the quote away rather than retracting the comment?

comment by dspeyer · 2013-11-01T15:34:43.172Z · LW(p) · GW(p)

Robert Morris has a very unusual quality: he's never wrong. It might seem this would require you to be omniscient, but actually it's surprisingly easy. Don't say anything unless you're fairly sure of it. If you're not omniscient, you just don't end up saying much. More precisely, the trick is to pay careful attention to how you qualify what you say. ... He has an almost superhuman integrity. He's not just generally correct, but also correct about how correct he is.

--Paul Graham

Replies from: Benito, Eugine_Nier, KaynanK
comment by Eugine_Nier · 2013-11-02T05:20:15.547Z · LW(p) · GW(p)

I can't help but wondering if he's overcompensating due to a certain incident.

comment by Multicore (KaynanK) · 2013-11-02T02:05:43.864Z · LW(p) · GW(p)

So he's one of those Fair Witnesses from Stranger in a Strange Land?

Replies from: JoshuaFox, gwern
comment by JoshuaFox · 2013-11-03T08:52:36.130Z · LW(p) · GW(p)

Heinlein's Fair Witnesses show a rationality failure.

It is impossible to report things "just as they are" without imposing implicit interpretation.

All observation and all language depends on an implicit model.

comment by gwern · 2013-11-02T02:26:40.576Z · LW(p) · GW(p)

As I recall, Fair Witnesses didn't ever give probabilities, they talked in binary terms. Morris, on the other hand, sounds calibrated.

Replies from: shminux
comment by Shmi (shminux) · 2013-11-02T03:39:46.177Z · LW(p) · GW(p)

They never drew conclusions.

comment by Jayson_Virissimo · 2013-11-14T18:09:15.764Z · LW(p) · GW(p)

For every ailment under the sun;
There is a remedy, or there is none;
If there be one, try to find it;
If there be none, never mind it.

-- Mother Goose

Replies from: AndHisHorse
comment by AndHisHorse · 2013-11-14T20:44:46.697Z · LW(p) · GW(p)

Presumably, a wise implementation of this quote would consider a continuum of remedies, ranging from mild treatment of symptoms to vaccination against the possibility of ever contracting the ailment. Even if there is no cure for an ailment, there is still value in mitigating its negative effects.

comment by elharo · 2013-11-11T19:36:00.277Z · LW(p) · GW(p)

It's one thing to feel your own problems more acutely than those of other people, even millions of other people, even many whose problems make yours look trivial by comparison. We all do that, and we could barely function if we didn't. It's quite another thing to expect that other people will see your problems as more important than those of millions. I sprained my ankle a few weeks ago, and I'll admit that in the time since I've given more thought to my ankle's recovery than I have to the 660,000 people who die every year from malaria. But if I asked you why you aren't thinking more about my ankle than you are about malaria, you'd wonder if it was my brain that I had sprained.

--Paul Waldman, "Why Isn't Everyone More Worried about Me?", November 11, 2013

comment by Eugine_Nier · 2013-11-02T06:12:33.366Z · LW(p) · GW(p)

Utilitarianism is not in our nature. Show me a man who would hold a child’s face in the fire to end malaria, and I will show you a man who would hold a child’s face in the fire and entirely forget he was originally planning to end malaria.

James A. Donald

Replies from: FiftyTwo, wiresnips, DanielLC, Jayson_Virissimo, Armok_GoB
comment by FiftyTwo · 2013-11-04T22:12:55.551Z · LW(p) · GW(p)

Medicine is not in our nature. Show me a man who would cut someone open to remove cancer, and I will show you a man who would cut someone open and entirely forget he was originally planning to remove a tumour.

Exact same argument. Does it sound equally persuasive to you?

Replies from: Jiro, lmm, Eugine_Nier
comment by Jiro · 2013-11-05T17:02:37.421Z · LW(p) · GW(p)

I'd extend Eugine's reply and point out that both the original and modified version of the sentence are observations. As such, it doesn't matter that the two sentences are grammatically similar; it's entirely possible that one is observed and the other is not. History has plenty of examples of people who are willing to do harm for a good cause and end up just doing harm; history does not have plenty of examples of people who are willing to cut people open to remove cancer and end up just cutting people open.

Also, the phrasing "to end malaria" isn't analogous to "to remove cancer" because while the surgery only has a certain probability of working, the uncertainty in that probability is limited. We know the risks of surgery, we know how well surgery works to treat cancer, and so we can weigh those probabilities. When ending malaria (in this example), the claim that the experiment has so-and-so chance of ending malaria involves a lot more human judgment than the claim that surgery has so-and-so chance of removing cancer.

Replies from: Desrtopa
comment by Desrtopa · 2013-11-08T02:09:37.392Z · LW(p) · GW(p)

History has plenty of examples of people who are willing to do harm for a good cause and end up just doing harm.

Yes, but keep in mind the danger of availability bias; when people are willing to do harm for a good cause, and end up doing more good than harm, we're not so likely to hear about it. Knut Haukelid and his partners caused the death of eighteen civilians, and may thereby have saved several orders of magnitude more. How many people have heard of him? But failed acts of pragmatism become scandals.

Also, some people (such as Hitler and Stalin) are conventionally held up as examples of the evils of believing that ends justify means, but in fact disavowed utilitarianism just as strongly as their critics. To quote Yvain on the subject, "If we're going to play the "pretend historical figures were utilitarian" game, it's unfair to only apply it to the historical figures whose policies ended in disaster."

Replies from: Jiro
comment by Jiro · 2013-11-08T16:36:42.743Z · LW(p) · GW(p)

We already have a situation where we can cause harm to innocent people for the general good. It's called taxes.

Since I got modded down for that before, here's a hopefully less controversial example: the penal system. If you decide that your society is going to have a penal system, you know (since the system isn't perfect) that your system will inevitably punish innocent people. You can try to take measures to reduce that, but there's no way you can eliminate it. Nobody would say we shouldn't put a penal system into effect because it is wrong to harm innocent people for the greater good--even though harming innocent people for the greater good is exactly what it will do.

I don't think anyone really objects to hurting innocent people for the greater good. The kind of scenarios that most people object to have other characteristics than just that and it may be worth figuring out what those are and why.

Also, some people (such as Hitler and Stalin) are conventionally held up as examples of the evils of believing that ends justify means, but in fact disavowed utilitarianism just as strongly as their critics.

It seems to me that utilitarianism decides how to act based on what course of action benefits people the most; deciding who counts as people is not itself utilitarian or non-utilitarian.

And even ignoring that, Hitler and Stalin may be valuable as examples because they don't resemble strict utilitarianism, but they do resemble utilitarianism as done by fallible humans. Actual humans who claim that the ends justify the means also try to downplay exactly how bad the end is, and their methods of downplaying that do resemble ideas of Hitler and Stalin.

Replies from: Desrtopa, NancyLebovitz, TheOtherDave, Eugine_Nier, lmm
comment by Desrtopa · 2013-11-11T02:15:37.391Z · LW(p) · GW(p)

And even ignoring that, Hitler and Stalin may be valuable as examples because they don't resemble strict utilitarianism, but they do resemble utilitarianism as done by fallible humans. Actual humans who claim that the ends justify the means also try to downplay exactly how bad the end is, and their methods of downplaying that do resemble ideas of Hitler and Stalin.

Can you provide examples of this? In my experience, while utilitarianism done by fallible humans may be less desirable than utilitarianism as performed by ideal rationalists, the worst failures of judgment on an "ends justify the means" basis tend not to come from people actually proposing policies on a utilitarian basis, but from people who were not utilitarians whose policies are later held up as examples of what utilitarians would do, or from people who are not utilitarians proposing hypotheticals of their own as what policies utilitarianism would lead to.

Non utilitarians in my experience generally point to dangers of a hypothetical "utilitarianism as implemented by someone much dumber or more discriminatory than I am," which is why for example in Yvain's Consequentialism FAQ, the objections he answered tended to be from people believing that utilitarians would engage in actions that those posing the objections could see would lead to bad consequences.

Utilitarianism as practiced by fallible humans would certainly have its failings, but there are also points of policy where it probably offers some very substantial benefits relative to our current norms, and it's disingenuous to focus only on the negative or pretend that humans are dumber than they actually are when it comes to making utilitarian judgments.

Replies from: Jiro
comment by Jiro · 2013-11-30T03:28:28.253Z · LW(p) · GW(p)

Any example I could give you of humans fallibly being utilitarian you could equally well describe as an example of humans not being utilitarian at all. After all, that's what "fallible" means--"doing X incorrectly" is a type of "not doing X".

If you want an example of humans doing something close to utilitarian (which is all you're going to get, given how the word "fallible" works), Stalin himself is an example. Just about everything he did was described as being for the greater good, because building the perfect Soviet society is for the greater good and the harm done to someone by giving him a show trial and executing him is necessary to build that society. Of course, you could explain how Stalin wasn't really utilitarian, but if he was really utilitarian, he wouldn't be fallibly utilitarian.

Replies from: Desrtopa
comment by Desrtopa · 2013-11-30T19:47:40.910Z · LW(p) · GW(p)

I would accept any figure as "fallibly utilitarian" if they endorsed utilitarian ethics and claimed to be attempting to follow it, but Stalin did not do so, and while his actions might be interpreted as a fallible attempt at utilitarianism, his own pronouncements don't particularly invite such interpretation.

Replies from: Jiro
comment by Jiro · 2013-11-30T23:57:35.678Z · LW(p) · GW(p)

That doesn't actually contain any of his own pronouncements.

I managed to Google this for Hitler: "It is thus necessary that the individual should come to realize that his own ego is of no importance in comparison with the existence of his nation; that the position of the individual ego is conditioned solely by the interests of the nation as a whole ... that above all the unity of a nation's spirit and will are worth far more than the freedom of the spirit and will of an individual. .... This state of mind, which subordinates the interests of the ego to the conservation of the community, is really the first premise for every truly human culture .... we understand only the individual's capacity to make sacrifices for the community, for his fellow man."

This may not be technically utilitarian, but it is an example of Hitler endorsing the idea that some people should be harmed to benefit others (in this case to benefit society), and is an example of how that doesn't go well.

Replies from: Desrtopa
comment by Desrtopa · 2013-12-01T01:33:52.553Z · LW(p) · GW(p)

People have been proposing that some people should be harmed to benefit others long before anyone proposed the idea of utilitarianism; usually it was justified because the people being harmed are outgroup members, or simply Less Important compared to the people being helped, or to subordinate individual identity to the group identity.

Sometimes, this doesn't go well. In some cases, such as, by your own example, taxes or a justice system, it goes much better than a refusal to harm some people to help others. Utilitarianism is a construction for formalizing under what conditions it is or is not a good idea to attempt such tradeoffs. Calling the purges of Hitler or Stalin failings of utilitarianism is about as fair as calling every successful government intervention or institution ever, all of which involve sacrificing resources of the public for a common good, successes of utilitarianism.

comment by NancyLebovitz · 2013-11-12T11:59:09.168Z · LW(p) · GW(p)

Another way that a penal system is extremely likely to harm innocents is that the imprisoned person may have been supplying a net benefit to their associates in non-criminal ways, and they can't continue to supply those benefits while in prison. This is especially likely for some of the children of prisoners, even if the prisoners were guilty.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-11-13T04:09:37.069Z · LW(p) · GW(p)

That's indirect harm. Non-utilitarians don't have to care about it (and certainly not care about it as much as utilitarians).

comment by TheOtherDave · 2013-11-08T20:03:16.022Z · LW(p) · GW(p)

Nobody would say we shouldn't put a penal system into effect because it is wrong to harm innocent people for the greater good--even though harming innocent people for the greater good is exactly what it will do.

I am unsure how to map decisions under uncertainty to evidence about values as you do here.

A still-less-controversial illustration: I am shown two envelopes, and I have very high confidence that there's a $100 bill in exactly one of those envelopes. I am offered the chance to pay $10 for one of those envelopes, chosen at random; I estimate the EV of that chance at $50, so I buy it. I am then (before "my" envelope is chosen) offered the chance to pay another $10 for the other envelope, this chance to be revoked once the first envelope is selected. For similar reasons I buy that too.

I am now extremely confident that I've spent $10 for an empty envelope... and I endorse that choice even under reflection. But it seems ridiculous to conclude from this that I endorse spending $10 for an empty envelope. Something like that is true, yes, but whatever it is needs to be stated much more precisely to avoid being actively deceptive.
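A quick simulation of the envelope setup as described (a sketch of the scenario above, nothing more): buying one envelope nets about +$40 on average, while buying both nets exactly +$80 every time, even though one of the two purchases always buys an empty envelope.

```python
# Sketch: EV of buying one vs. both envelopes ($100 in exactly one of two)
import random

def net_result(buy_both):
    envelopes = [100, 0]
    random.shuffle(envelopes)
    if buy_both:
        return envelopes[0] + envelopes[1] - 20  # two $10 purchases
    return envelopes[0] - 10                     # one $10 purchase

n = 100_000
for buy_both in (False, True):
    avg = sum(net_result(buy_both) for _ in range(n)) / n
    label = "both envelopes" if buy_both else "one envelope"
    print(f"{label}: average net ${avg:.2f}")
# one envelope: ~$40.00; both envelopes: exactly $80.00
```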

It seems to me that if I punish a hundred people who have been convicted of a crime, even though I'm confident that at least some of those people are innocent, I'm in a somewhat analogous situation to paying $10 for an empty envelope... and concluding that I endorse punishing innocent people seems equally ridiculous. Something like that is true, yes, but whatever it is needs to be stated much more precisely to avoid being actively deceptive.

Replies from: Jiro
comment by Jiro · 2013-11-08T21:33:37.928Z · LW(p) · GW(p)

In your example, you are presenting "I think you should spend $10 for an empty envelope" as a separate activity, and you are being misleading because you are not putting it into context and saying "I think you should spend $10 for an empty envelope, if this means you can get a full one".

With the justice system example, I am presenting the example in context--that is, I am not just saying "I think you should harm innocent people", I am saying "I think you should harm innocent people, if other people are helped more". It's the in-context version of the statement that I am presenting, not the out-of-context version.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-11-08T21:47:48.731Z · LW(p) · GW(p)

(nods) Yes, that makes sense.
Thanks.

comment by Eugine_Nier · 2013-11-10T20:19:47.320Z · LW(p) · GW(p)

I (and James Donald) agree. Remember that the traditional ethical laws this is based on also have traditional exceptions, e.g., for punishment and war, and additional laws governing when and how those exceptions apply. The thing to remember is that you are not allowed to add to the list of exceptions as you see fit, nor are you allowed to play semantic games to expand them. In particular, no "war on poverty" or "war on cancer"; even "war on terror" is pushing it.

Replies from: Jiro
comment by Jiro · 2013-11-11T16:32:37.407Z · LW(p) · GW(p)

I think you're misunderstanding me. We all know that most ethical systems think it's okay to punish criminals. I'm not referring to the fact that criminals are punished, but the fact that when we try to punish criminals we will, since no system is perfect, inevitably end up punishing some innocent people as well. Those people did nothing wrong, yet we are hurting them, and for the greater good.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-11-12T01:40:35.685Z · LW(p) · GW(p)

This is no different from the fact that it's okay to fly planes even though some of them will inevitably crash.

Note that if a judge punishes someone who turns out to be innocent, we believe he should feel guilty about this rather than simply shrugging and saying "mistakes will happen". Similarly, if an engineer makes a mistake that causes a plane to crash.

Replies from: Jiro
comment by Jiro · 2013-11-12T15:20:21.732Z · LW(p) · GW(p)

Just like not all people punished are guilty, not all innocent people punished are discovered; there's always going to be a certain residue of innocent people who are punished, but not discovered, with no guilty judges or anything else to make up for it. Hurting such innocent people is nevertheless an accepted part of having a penal system.

comment by lmm · 2013-11-09T00:17:46.714Z · LW(p) · GW(p)

If you decide that your society is going to have a penal system, you know (since the system isn't perfect) that your system will inevitably punish innocent people. You can try to take measures to reduce that, but there's no way you can eliminate it. Nobody would say we shouldn't put a penal system into effect because it is wrong to harm innocent people for the greater good--even though harming innocent people for the greater good is exactly what it will do.

Sure. So we're sometimes willing to do some harm to innocents for the greater good. But if we were utilitarians we would always be thus willing. Civilized societies don't torture or execute their criminals, and wouldn't do so even if it were for the greater good.

Replies from: Jiro
comment by Jiro · 2013-11-09T10:31:27.302Z · LW(p) · GW(p)

The US has executions yet is otherwise considered civilized. So you must be claiming that the US is not civilized because it has executions, which makes it into a "no true Scotsman" argument.

Replies from: lmm
comment by lmm · 2013-11-09T22:27:03.801Z · LW(p) · GW(p)

Nope; there are other things about the US (poor public healthcare / higher education, high religiosity, weak labour laws...) that move it away from the empirical cluster of societies I'm thinking of. I made a lazy and politicized choice of terminology but it's a natural category, not a funny boundary I'm drawing arbitrarily.

Replies from: Jiro
comment by Jiro · 2013-11-10T00:37:44.956Z · LW(p) · GW(p)

That seems like a pretty low bar for "not civilized", which is a seriously bad characterization, and some of those are downright bizarre. It reads as though you took a laundry list of things you don't like about the US and decided that that's your definition of "not civilized". Would it make any sense if I pointed out that Europe tends to have weaker freedom of speech than the US, and a higher tendency towards anti-Semitism, and higher taxes, therefore Europe is "not civilized"?

If "uncivilized" means anything it has to mean something other than "has policies I hate". And defining it to mean "has policies that I hatem and which hurt people" is no good--everyone thinks that policies that they hate hurt people, so that collapses down to just "has policies I hate".

BTW, do you consider Japan to be uncivilized? It has the death penalty.

(My own theory on how Japan manages to keep the death penalty is that it's easy for activists in Europe to connect to activists in nearby countries where some people are bilingual, but hard to connect with activists on the other side of the world who have no languages in common.)

Replies from: lmm
comment by lmm · 2013-11-10T02:41:29.791Z · LW(p) · GW(p)

That seems like a pretty low bar for "not civilized", which is a seriously bad characterization, and some of those are downright bizarre. It reads as though you took a laundry list of things you don't like about the US and decided that that's your definition of "not civilized". Would it make any sense if I pointed out that Europe tends to have weaker freedom of speech than the US, and a higher tendency towards anti-Semitism, and higher taxes, therefore Europe is "not civilized"?

It was a bad choice of word. But it seems like you agree that there are distinct empirical clusters for Europe-like and America-like (and Japan resides somewhere between the two - like most empirical classifications it's imperfect, but still useful).

I think the case can be made that Europe is a better place to live (and that US states that practice executions are worse than those that don't). But in any case this is all beside the point; the fact that there is this kind of resistance to the death penalty, even if only in Europe, demonstrates that humans are not naturally utilitarian.

Replies from: somervta, Jiro
comment by somervta · 2013-11-10T06:04:50.130Z · LW(p) · GW(p)

the fact that there is this kind of resistance to the death penalty, even if only in Europe, demonstrates that humans are not naturally utilitarian.

Or that the death penalty's utilitarian merits are debatable. Or that in some societies, 'natural' utilitarian tendencies are subverted/modified/removed/replaced by the cultural environment.

comment by Jiro · 2013-11-10T19:51:15.883Z · LW(p) · GW(p)

But it seems like you agree that there are distinct empirical clusters for Europe-like and America-like

I agree that if you try to list the differences between Europe and the US, you can come up with a list. However, many of the items on the list are related to each other mostly by historical accident. Europe lacks the death penalty because once activists get a foothold in one place, that makes it easier for them to get a foothold in other culturally and geographically close places. Not because people who like high taxes necessarily have to oppose the death penalty. The state of Europe right now is very path-dependent.

the fact that there is this kind of resistance to the death penalty, even if only in Europe, demonstrates that humans are not naturally utilitarian

Or that Europe is run by an unrepresentative subset of humans.

Or that humans are not generally naturally anything to the exclusion of everything else. (Of course, in the limit, everything is utilitarian--people in Europe may get displeasure from using the death penalty the same way they get displeasure from bad-tasting food. Is someone who avoids bad-tasting food in favor of food that costs more a utilitarian, because pleasure from food taste is a form of utilon?)

Replies from: lmm
comment by lmm · 2013-11-11T19:50:53.949Z · LW(p) · GW(p)

Not because people who like high taxes necessarily have to oppose the death penalty.

Ok, we actually disagree then. I think that progress in the progressive sense is real, that most of today's politics is a product of historical/technological/etc. forces and more-or-less inevitable. (If I had to guess why the US is different from Europe I'd say it's largely an artifact of which groups of people originally settled there, and I expect the US to become more European in the future). I predict that even in legislatures far away from Europe we'd observe a correlation between support for high taxation and opposition to the death penalty, and that more generally if we did a NOMINATE-style analysis we'd find that positions on many issues were largely explained by a single axis of variation, and the list of things I mentioned would be at one end of it.

Or that humans are not generally naturally anything to the exclusion of everything else.

Sure. I'm not arguing that we're naturally virtue ethicists or anything. But I don't think utilitarianism is an adequate description of intuitive human morality (even American morality). Perhaps the fat man in the trolley problem is a better example; while there are no doubt many clever arguments that people are being utilitarian via some convoluted route, it's not the result we would naturally predict utilitarian thinkers to come to.

Of course, in the limit, everything is utilitarian--people in Europe may get displeasure from using the death penalty the same way they get displeasure from bad-tasting food. Is someone who avoids bad-tasting food in favor of food that costs more a utilitarian, because pleasure from food taste is a form of utilon?

I understand utilitarian to mean someone who tries to maximize some pseudo-economically consistent objective function of the external world. If someone assigns different values to actions that have the same result but get there by different paths, or evaluates a future state differently depending on the current state of the world, or believes that the same action could have a different moral value depending solely on the internal state of the person performing it, then they're not a utilitarian.
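To put that semi-formally (a rough sketch, with notation invented here): a utilitarian's valuation of an action $a$ taken in state $s$ depends only on the distribution over resulting world-states, through some fixed function $U$:

$$V(a \mid s) = \mathbb{E}\big[\,U(\mathrm{outcome}(a, s))\,\big]$$

Path-dependent values, evaluating futures relative to the status quo rather than through $U$ alone, or terms for the agent's internal state would each break this form.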

comment by lmm · 2013-11-17T23:20:00.527Z · LW(p) · GW(p)

Yep. I've heard similar speculations regarding surgeons before. Fortunately nowadays we can take appropriate measures to compensate (surgeons are highly paid and closely monitored; we take a lot of care that medicine be evidence-based; the rationale behind specific medical interventions is carefully documented and checked multiple times; we require medical professionals to train for longer than any other profession). But note that for most of human history, the interventions performed by almost all medical professionals were literally worse than nothing.

comment by Eugine_Nier · 2013-11-05T01:41:01.250Z · LW(p) · GW(p)

The second sentence is an empirical observation that is clearly false in your example.

comment by wiresnips · 2013-11-03T19:05:40.781Z · LW(p) · GW(p)

Utilitarianism isn't a description of human moral processing, it's a proposal for how to improve it.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-11-04T04:12:22.617Z · LW(p) · GW(p)

One problem is that if we, say, start admiring people for acting in "more utilitarian" ways, what we may actually be selecting for is psychopathy.

Replies from: wiresnips
comment by wiresnips · 2013-11-04T06:15:33.210Z · LW(p) · GW(p)

Agreed. Squicky dilemmas designed to showcase utilitarianism are not generally found in real life (as far as I know). And a human probably couldn't be trusted to make a sound judgement call even if one were found. Running on untrusted hardware and such.

Ah- and this is the point of the quote. Oh, I like that.

comment by DanielLC · 2013-11-02T23:20:05.022Z · LW(p) · GW(p)

Our nature is not purely utilitarian, but I wouldn't go so far as to say that utilitarianism is not in our nature. There are things we avoid doing regardless of how they advance our goals, but most of what we do is to accomplish goals. If you can't understand that there are things you need to do to eat, then you won't eat.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-11-03T04:17:26.310Z · LW(p) · GW(p)

Strawman. Does any moral system anyone's ever proposed say we should never attempt to accomplish goals?

comment by Jayson_Virissimo · 2013-11-02T17:45:26.268Z · LW(p) · GW(p)

I agree that utilitarianism is "not in our nature," but what has this to do with rationality?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-11-02T18:49:56.181Z · LW(p) · GW(p)

I agree that utilitarianism is "not in our nature," but what has this to do with rationality?

Utilitarianism is pretty fundamental around here. Not everyone here agrees with it, but pretty much all ethical discussions here take it as a precondition for even having a discussion. The assertion that we are not, cannot be, and never will be utilitarians is therefore very relevant.

If you are suggesting by that emphasis on "nature" that we might act to change our nature and remake ourselves into better utilitarians, I would ask, if we are in fact not utilitarians, why should we make ourselves so? Infatuation with the tidiness of the VNM theorem?

Replies from: Mestroyer
comment by Mestroyer · 2013-11-02T23:32:52.441Z · LW(p) · GW(p)

We us::should try to be as utilitarian as we can because our intuitive morality is kind of consequentialist, so we care about how the world actually ends up, and utilitarianism helps us win.

If we ever pass up a chance to literally hold one child's face to a fire and end malaria, we have screwed up. We are not getting what we care about most.

It's not the "tidiness" of the VNM axioms in any aesthetic sense that is important, it's the not-getting-money-pumped. Not being able to be money-pumped is important not because getting money-pumped is stupid and we can't be stupid, but because we need to use our money on useful stuff.
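For anyone who hasn't seen a money pump spelled out, a minimal sketch (invented goods and fee; any cyclic preference will do):

# A minimal sketch of a money pump: an agent with cyclic preferences
# (prefers A to B, B to C, and C to A) will pay a small fee for each
# "upgrade" and end up holding what it started with, minus the fees.

EPSILON = 1.0  # fee the agent is willing to pay per preferred trade

prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # cyclic: violates transitivity

def trade(holding, offered, money):
    """Accept the offered good, paying EPSILON, iff the agent prefers it."""
    if (offered, holding) in prefers:
        return offered, money - EPSILON
    return holding, money

holding, money = "C", 10.0
for offered in ["B", "A", "C"]:  # run the cycle once
    holding, money = trade(holding, offered, money)

print(holding, money)  # "C" again, but 3.0 poorer -- repeat until broke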

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-11-03T04:21:53.243Z · LW(p) · GW(p)

If we ever pass up a chance to literally hold one child's face to a fire and end malaria, we have screwed up.

In another comment James A. Donald suggests a way torturing children could actually help cure malaria:

To cure malaria, we really need to experiment on people. For some experiments, obtaining volunteers is likely to be difficult, and if one experimented on non volunteering adults, they would probably create very severe difficulties. Female children old enough to have competent immune systems, but no older, would be ideal.

Would you be willing to endorse this proposal? If not, why not?

Replies from: Armok_GoB, Mestroyer, linkhyrule5, Moss_Piglet
comment by Armok_GoB · 2013-11-04T01:41:36.496Z · LW(p) · GW(p)

If I'm not fighting the hypothetical, yes I would.

If I encountered someone claiming that in the messy real world, then I'd run the numbers VERY carefully, and most likely conclude that the probability of his actually telling the truth and being sane is infinitesimal. Specifically, of those claims, the one that it'd be easier to kidnap someone than to find a volunteer (say, an adult willing to do it in exchange for a large sum of money for their family) sounds highly implausible.

Replies from: Vaniver
comment by Vaniver · 2013-11-04T02:32:30.430Z · LW(p) · GW(p)

Specifically, of those claims, the one that it'd be easier to kidnap someone than to find a volunteer (say, an adult willing to do it in exchange for a large sum of money for their family) sounds highly implausible.

What's your opinion of doing it Tuskegee-style, rather than kidnapping them or getting volunteers? (One could believe that there might be a systematic difference between people who volunteer and the general population, for example.)

Replies from: Desrtopa, Armok_GoB
comment by Desrtopa · 2013-11-05T16:51:24.041Z · LW(p) · GW(p)

In general, given ethical norms as they currently exist, rather than in a hypothetical universe where everyone is a strict utilitarian, I think the expected returns on such an experiment are unlikely to be worth the reputational costs.

The Tuskegee experiment may have produced some useful data, but it certainly didn't produce returns on the scale of reducing global syphilis incidence to zero. Likewise, even extensive experimentation on abducted children is unlikely to do so for malaria. The Tuskegee experiment, though, is still seen as a black mark on the reputation of medical researchers and the government; I've encountered people who, having heard of it, genuinely believed that it, rather than the extremely stringent standards that currently exist for publishable studies, was a more accurate description of the behavior of present-day researchers. That sort of thing isn't easy to escape.

Any effective utilitarian must account for the fact that we're operating in a world which is extremely unforgiving of behavior such as cutting up a healthy hospital visitor to save several in need of organ transplants, and condition their behavior on that knowledge.

Replies from: NancyLebovitz, fubarobfusco, Eugine_Nier
comment by NancyLebovitz · 2013-11-06T18:08:01.810Z · LW(p) · GW(p)

Here's one with actual information gained: Imperial Japanese experimentation on frostbite

For example, Unit 731 proved that the best treatment for frostbite was not rubbing the limb, which had been the traditional method, but immersion in water a bit warmer than 100 degrees, but never more than 122 degrees.

The cost of this scientific breakthrough was borne by those seized for medical experiments. They were taken outside and left with exposed arms, periodically drenched with water, until a guard decided that frostbite had set in. Testimony from a Japanese officer said this was determined after the "frozen arms, when struck with a short stick, emitted a sound resembling that which a board gives when it is struck."

I don't get the impression that those experiments destroyed a lot of trust-- nothing compared to the rape of Nanking or Japanese treatment of American prisoners of war.

However, it might be worth noting that that sort of experimentation doesn't seem to happen to people who are affiliated with the scientists or the government.

Logically, people could volunteer for such experiments and get the same respect that soldiers do, but I don't know of any real-world examples.

Replies from: Jiro, Desrtopa
comment by Jiro · 2013-11-06T18:48:27.699Z · LW(p) · GW(p)

I don't get the impression that those experiments destroyed a lot of trust-- nothing compared to the rape of Nanking or Japanese treatment of American prisoners of war.

It's hard for experiments to destroy trust when those doing the experiments aren't trusted anyway because they do other things that are as bad (and often on a larger scale).

comment by Desrtopa · 2013-11-08T02:19:53.224Z · LW(p) · GW(p)

Logically, people could volunteer for such experiments and get the same respect that soldiers do, but I don't know of any real-world examples.

I was going to say that I didn't think that medical researchers had ever solicited volunteers for experiments which are near certain to produce such traumatic effects, but on second thought, I do recall that some of the early research on the effects of decompression (as experienced by divers) was done by a scientist who solicited volunteers to be subjected to decompression sickness. I believe that some research on the effects of dramatic deceleration was also done similarly.

Replies from: Strange7
comment by Strange7 · 2013-11-13T01:57:17.183Z · LW(p) · GW(p)

I have heard of someone who was trying to determine the biomechanics of crucifixion, what part of the forearm the nail goes through and whether suffocation is actually the main cause of death and so on, who ran some initial tests with medical cadavers, and then with tied-up volunteers, some of whom were disappointed that they weren't going to have actual nails driven through their wrists. Are extreme masochists under-represented on medical ethics boards?

comment by fubarobfusco · 2013-11-06T18:50:05.664Z · LW(p) · GW(p)

Actual medical conspiracies, such as the Tuskegee syphilis experiment, probably contribute to public credence in medical conspiracy theories, such as anti-vax or HIV-AIDS denialism, which have a directly detrimental effect on public health.

Replies from: Desrtopa
comment by Desrtopa · 2013-11-06T19:05:59.555Z · LW(p) · GW(p)

Probably.

In a culture of ideal rationalists, you might be better off having a government-run lottery where people were randomly selected for participation in medical experiments, with participation upon selection being mandatory for any experiment, whatever its effects on the participants, and experiments being approved only if their expected returns were more valuable than any negative effect (including loss of time) imposed on the participants. But we're a species which is instinctively more afraid of sharks than stairs, so for human beings this probably isn't a good recipe for social harmony.

comment by Eugine_Nier · 2013-11-06T04:58:37.985Z · LW(p) · GW(p)

So would you be in favor of educating people why things like the Tuskegee experiment or human experimentation on abducted children are good things?

Replies from: Desrtopa
comment by Desrtopa · 2013-11-06T05:37:31.715Z · LW(p) · GW(p)

Not directly, because I don't think it would be likely to work. I do think that people should be educated in practical applications of utilitarianism (for instance, the importance of efficiency in charity,) but I don't think that this would be likely to result in widespread approval of such practices.

In the specific case of the Tuskegee experiment, the methodology was not good, and given that treatments were already available, the expected return was not that great, so it's not a very good example from which to generalize the potential value of studies which would be considered exploitative of the test subjects.

comment by Armok_GoB · 2013-11-04T06:16:35.082Z · LW(p) · GW(p)

That already had a treatment, hence it was not going to save the millions suffering, since they were already saved. Also, those scientists didn't have good enough methodology to have gotten anything useful out of it in either case. There's a general air of incompetence surrounding the whole thing that worries me more than the morality.

As I said, before doing anything like this you have to run your numbers VERY carefully. The probability of any given study solving a disease on its own is extremely small, and there are all sorts of other practical problems. That's the thing: utilitarianism is correct, and not answering according to it is fighting the hypothetical. But in cases like this perhaps you should fight the hypothetical, since you're using specific historical examples that very clearly did NOT have positive utility and did NOT run the numbers.

It's a fact that a specific type of utilitarianism is the only thing that makes sense if you know the math. It's also a fact that there are many ifs and buts that make human non-utilitarian moral intuition a heuristic far more reliable for actually achieving the greatest utility than trying to run the numbers yourself in the vast majority of real-world cases. Finally, it's a fact that most things done in the name of ANY moral system are actually bullshit excuses.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-11-04T13:27:11.217Z · LW(p) · GW(p)

http://en.wikipedia.org/wiki/Tuskegee_syphilis_experiment

Several African American health workers and educators associated with Tuskegee Institute helped the PHS to carry out its experimentation and played a critical role in its progression, though the extent to which they were aware of methodology of the study is not clear in all cases. Robert Russa Moton, the head of Tuskegee Institute at the time, and Eugene Dibble, of the Tuskegee Medical Hospital, both lent their endorsement and institutional resources to the government study. Nurse Eunice Rivers, an African-American trained at Tuskegee Institute who worked at its affiliated John Andrew Hospital, was recruited at the start of the study.

Vonderlehr was a strong advocate for Nurse Rivers' participation, as she was the direct link to the community. During the Great Depression of the 1930s, the Tuskegee Study began by offering lower class African Americans, who often could not afford health care, the chance to join "Miss Rivers' Lodge". Patients were to receive free physical examinations at Tuskegee University, free rides to and from the clinic, hot meals on examination days, and free treatment for minor ailments.

Based on the available health care resources, Nurse Rivers believed that the benefits of the study to the men outweighed the risks.

What do you think of that utilitarian calculation? I'm not sure what I think of it.

Replies from: gjm, Lumifer
comment by gjm · 2013-11-04T15:47:16.543Z · LW(p) · GW(p)

It seems like either (1) Rivers was deceived, or (2) she was in some other way unaware that there was already an effective cure for syphilis which was not going to be given to the experimental subjects, or (3) the other options available to these people were so wretched that they were worse than having syphilis left untreated.

In cases 1 and 2, it doesn't really matter what we think of her calculations; if you're fed sufficiently wrong information then correct algorithms can lead you to terrible decisions. In case 3, maybe Rivers really didn't have anything better to do -- but only because other circumstances left the victims of this thing in an extraordinarily terrible position to begin with. (In much the same way as sawing off your own healthy left arm can be the best thing to do -- if someone is pointing a gun at your head and will definitely kill you if you don't. That doesn't say much about the merits of self-amputation in less ridiculous situations.)

I find #3 very implausible, for what it's worth.

(Now, if the statement were that Rivers believed that the benefits to the community outweighed the risks, and indeed the overt harm, to the subjects of the experiment, that would be more directly to the point. But that's not what the article says.)

Replies from: ialdabaoth, Jiro, NancyLebovitz
comment by ialdabaoth · 2013-11-04T17:08:17.289Z · LW(p) · GW(p)

It seems like either (1) Rivers was deceived, or (2) she was in some other way unaware that there was already an effective cure for syphilis which was not going to be given to the experimental subjects, or (3) the other options available to these people were so wretched that they were worse than having syphilis left untreated.

Or (4), she was led to believe, either explicitly or implicitly, that her career and livelihood would be in jeopardy if she did not participate - thus motivating her to subconsciously sabotage her own utility calculations and then convince herself that the sabotaged calculations were valid.

comment by Jiro · 2013-11-04T21:26:48.986Z · LW(p) · GW(p)

In cases 1 and 2, it doesn't really matter what we think of her calculations; if you're fed sufficiently wrong information then correct algorithms can lead you to terrible decisions.

But that might still matter. It may be that utilitarianism produces the best results given no bad information, but something else, like "never permit experimentation without informed consent" would produce better results (on the average) in a world that contains bad information. Especially since whether the latter produces better results will depend on the frequency and nature of the bad information--the more the bad information encourages excess experimentation, the worse utilitarianism comes out in the comparison.

Replies from: Protagoras
comment by Protagoras · 2013-11-04T21:39:03.854Z · LW(p) · GW(p)

But a good utilitarian will certainly take into account the likelihood of bad information and act appropriately. Hence the great utilitarian Mill's advocacy of minimal interference in people's lives in On Liberty, largely on the basis that ubiquitous bad information will make well-intentioned interference backfire often enough to make it a lower-expected-utility strategy in a very wide range of cases.

Replies from: Strange7
comment by Strange7 · 2013-11-13T02:45:18.445Z · LW(p) · GW(p)

A competent utilitarian might be able to take into account the limitations of noisy information, maybe even in some way more useful than passivity. That's not the same class of problem as information which has been deliberately and systematically corrupted by an actual conspiracy in order to lead the utilitarian decisionmaker to the conspiracy's preferred conclusion.

comment by NancyLebovitz · 2013-11-04T16:03:30.408Z · LW(p) · GW(p)

The cure was discovered after the experiment had been going on for eight years, which complicates matters. At this point, I think her best strategy would have been to arrange for the men to find out about the cure in some way which can't be traced back to her.

She may have believed that the men would have died more quickly of poverty if they hadn't been part of the experiment.

comment by Lumifer · 2013-11-04T17:28:26.234Z · LW(p) · GW(p)

What do you think of that utilitarian calculation?

Which one? The presumed altruistic one, or the real-life one (which I think included the utility of having a job, the readiness to disobey authority, etc.)?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-11-04T18:14:12.040Z · LW(p) · GW(p)

The altruistic one, mostly.

comment by Mestroyer · 2013-11-03T04:40:21.637Z · LW(p) · GW(p)

Endorse? You mean, publicly, not on LessWrong, where doing so will get me much more than downvotes, and still have zero chance of making it actually happen? Of course not, but that has nothing to do with whether it's a good idea.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-11-03T05:05:52.017Z · LW(p) · GW(p)

I meant "endorse" in the sense that, unlike the Milgram experiment, there is no authority figure to take responsibility on your behalf.

Do you think it's a good idea?

Replies from: Mestroyer
comment by Mestroyer · 2013-11-03T05:15:07.942Z · LW(p) · GW(p)

If it will actually work, and there are no significant bad consequences we're missing (significant as in at least the size of malaria being cured faster), or there are significant bad consequences but they're balanced out by significant good consequences we're also missing, then yes.

comment by linkhyrule5 · 2013-11-03T05:02:28.524Z · LW(p) · GW(p)

The question is not "would this be a net benefit" (and it probably would, as much as I cringe from it). The question is, are there no better options?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-11-03T05:11:18.115Z · LW(p) · GW(p)

The question is, are there no better options?

Such as? Experimenting on animals? That will probably cause progress to be slower; think about all the people who would die from malaria in the meantime.

Replies from: linkhyrule5
comment by linkhyrule5 · 2013-11-03T07:43:00.340Z · LW(p) · GW(p)

Yes. How many more? Would experimenting on little girls actually help that much? Also consider that many people consider a child's life more valuable than an adult's; that even in a world where you would not have to kidnap girls, evade legal problems, and deal with psychological costs on the scientists, caring for little humans is significantly more expensive than caring for little mice; that said kidnapping, legal, and psychological costs do exist; and that you could instead spend that money on mosquito nets and the like and save lives that way...

The answer is not obviously biased towards "experiment on little girls". In fact, I'd say it's still biased towards "experiment on mice". Morality isn't like physics; the answer doesn't always add up to normality, but a whole lot of the time it does.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-11-03T08:30:39.378Z · LW(p) · GW(p)

Would experimenting on little girls actually help that much?

...

The answer is not obviously biased towards "experiment on little girls". In fact, I'd say it's still biased towards "experiment on mice".

So your answer is that in fact it would not work. That is a reasonable response to an outrageous hypothetical. Yet James A. Donald suggested a realistic scenario, and beside it, the arguments you come up with look rather weak.

Would experimenting on little girls actually help that much? Also consider that many people consider a child's life more valuable than an adult one

Given the millions killed by malaria and at most thousands of experimental subjects, it takes a heavy thumb on the scales of this argument to make the utilitarian calculation come out against.

...evade legal problems and deal with psychological costs...

This is a get-out-of-utilitarianism-free card. A real utilitarian simply chooses the action of maximum utility. He would only pay a psychological cost for not doing that. When all are utilitarians the laws will also be utilitarian, and an evaluation of utility will be the sole criterion applied by the courts.

You are not a utilitarian. Neither is anyone else. This is why there would be psychological costs and why there are legal obstacles. You feel obliged to pretend to be a utilitarian, so you justify your non-utilitarian repugnance by putting it into the utilitarian scales.

caring for little humans is significantly more expensive than caring for little mice

But not any more expensive than caring for chimpanzees. Where, of course, "care for" does not mean "care for", but means "keep sufficiently alive for experimental purposes".

This looks like motivated reasoning. The motivation, to not torture little children, is admirable. But it is misapplied.

Morality isn't like physics

Can you expand on what you see as the differences?

Replies from: linkhyrule5, Desrtopa, Armok_GoB
comment by linkhyrule5 · 2013-11-03T22:37:31.946Z · LW(p) · GW(p)

Would experimenting on little girls actually help that much?

...

No, seriously. I've read the original comment; James A. Donald does not support his claim.

But not any more expensive than caring for chimpanzees. Where, of course, "care for" does not mean "care for", but means "keep sufficiently alive for experimental purposes".

This is granted. References to small mice were silly and are now being replaced by "small chimpanzees." However...

Given the millions killed by malaria and at most thousands of experimental subjects, it takes a heavy thumb on the scales of this argument to make the utilitarian calculation come out against.

This is not the calculation being made. Using your numbers, experimenting on little girls needs to be at least 1.001 times as effective as experimenting on chimpanzees or mice to be worthwhile (because then you save an extra thousand lives for your thousand girls sacrificed). It's not a flat "little girls versus millions of malaria deaths."

This is, quite frankly, not clear to me, and I'd want to call in an actual medical researcher to clarify. Doubly so, with artificial human organs becoming more and more possible (such organs are obviously significantly cheaper than humans).
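For concreteness, the break-even arithmetic behind that 1.001 figure, taking the thread's round numbers as assumptions (roughly $L = 10^6$ lives at stake, $N = 10^3$ subjects):

$$rL - L \ge N \quad\Longrightarrow\quad r \ge 1 + \frac{N}{L} = 1 + \frac{10^3}{10^6} = 1.001$$

where $r$ is how many times as effective the human-experimentation route would have to be, relative to the animal baseline.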

This is a get-out-of-utilitarianism-free card. A real utilitarian simply chooses the action of maximum utility. He would only pay a psychological cost for not doing that. When all are utilitarians the laws will also be utilitarian, and an evaluation of utility will be the sole criterion applied by the courts.

Actually, I was interpreting the hypothetical as "utilitarian government in our world." But fine, least convenient possible world and all that. That's why I set the non-society costs aside from the rest.

You feel obliged to pretend to be a utilitarian, so you justify your non-utilitarian repugnance by putting it into the utilitarian scales.

This looks like motivated reasoning. The motivation, to not torture little children, is admirable. But it is misapplied.

Honestly, this is probably true - case in point, I would rather not write a similar post from the opposite side. That being said, looking through my arguments, most of them hinge on the implausibility of human experimentation really being all that much more effective compared to chimpanzee and artificial-organ experimentation.

Morality isn't like physics

Can you expand on what you see as the differences?

The physics calculations around us have already been done perfectly. If, when we try to emulate them with our theories, we get something abnormal, it means our calculations are wrong and we need to either fix the calculation or the model. When we've done it all right, it should all add up to normality.

Our current morality, on the other hand, is a thing created over a few thousand years by society as a whole, that occasionally generates things like slavery. It is not guaranteed to already be perfectly calculated, and if our calculations turn out something abnormal, it could mean that either our calculations or the world is wrong.

Replies from: Richard_Kennaway, Eugine_Nier
comment by Richard_Kennaway · 2013-11-04T14:34:50.619Z · LW(p) · GW(p)

This is not the calculation being made. Using your numbers, experimenting on little girls needs to be at least 1.001 times as effective as experimenting on chimpanzees or mice to be worthwhile (because then you save an extra thousand lives for your thousand girls sacrificed). It's not a flat "little girls versus millions of malaria deaths."

Point taken.

This is, quite frankly, not clear to me, and I'd want to call in an actual medical researcher to clarify.

Well, yes. I doubt that JAD has particular expertise in malarial research, I don't and neither do you. To know whether a malarial research programme would benefit scientifically from a supply of humans to experiment on with no more restraint than we use with chimpanzees, one would have to ask someone with that expertise. But I think the hypothesis prima facie plausible enough to conduct the hypothetical argument, in a way which merely saying "suppose you could save millions of lives by torturing some children" is not.

After all, all medical interventions intended for humans must at some point be tested on humans, or we don't really know what they do in humans. At present, human testing is generally the last phase undertaken. That's partly because humans are more expensive than test-tubes or mice. (I'm not sure how they compare with chimpanzees, given the prices that poor people in some parts of the world sell their children for.) But it is also partly because of the ethical problems of involving humans earlier.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-11-05T01:50:29.468Z · LW(p) · GW(p)

That's partly because humans are more expensive than test-tubes or mice. (I'm not sure how they compare with chimpanzees, given the prices that poor people in some parts of the world sell their children for.)

Note also that getting humans to experiment on by buying them from poor third world parents is generally frowned upon.

comment by Eugine_Nier · 2013-11-05T01:52:48.067Z · LW(p) · GW(p)

This is not the calculation being made. Using your numbers, experimenting on little girls needs to be at least 1.001 times as effective as experimenting on chimpanzees or mice to be worthwhile (because then you save an extra thousand lives for your thousand girls sacrificed).

Well, given that more than 1 in 1000 drugs that look promising in animals fail human trials, I'd say that is a ridiculously low bar to pass.

Replies from: army1987
comment by A1987dM (army1987) · 2013-11-05T17:37:03.910Z · LW(p) · GW(p)

How many drugs that look promising in one human trial fail to pass later human trials?

comment by Desrtopa · 2013-11-08T02:46:18.755Z · LW(p) · GW(p)

Given the millions killed by malaria and at most thousands of experimental subjects, it takes a heavy thumb on the scales of this argument to make the utilitarian calculation come out against.

If it would result in a timely cure for malaria which would result in the disease's global eradication or near-eradication, I would say that it would be worth kidnapping a few thousand children. But not only would a world where you could get away with doing so differ from our own in some very significant ways, I honestly doubt that a few thousand captive test subjects constitute a decisive and currently limiting factor in the progress of the research.

comment by Armok_GoB · 2013-11-04T01:51:18.936Z · LW(p) · GW(p)

Oh wait, we're talking about an entire society that's utilitarian and rational. In that case I'm (coordinating with everyone else via Aumann agreement) just dedicating the entire global population to a monstrous machine for maximally efficient FAI research, where 99% of people are suffering beyond comprehension with no regard for their own well-being in order to support a few elite researchers as they dedicate literally every second of their lives to thinking at maximal efficiency while pumped full of nootropics that'll kill them in a few years.

comment by Moss_Piglet · 2013-11-04T03:05:25.609Z · LW(p) · GW(p)

Would you be willing to endorse this proposal? If not, why not?

This particular proposal? No.

But mainly because we already have the tech to effectively cure malaria; it's called "DDT" and the only reason we aren't using it now is a lack of political will to challenge the environmental movement. If we lived in the Donaldverse where this proposal could be taken seriously, it wouldn't be hard to get a widespread mosquito eradication movement started; after all, sentimental concerns are the main reason we're handicapping ourselves here in the first place.

In general though, I think human experimentation does have merits. So much of what we know about our biology, especially the biology of the brain, comes from examining the victims of rare mutations, diseases, or accidents which impaired the functioning of a specific chemical pathway or tissue. If we could do organized knockout studies there is a good chance that we could gain a lot of knowledge which otherwise might take decades to uncover. But like a lot of other interesting ideas, the Nazis kind of messed this one up for the rest of us; there's really no chance of this sort of thing being allowed in the current political climate, so speculating about it is idle almost by definition.

Replies from: kalium, Richard_Kennaway
comment by kalium · 2013-11-04T07:08:24.819Z · LW(p) · GW(p)
  • DDT is widely used in the third world right now
  • DDT resistance in mosquitoes is rampant due to overuse
  • Current WHO regulations specify not using it where resistance is observed. Hardly the sort of regulation we have against DDT in the US (where malaria is not really a problem)

comment by Richard_Kennaway · 2013-11-04T15:37:36.018Z · LW(p) · GW(p)

But like a lot of other interesting ideas, the Nazis kind of messed this one up for the rest of us

That is backwards. It is not because the Nazis did it that experimenting on non-consenting human subjects is considered repugnant. It is because it is repugnant, that the Nazis are condemned for doing it.

there's really no chance of this sort of thing being allowed in the current political climate

Is that an expression of regret for lost possibilities? There is no chance of this sort of thing being allowed in any non-evil political climate.

Replies from: Lumifer, TheOtherDave
comment by Lumifer · 2013-11-05T17:18:57.059Z · LW(p) · GW(p)

There is no chance of this sort of thing being allowed in any non-evil political climate.

While that may be true, the catch may lie in finding "non-evil" political climate.

Here's what has been happening in reality in the 21st century: ...after 9/11, health professionals working with the military and intelligence services "designed and participated in cruel, inhumane and degrading treatment and torture of detainees". Medical professionals were in effect told that their ethical mantra "first do no harm" did not apply, because they were not treating people who were ill. (Link)

comment by TheOtherDave · 2013-11-04T15:39:09.498Z · LW(p) · GW(p)

There is no chance of this sort of thing being allowed in any non-evil political climate.

Would that it were so.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-11-04T15:58:59.834Z · LW(p) · GW(p)

It's more a matter of what evil means. If it is allowed, that's worth a good many points in the evil column of the report card.

It has certainly happened in political climates that were not the most evil we know of, but in that case it was enabled by a general lack of human regard for the class of people experimented on.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-11-04T16:11:30.710Z · LW(p) · GW(p)

It's more a matter of what evil means.

Yes, agreed. I'm not sure I know what "evil" means, but I'm fairly sympathetic to the view that, as the saying goes, good folk can allow evil to thrive by doing nothing.

comment by Armok_GoB · 2013-11-04T01:36:32.068Z · LW(p) · GW(p)

Then risk being the later man, while taking as many precautions as possible to preserve your intent.

comment by Eugine_Nier · 2013-11-05T02:24:13.186Z · LW(p) · GW(p)

Experts are simply people with more information and experience. But they are not necessarily as intelligent as you are, they often lack some of the most relevant information, and they usually have no skin in the game so they often don't even bother paying serious attention to the matter at hand.

Some of my biggest mistakes have been because, against my better judgment, I trusted the expert to know what he was doing. The main problem, I think, is that the expert is usually making a probabilistic decision based on the averages without bothering to apply the specific details that happen to alter the odds. And this doesn't even include the more serious, but less common problem of when the expert has a financial incentive to make a particular determination.

As we know, someone with a financial incentive to see things a certain way tends to have a very difficult time seeing it any other way, regardless of their level of expertise. The expert investment adviser wants you to invest in something, anything, and the more churn the better. The expert real estate salesman wants to sell your house quickly, with as little marketing expense as possible, and he doesn't care if you get the best price or not. The expert banker wants you to take out the largest loan he can get you to sign for, even if you can't really afford it. The expert IT guy just wants you to shut up, stop asking questions, and do what he tells you.

None of this means that expert advice is useless. Often they have a considerable amount of useful information. But that doesn't mean you should ever let them make your decisions for you. Listen and learn, but do not trust.

Vox Day

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-11-05T23:38:26.144Z · LW(p) · GW(p)

I upvoted this comment, but I want to add an important caveat. Whether, and how much, you trust your own judgment over that of an expert should depend at least in part on the degree to which you think your situation is unusual.

The IT guy wants you to shut up and go away, but (if in fact he is an expert and not a trained monkey reading a script) he's not going to spout random nonsense at you just to get you to leave. He's going to tell you things relevant to what is, in his experience, the usual situation.

Consider well whether you're sure your problem is some special snowflake. The IT guy has seen a lot of issues. Sometimes he can, before you finish your first sentence, know exactly what your problem is and how to fix it, and if he sounds bored when he tells you "just reboot it", that doesn't mean that he's wrong. If it costs you little, try his advice first.

Replies from: fubarobfusco, Eugine_Nier
comment by fubarobfusco · 2013-11-06T18:39:00.457Z · LW(p) · GW(p)

Whether, and how much, you trust your own judgment over that of an expert should depend at least in part on the degree to which you think your situation is unusual.

The expert also is better equipped to discern whether a situation is unusual, because the expert has seen more.

To the non-expert, something really mysterious and weird must be going on to explain these puzzling symptoms. Computer A can ping computer B, but B can't ping A? That's so strange! After all, ping is supposed to test whether two computers can talk to each other on the network, right? How could it possibly work one way but not the other? Is something wrong with the switch? Is one of the network cards broken? Is it a virus?!

To the expert, that's not unusual at all. One computer has the wrong subnet mask set. Almost every time. Like, that's 20 to 100 times more likely than a hardware problem or something broken in the network infrastructure, and it can be checked in seconds. And while the machine may have a virus too, that's not what causes these symptoms.
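A quick way to see the asymmetry (a toy sketch with made-up addresses; the /25 mask on the second host is the hypothetical typo):

# A toy sketch of the "wrong subnet mask" diagnosis: A considers B on-link
# and talks to it directly, but B thinks A is outside its subnet, so B's
# replies get handed to a gateway that may never route them back.

from ipaddress import ip_interface

a = ip_interface("192.168.1.10/24")   # correctly configured host
b = ip_interface("192.168.1.200/25")  # mistyped mask: /25 instead of /24

def considers_on_link(local, peer):
    """True if `local` would send to `peer` directly rather than via a gateway."""
    return peer.ip in local.network

print(considers_on_link(a, b))  # True:  A sends to B directly
print(considers_on_link(b, a))  # False: B tries to reply via its gateway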

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-11-06T19:51:57.206Z · LW(p) · GW(p)

The expert also is better equipped to discern whether a situation is unusual, because the expert has seen more.

Very true as well, though I will add the counter-caveat that the expert is usually biased toward concluding that your situation is not unusual. This is why many "tech support horror stories" have a bit where the narrator goes "... and then, when they finally got it through their heads that yes, I had tried restarting it five times, and no, I didn't have the wrong settings ..."

Replies from: fubarobfusco, Desrtopa
comment by fubarobfusco · 2013-11-06T22:25:52.748Z · LW(p) · GW(p)

I suspect there are a couple of things going on there.

One, it's important to distinguish consulting an expert from consulting a tech support script. Most of the time when you call up tech support, you're talking to a human being, but not an expert. You're talking to a person whose job it is to execute a script in order to relieve the experts from dealing with the common cases.

(And yes, it's in the interest of a consumer tech-support department to spend as little money on expensive experts as they can get away with — which is why when a Windows box has gotten laggy, they say "reboot it" and not "pop open the task manager and see what's using 100% of your CPU". They don't want to diagnose the long-term problem (your Scrabble game that you left running in the background has a bug that makes it busy-wait if it's back there for 26 hours); they want to make your computer work now and get you off the line. That's a different case from, for instance, an institutional IT department (at, say, a university) that has to maintain a passable reputation with the faculty who actually care about getting their research done.)

Two, there's narrative bias. The much-more-numerous cases where the simple fix works don't make for good "horror stories", so you don't hear them retold. Especially the ones where the poor user is now embarrassed because they have to admit they were outguessed by a tech-support script after giving the support tech a hard time.

(Yeah, I like good tech support too; that's part of why I use the local awesome option (Sonic.net) for my ISP instead of Comcast. I can call them up and talk to someone who actually knows what ARP means. But sometimes the problem does go away for months when you power-cycle the damn modem.)

comment by Desrtopa · 2013-11-08T02:53:15.598Z · LW(p) · GW(p)

Very true as well, though I will add the counter-caveat that the expert is usually biased toward concluding that your situation is not unusual.

Well, we don't know that they're actually biased in this direction until we know how their assessment of the probability that the usual thing is going on compares to the actual probability that the usual thing is going on.

Yes, there are plenty of "tech support horror stories" where the consultant has a hard time catching on to the fact that the complainant is not dealing with a usual or trivial problem, but for every one of those, there tends to be a slew of horror stories from the other end, of people getting completely wound up over something that the consultant can solve trivially, and failing to follow the simple advice needed to do so.

The consultants could be very well calibrated, and still occasionally be dramatically wrong. Beware availability bias.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-11-08T17:50:52.422Z · LW(p) · GW(p)

Note that a case where the tech tells you that it's usual problem X, and you deny this, asserting that your thing is a special snowflake, is NOT a case of the opposite bias. It's just a case of correct identification.

The opposite bias would be if the usual thing was going on, but the tech thought that it was some unusual thing.

Other IT-experienced people are welcome to correct me on this, but in my experience, the latter almost never happens, and when it does, it's mostly with newbie techies, recent hires/trainees, etc.

This makes it substantially more likely that tech people have a "usual problem bias" than that they have an "unusual problem bias" or that they are perfectly well-calibrated. The usual-problem bias could be small or it could be large, but the available evidence is fairly clear that it exists.

Replies from: Desrtopa
comment by Desrtopa · 2013-11-11T01:43:19.265Z · LW(p) · GW(p)

The point I was making was not that tech support is likely to have an "unusual problem bias," but that being correctly calibrated with respect to usual and unusual problems will tend to look like a "usual problem bias" when you examine in isolation the cases where they're wrong, because you would tend to observe cases where they need a significant amount of evidence to be persuaded of the presence of an unusual problem, but not the reverse.

If you examine cases where they're right, you may find a large number of cases where the customer insists that the problem is not addressed by the tech support's script, only to be proven wrong; these often appear in the horror stories posted by tech support. Thus, tech support may to some extent be rationally discounting evidence favoring unusual problems.
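To make that concrete, a toy calculation (all numbers invented) of why correct calibration looks like stubbornness if you only see the error cases:

# A toy model: a consultant who reasons correctly about base rates will
# still bet on "the usual problem" even when the customer's report points
# the other way, so every error they make is a missed unusual problem.

P_UNUSUAL = 0.05       # assumed base rate of genuinely unusual problems
SIGNAL_ACCURACY = 0.8  # chance the customer's report points the right way

# Bayes: P(unusual | report says "unusual")
posterior = (P_UNUSUAL * SIGNAL_ACCURACY) / (
    P_UNUSUAL * SIGNAL_ACCURACY + (1 - P_UNUSUAL) * (1 - SIGNAL_ACCURACY)
)
print(f"P(unusual | report says unusual) = {posterior:.2f}")  # ~0.17

# "Usual" stays the better bet despite the report, so the consultant's
# mistakes -- seen in isolation -- all look like a usual-problem bias.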

comment by Eugine_Nier · 2013-11-09T07:43:46.067Z · LW(p) · GW(p)

(if in fact he is an expert and not a trained monkey reading a script)

This brings up another related problem, namely how often supposed "experts" actually aren't.

comment by Eugine_Nier · 2013-11-02T05:38:11.526Z · LW(p) · GW(p)

Fallacies do not cease to be fallacies because they become fashions.

GK Chesterton

Replies from: jimmy
comment by jimmy · 2013-11-04T01:04:46.798Z · LW(p) · GW(p)

I don't really like quotes like this. It's not that it isn't true, and it's not that no one commits the error it warns against.

It's that no one who is blind to fallacies due to popularity is going to notice their mistake and change - it's too easy to agree with the quote without firing up the process that would lead you to making the mistake.

Good quotes will make it easy to put yourself in either position so that you can mentally bridge the two. If you're thinking "I can't imagine how they might make that mistake!", then you won't recognize that thought process when you go through it yourself.

comment by MC_Escherichia · 2013-11-04T23:18:42.489Z · LW(p) · GW(p)

The enemy of the enemy of my enemy is my enemy.

Harrap's First Law

Replies from: snafoo, beoShaffer
comment by snafoo · 2013-11-05T22:03:11.060Z · LW(p) · GW(p)

"The enemy of my enemy has their own relationship with me."

Replies from: simplicio
comment by simplicio · 2013-11-06T23:46:34.257Z · LW(p) · GW(p)

Yup.

comment by beoShaffer · 2013-11-07T06:42:44.144Z · LW(p) · GW(p)

The enemy of my enemy is my enemy's enemy. No more. No less.

Maxim 29

comment by NancyLebovitz · 2013-11-12T11:52:04.712Z · LW(p) · GW(p)

It turns out that when you are really really sleepy your favorite pieces of code are always the most 'obvious' ones. Thinking is not fun in the middle of the night, and it shouldn't be necessary all of the time.

--Paul

Found here.

comment by lukeprog · 2013-11-07T18:26:06.306Z · LW(p) · GW(p)

Nothing can be done [both] hastily and prudently.

Publilius Syrus

Replies from: dougclow, TheOtherDave
comment by dougclow · 2013-11-08T14:40:02.385Z · LW(p) · GW(p)

I'm not sure that's true in general. I can think of situations where the prudent course of action is to act as fast as possible. For instance, if you accidentally set yourself on fire at the cooker, the prudent thing to do is stop, drop and roll, and to do it hastily.

comment by TheOtherDave · 2013-11-07T18:44:05.423Z · LW(p) · GW(p)

The more I look at this, the less sure I am what "hastily" means.

More precisely... if I understand "hastily" to mean, roughly, "more rapidly/sloppily than prudence dictates", then this statement is trivially true. If I assume the statement is nontrivial, I'm not sure how to test whether something is being done hastily.

Replies from: Vaniver
comment by Vaniver · 2013-11-07T20:25:53.918Z · LW(p) · GW(p)

if I understand "hastily" to mean, roughly, "more rapidly/sloppily than prudence dictates", then this statement is trivially true.

Trivial statements are often useful as reminders of facts, particularly when those facts are tradeoffs we would rather not have to face.

Replies from: HalMorris
comment by HalMorris · 2013-11-08T15:32:24.608Z · LW(p) · GW(p)

A good observation. I was going to suggest throwing a live hand grenade out of your tent as a counter-example -- but you don't want to do it so hastily that it misses the opening, bounces back, and lands in your lap.

comment by Halfwitz · 2013-11-03T03:22:44.045Z · LW(p) · GW(p)

Ignorance isn't a concatenation operator.

James Argo

Replies from: tut
comment by tut · 2013-11-03T10:22:37.493Z · LW(p) · GW(p)

Would you mind unpacking that a bit? I don't understand it, and neither DuckDuckGo nor Google turns up anything but people on Twitter and Hacker News quoting the same sentence.

Replies from: Halfwitz
comment by Halfwitz · 2013-11-03T16:14:29.122Z · LW(p) · GW(p)

puts "quantum" + " consciousness"

=> quantum consciousness

Here ignorance is acting as the "+" operator, binding the two strings into one. On reflection, it's not as clever as I thought it was when I read it on Hacker News. Rationality quotes should be scrutable. I'll retract.

comment by Eugine_Nier · 2013-11-02T05:50:12.315Z · LW(p) · GW(p)

all interesting behavior is overdetermined

Eric Raymond

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2013-11-02T17:39:46.021Z · LW(p) · GW(p)

I don't get it.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-11-02T19:29:54.824Z · LW(p) · GW(p)

It’s something I learned from animal ethology. An “overdetermined” behavior is one for which there are multiple sufficient explanations. To unpack: “For every interesting behavior of animals and humans there is more than one valid and sufficient causal theory.” Evolution likes overdetermined behaviors; they serve multiple functions at once.

Eric Raymond

Google Is My Friend.

comment by Eugine_Nier · 2013-11-02T05:27:50.664Z · LW(p) · GW(p)

You often see in the papers things saying events we just saw should happen every ten thousand years, hundred thousand years, ten billion years. Some faculty here in this university had an event and said that a 10-sigma event should happen every, I don't know how many billion years. Do you ever regard how worrisome it is, when someone makes a statement like that, "it should happen every ten thousand years," particularly when the person is not even two thousand years old?

So the fundamental problem of small probabilities is that rare events don't show in samples, because they are rare. So when someone makes a statement that this in the financial markets should happen every ten thousand years, visibly they are not making a statement based on empirical evidence, or computation of the odds, but based on what? On some model, some theory.

Nassim Taleb

Replies from: DanArmak, nshepperd
comment by DanArmak · 2013-11-02T12:56:15.329Z · LW(p) · GW(p)

they are not making a statement based on empirical evidence, or computation of the odds, but based on what? On some model, some theory.

What's the difference between "based on computation of the odds" and "based on some model"?

Replies from: Lumifer, Protagoras
comment by Lumifer · 2013-11-04T17:25:10.417Z · LW(p) · GW(p)

What's the difference between "based on computation of the odds" and "based on some model"?

Taleb is doing some handwaving here.

"Some model" in this context is just the assumption of a specific probability distribution. So if, for example, you believe that the observation values are normally distributed with the mean of 0 and the standard deviation of 1, the chance of seeing a value greater than 3 (a "three-sigma value") is 0.13%. The chance of seeing a value greater than 6 (a "six-sigma value") is 9.87e-10. E.g. if your observations are financial daily returns, you effectively should never ever see a six-sigma value. The issue is that in practice you do see such values, pretty often, too.

The problem with Taleb's statement is that estimating the probabilities of seeing certain values in the future necessarily requires some model, even if an implicit one. Without a model you cannot do the "computation of the odds", unless you are happy with the conclusion that the probability of seeing a value you've never seen before is zero.

Taleb's criticism of the default assumption of normality in much of financial analysis is well-founded. But when he starts to rail against models and assumptions in general, he's being silly.

Replies from: army1987
comment by A1987dM (army1987) · 2013-11-04T20:31:28.288Z · LW(p) · GW(p)

So, this.

Replies from: Lumifer
comment by Lumifer · 2013-11-04T21:04:51.846Z · LW(p) · GW(p)

Well, yeah, sure. Yvain wrote it up nicely, but the main point -- that what the model says and how much you trust the model itself are quite different things -- is not complicated.

To get back to Taleb, he is correct in pointing out that estimating what the tails of an empirical distribution look like is very hard, because you see little (or, sometimes, no) data from those tails. But if you need an estimate, you need an estimate, and saying "no model is good enough" isn't very useful.

Replies from: Protagoras
comment by Protagoras · 2013-11-04T21:20:03.277Z · LW(p) · GW(p)

But surely Taleb isn't saying "no model is good enough." He explicitly advocates greater care in model-building and greater awareness of the risks of error, not people throwing up their hands and giving up. He says at the end:

We cannot escape it, unfortunately; in finance, ever since we left the stone-age, our random variables became more and more complex. We cannot escape it. We can become more robust.

Replies from: Lumifer
comment by Lumifer · 2013-11-04T21:43:27.446Z · LW(p) · GW(p)

But surely Taleb isn't saying "no model is good enough."

Actually, yes, he is. He is not terribly consistent, but when he goes into his "philosopher" mode he rants against all models.

In fact, his trademark concept of a black swan is precisely what no model can predict.

comment by Protagoras · 2013-11-04T16:33:04.939Z · LW(p) · GW(p)

Maybe it isn't the clearest way of describing it, but it seems that by "computation of odds" he means using at least some observation of frequencies, and is contrasting this with computing the probability of events for which there have as yet been no occurrences, so no observation of frequency has been possible.

Replies from: DanArmak
comment by DanArmak · 2013-11-05T00:36:49.296Z · LW(p) · GW(p)

No two real-world events are exactly identical. You always need some model to generalize: to say that the events you observed resemble the ones you are predicting in some relevant way, so that the observed frequency can be carried over into your prediction. Without a model, all you can say is that if the circumstances were to repeat exactly, then so would the outcome. And that just isn't very useful.

comment by nshepperd · 2013-11-02T08:29:35.718Z · LW(p) · GW(p)

Hmm. But, if you multiply "once in every ten thousand years" by all the different kinds of things that could be said to happen once every ten thousand years, don't you get something closer to "many times a day"?

Replies from: Richard_Kennaway, DanielLC
comment by Richard_Kennaway · 2013-11-02T09:02:41.305Z · LW(p) · GW(p)

But, if you multiply "once in every ten thousand years" by all the different kinds of things that could be said to happen once every ten thousand years, don't you get something closer to "many times a day"?

The computation is not relevant, because when you make a prediction that, say, some excursion in the stock market will happen only once in ten thousand years, you are making a prediction about that specific thing, not ten thousand things. It will be a thing you have never seen, because if you had seen it happen, you could not claim it would only happen once in ten thousand years—the observation would be a refutation of that claim. Since you have not seen it, you are deriving it from a theory, and moreover a theory applied at an extreme it has never been tested at. For such a prediction to be reliable, you need to know that your theory actually grasps the basic mechanism of the phenomenon, so that the observations that you have been able to make justify placing confidence in its extremes. This is a very high bar to reach. Here are a few examples of theories where extremes turned out to differ from reality:

Newtonian gravity --> precession of Mercury

Ideal gas laws --> non-ideal gases

Daltonian atomic theory --> multiple isotopes of the same element

Replies from: nshepperd
comment by nshepperd · 2013-11-02T10:00:33.249Z · LW(p) · GW(p)

The computation is not relevant,

The computation is directly relevant, given that Taleb is talking about how often he sees "should only happen every N years" in newspapers and faculty news. Doesn't he realise how many things newspapers report on? Astronomy faculties are pretty good for this too, since they watch ridiculous numbers of stars at once.

You can't just ignore the multiple comparisons problem by saying you're only making a prediction about "one specific thing". What about all the other predictions about the stock market you made, that you didn't notice because they turned out to be boringly correct?

Intuition pump: my theory says that the sequence of coinflips HHHTHHTHTT-THHTHHHTT-TTHTHTTTTH-HTTTHTHHHTT, which I just observed, should happen about once every 7 million years.
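
(One way to recover a figure of that order, as a back-of-the-envelope Python sketch: assume nonstop flipping at one flip every five seconds, so one non-overlapping 40-flip trial every 200 seconds. The flip rate is an assumption for illustration, not part of the claim.)

    seconds_per_trial = 40 * 5                   # one 40-flip sequence per 200 s (assumed)
    expected_seconds = 2**40 * seconds_per_trial
    print(expected_seconds / (3600 * 24 * 365))  # ~7.0e6, i.e. about 7 million years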

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-11-02T15:34:11.981Z · LW(p) · GW(p)

Intuition pump: my theory says that the sequence of coinflips HHHTHHTHTT-THHTHHHTT-TTHTHTTTTH-HTTTHTHHHTT, which I just observed, should happen about once every 7 million years.

Intuition pump: if I choose an interesting sequence of coinflips in advance, I will never see it actually happen if the coinflips are honest. There aren't enough interesting sequences of 40 coinflips to ever see one. Most of them look completely random, and in terms of Kolmogorov complexity, most of them are: they cannot be described much more compactly than by just writing them out.
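
(A crude counting sketch of that last point in Python; the 30-bit cutoff is an arbitrary illustration, not anything in the original:)

    total = 2**40        # all possible 40-flip sequences
    short = 2**30 - 1    # binary descriptions shorter than 30 bits
    print(short / total) # ~0.001: under 0.1% of sequences can compress by 10+ bits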

Now, we have a good enough understanding of the dynamics of tossed coins to be fairly confident that only deliberate artifice would produce a sequence of, say, 40 consecutive heads. We do not have such an understanding of the sort of things that appear in the news as "should only happen every N years".

Feynman on the same theme.

Replies from: Omegaile
comment by Omegaile · 2013-11-03T21:07:57.857Z · LW(p) · GW(p)

There aren't enough interesting sequences of 40 coinflips to ever see one.

Every sequence of 40 coin flips is interesting. Proof: construct a 1-to-1 map from the sequences of 40 coin flips to a subset of the natural numbers, by making H=1 and T=0 and reading the sequence as a binary representation. Proceed by showing that every natural number is interesting.
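
(The map in question, as a quick Python sketch using the sequence from upthread:)

    seq = "HHHTHHTHTT-THHTHHHTT-TTHTHTTTTH-HTTTHTHHHTT".replace("-", "")
    n = int(seq.replace("H", "1").replace("T", "0"), 2)
    print(n)  # the natural number this particular sequence encodes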

comment by DanielLC · 2013-11-16T08:43:21.885Z · LW(p) · GW(p)

If you made ten million predictions of things that happen once every ten thousand years, and about three times a day one of them happens, then it would be sensible to conclude that a given one happens about once every ten thousand years. Most people don't do this, however. If someone managed to make ten million such predictions, they'd likely end up with a lot more than three of them happening a day.
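
(The arithmetic behind "about three times a day", as a sketch assuming 365-day years:)

    predictions = 10_000_000
    years_per_event = 10_000
    print(predictions / (years_per_event * 365))  # ~2.7 events per day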

comment by Stabilizer · 2013-11-02T01:44:47.562Z · LW(p) · GW(p)

I find for myself that my first thought is never my best thought. My first thought is always someone else’s; it’s always what I’ve already heard about the subject, always the conventional wisdom. It’s only by concentrating, sticking to the question, being patient, letting all the parts of my mind come into play, that I arrive at an original idea.

-William Deresiewicz

Replies from: gwern
comment by gwern · 2013-11-02T02:01:15.970Z · LW(p) · GW(p)

Dupe

comment by Jayson_Virissimo · 2013-11-06T05:58:40.878Z · LW(p) · GW(p)

Rationality is only bridled irrationality.

-- Bas van Fraassen, Laws and Symmetry

Replies from: gwern, simplicio
comment by gwern · 2013-12-02T00:00:05.097Z · LW(p) · GW(p)

I think Hume's 'reason is the slave of the passions' expresses the sentiment more clearly.

comment by simplicio · 2013-11-06T23:40:09.761Z · LW(p) · GW(p)

Rescue by saying

Rationality is only bridled heuristics.

?

comment by James_K · 2013-11-02T04:44:43.434Z · LW(p) · GW(p)

Rachel: I'll have to write that into the new gospel.

E-Merl: New gospel?

Rachel: Gospels should be updated regularly.

Guilded Age

Edit: misspelling of "write" corrected.

Replies from: DanArmak
comment by DanArmak · 2013-11-02T12:58:01.190Z · LW(p) · GW(p)

Write, not right.

Sorry if you feel this is nitpicky; it broke up my concentration.

Replies from: James_K
comment by James_K · 2013-11-02T18:44:39.996Z · LW(p) · GW(p)

Thanks, fixed.

comment by WalterL · 2013-11-05T23:32:53.006Z · LW(p) · GW(p)

In fact, the more you ponder it, the more inevitable it seems. Evolution gave us the cognition we needed, nothing more. To the degree we relied on metacognition and casual observation to inform our self-conception, the opportunistic nature of our cognitive capacities remained all but invisible, and we could think ourselves the very rule, stamped not just in the physical image of God, but in His cognitive image as well. Like God, we had no back side, nothing to render us naturally contingent. We were the motionless centre of the universe: the earth, in a very real sense, was simply enjoying our ride. The fact of our natural, evolutionarily adventitious componency escaped us because the intuition of componency requires causal information, and metacognition offered us none.

-Scott Bakker

Replies from: johnlawrenceaspden
comment by johnlawrenceaspden · 2013-11-10T22:04:58.230Z · LW(p) · GW(p)

This looks like it might mean something. Can anyone dumb it down to the point where I can understand it?

Replies from: TheOtherDave
comment by TheOtherDave · 2013-11-11T00:42:53.291Z · LW(p) · GW(p)

The basic gist of it seems to be that we don't have, and ought not expect evolved systems like ourselves to have, the cognitive capability to understand our cognitive limitations, at least not without information about the causes of those limitations that comes from something other than "metacognition".

I'm not sure what Bakker means by "metacognition"; if he means this I'm pretty confident he's just wrong.

comment by Nic_Smith · 2013-11-10T15:09:09.335Z · LW(p) · GW(p)

How would you move Mount Fuji?

Take some time. Think about it.

Got an answer?

Good.

Throw it away.

You can't move Mount Fuji.

-- Stefan Kendall

Replies from: itaibn0, AndHisHorse
comment by itaibn0 · 2013-11-10T16:38:43.524Z · LW(p) · GW(p)

I bet he believes you can't walk on the moon either.

Replies from: Nic_Smith
comment by Nic_Smith · 2013-11-10T18:09:48.999Z · LW(p) · GW(p)

Yet, no one has been on the moon in decades. Environmental circumstances cannot be ignored. You can't go to the moon right now -- maybe in some years, most likely not. "What can a twelfth-century peasant do to save themselves from annihilation? Nothing."

Replies from: AndHisHorse
comment by AndHisHorse · 2013-11-10T18:46:12.942Z · LW(p) · GW(p)

That is true. There are some things that we cannot do. There are some things that we cannot do yet. There are some things that we can do, but have not.

The objection to the quote is that it seems to place "moving Mt. Fuji", as an example of some larger class, in the first class not arbitrarily, but in spite of the evidence (the fact that the audience has come up with an answer, if indeed they have). While the surrounding article makes a good point, the quote in isolation smacks of irrational defeatism.

As an alternative to the quote, I propose the following:

How would you move Mount Fuji?

Take some time. Think about it.

Got an answer?

I'll bet it's tremendously expensive, potentially devastating to the surrounding landscape, and requires the improbable cooperation of a lot of other entities.

Wouldn't it be so much easier to go around it instead?

Replies from: TheOtherDave
comment by TheOtherDave · 2013-11-11T00:53:48.268Z · LW(p) · GW(p)

Well, if we're proposing alternatives, I would probably reduce this to "Before setting out to move a mountain, consider what moving it accomplishes and whether there are cheaper ways of accomplishing that."

comment by AndHisHorse · 2013-11-10T18:01:25.638Z · LW(p) · GW(p)

Why not?

comment by Ishaan · 2013-11-01T19:13:32.631Z · LW(p) · GW(p)

Have you ever heard the phrase "rich as a Lannister"? ...Of course you have! You're a smart man. You know who the Lannisters are. I am a Lannister. Tyrion, son of Tywin. Of course, you have also heard the phrase "a Lannister always pays his debts". If you deliver a message from me to Lady Arryn, I will be in your debt. I will owe you gold... if you deliver the message, and I live, which I very much intend to do.

-Tyrion Lannister, Game of Thrones

Replies from: Alejandro1
comment by Alejandro1 · 2013-11-01T20:04:07.192Z · LW(p) · GW(p)

I'm always eager to upvote a Game of Thrones quote, but unfortunately I don't see the rationality insight here beyond an ordinary quid pro quo.

Replies from: Ishaan, simplicio
comment by Ishaan · 2013-11-01T20:24:10.445Z · LW(p) · GW(p)

Tyrion is frequently put into situations where he relies on his family's reputation for paying debts.

It's a real-life Newcomb-like problem - specifically a case of Parfit's Hitchhiker - illustrating the practical benefits of being seen as the sort of agent who keeps promises. It's not an ordinary quid pro quo because there is, in fact, no incentive for Tyrion to keep his end of the bargain once he gets what he wants, other than to be seen as the sort of person who keeps his bargains.

Think it's a stretch?

Replies from: Lumifer, V_V
comment by Lumifer · 2013-11-01T20:26:09.307Z · LW(p) · GW(p)

It's a real-life

Ahem.

Replies from: Ishaan
comment by Ishaan · 2013-11-01T20:35:18.750Z · LW(p) · GW(p)

Er...right. Realistic, I should have said!

We often construct such ridiculous scenarios to illustrate this sort of thing ..."You're in a desert and a selfish pseudo-psychic drives by"? Really?

I enjoyed the fact that Parfit's Hitchhiker came up as a pop-culture reference, in a situation that arose organically.

Replies from: Lumifer
comment by Lumifer · 2013-11-01T20:58:45.429Z · LW(p) · GW(p)

We often construct such ridiculous scenarios

The point of these scenarios is to make the issue as "clean" as possible, to strip away all the unnecessary embellishments which usually only cause people to fight the hypothetical.

organically

I guess what's inside the screenwriter's skull is organic... :-)

But really, since the invention of writing pretty much every writer who addressed the issue pointed out the importance of one's reputation of keeping promises. There are outright commands (e.g. Numbers 30:2 If a man ... swears an oath to bind himself by a pledge, he shall not break his word. He shall do according to all that proceeds out of his mouth.) and innumerable stories and fables about good things which happen to those who keep their promises and bad things which happen to those who don't.

Replies from: Ishaan
comment by Ishaan · 2013-11-01T21:38:07.745Z · LW(p) · GW(p)

But really, since the invention of writing pretty much every writer who addressed the issue pointed out the importance of one's reputation of keeping promises.

I don't disagree with what you say, but I do disagree with the connotation that things which are not original or counter intuitive are not worth pointing out.

The last time this show was quoted, it basically amounted to "try hard to win, give it everything", which is also something that people have been saying since the beginning of writing. All quote threads are filled with things that have been said again and again in slightly different ways. Even outside of quote threads, it's worth rephrasing things. Pretty much every Lesswrong post has been conceptually written before by someone, with a few rare exceptions.

innumerable stories and fables about good things which happen to those who keep their promises and bad things which happen to those who don't

Yes, but usually it's a punishment or reward issued directly by the other party, or by forces of nature... not about the practical value of going out of your way to establish a reputation.

comment by V_V · 2013-11-01T21:32:06.691Z · LW(p) · GW(p)

Newcomb-like problem - specifically a case of Parfit's Hitchhiker

Parfit's Hitchhiker is not a "Newcomb-like problem". In fact, it's not even obvious that it is actually a proper decision problem: the only decision maker is the driver and you can't control their decision. You only get to decide if you can precommit.

Anyway, Google doesn't turn up any reference to Parfit's Hitchhiker independent of LessWrong. Did this dilemma really originate with Parfit, or did EY make it up?

Replies from: Jayson_Virissimo, shminux, Larks, Ishaan, army1987
comment by Jayson_Virissimo · 2013-11-01T23:23:19.804Z · LW(p) · GW(p)

...Google doesn't turn up any reference to Parfit's Hitchhiker independent of LessWrong.

Yes, it does. Try using syntax similar to this:

parfit hitchhiker -"less wrong"

Replies from: V_V
comment by V_V · 2013-11-02T09:17:40.014Z · LW(p) · GW(p)

Thanks!

comment by Shmi (shminux) · 2013-11-01T22:16:05.051Z · LW(p) · GW(p)

Anyway, Google doesn't turn up any reference to Parfit's Hitchhiker independent of LessWrong. Did this dilemma really originate with Parfit, or did EY make it up?

Google points to this as the original reference. It's paywalled, so I cannot check.

The paper also apparently mentions Kavka's toxin puzzle, an interesting exercise in rationality and precommitment occasionally discussed on LW.

Replies from: V_V
comment by V_V · 2013-11-02T09:18:49.268Z · LW(p) · GW(p)

Google points to this as the original reference. It's paywalled, so I cannot check.

Thanks. I've also found this one, which is not paywalled.

comment by Larks · 2013-11-01T22:17:30.921Z · LW(p) · GW(p)

It's in Reasons and Persons.

Replies from: V_V
comment by V_V · 2013-11-02T09:18:01.365Z · LW(p) · GW(p)

Thanks!

comment by Ishaan · 2013-11-01T21:55:36.206Z · LW(p) · GW(p)

Parfit's Hitchhiker is not a "Newcomb-like problem"

Then you're defining it differently from the way I, and others, are.

My req's for a Newcomblike problem:

1) Individuals who make certain decisions seem to win at higher rates than individuals who do not.

2) As far as you know, the act of decision doesn't causally affect the likelihood of a win.

What were your reqs?

Replies from: V_V
comment by V_V · 2013-11-02T09:42:02.983Z · LW(p) · GW(p)

1) Individuals who make certain decisions seem to win at higher rates than individuals who do not. 2) As far as you know, the act of decision doesn't causally affect the likelihood of a win.

These two requirements seem inconsistent.

I'd define a decision problem to be Newcomb-like if the payoff and the agent's mental state (preferences, beliefs, decision procedures) are not independent conditional on the agent's decision.
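
(A toy Monte Carlo illustration of that definition, sketched in Python. The 90% predictor accuracy and the 10% chance of acting against one's disposition are invented parameters for illustration only:)

    import random
    from collections import defaultdict

    random.seed(0)

    def other(x):
        return "two-box" if x == "one-box" else "one-box"

    def trial():
        disposition = random.choice(["one-box", "two-box"])
        # The predictor reads the disposition, with 90% accuracy (assumed).
        prediction = disposition if random.random() < 0.9 else other(disposition)
        # The agent usually acts on its disposition, but trembles 10% of the time.
        decision = disposition if random.random() < 0.9 else other(disposition)
        box_b = 1_000_000 if prediction == "one-box" else 0
        return disposition, decision, box_b + (1_000 if decision == "two-box" else 0)

    sums = defaultdict(lambda: [0, 0])
    for _ in range(100_000):
        disposition, decision, payoff = trial()
        if decision == "one-box":  # condition on the decision
            sums[disposition][0] += payoff
            sums[disposition][1] += 1

    for disposition, (total, n) in sums.items():
        print(disposition, round(total / n))
    # Conditional on the same decision, expected payoff still differs by
    # disposition (~900,000 vs ~100,000), so payoff and mental state are
    # not independent given the decision.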

Some of the problems on the list you linked are Newcomb-like, others are commitment problems, and others aren't even decision problems.

Replies from: Ishaan
comment by Ishaan · 2013-11-02T23:50:34.468Z · LW(p) · GW(p)

My def. isn't inconsistent. Those who buy computers are less likely to die of malaria.

^That's an instance of Solomon's problem, which is considered a Newcomb-like problem.

It's the same as the fact that those who one-box are more likely to get more money in Newcomb's. A third factor (socioeconomic status, the CGTA allele, the agent's mental state prior to decision) accounts for the variance.

Replies from: V_V
comment by V_V · 2013-11-03T08:03:36.608Z · LW(p) · GW(p)

My def. isn't inconsistent. Those who buy computers are less likely to die of malaria,

Once you condition on all the available evidence, such as socioeconomic status, these two events become independent.
Likewise, in Solomon's problem, a genetic test that detects the lesion would destroy the Newcomb-like structure of the dilemma.

comment by A1987dM (army1987) · 2013-11-02T09:44:14.301Z · LW(p) · GW(p)

Anyway, Google doesn't turn any reference to Parfit's Hitchhiker independent of LessWrong. Has this dilemma really originated from Parfit or did EY make it up?

AFAICT Kavka's toxin puzzle is isomorphic to it (except that in this case the billionaire's motives are alien).

comment by simplicio · 2013-11-01T20:20:57.395Z · LW(p) · GW(p)

Something about the opposite of Parfit's hitchhiker? Developing a reputation for following through on promises one could renege on.

comment by Eugine_Nier · 2013-11-05T02:08:34.776Z · LW(p) · GW(p)

If a meme complex is selected for virulence, if for example it is transmitted by street corner preaching, it is going to be a cult, and will have characteristics likely to be harmful to the host.

If, however, a meme complex is parentally transmitted, then it is going to reflect the characteristics of those who successfully reproduce, hence likely to be beneficent, providing divine authority for behaviors that parents know to be beneficial, behaviors which provide long term rewards but not short term rewards.

Jim

Replies from: Adele_L, NancyLebovitz, scav, army1987, Nic_Smith
comment by Adele_L · 2013-11-05T02:55:20.964Z · LW(p) · GW(p)

No more than 5 quotes per person per monthly thread, please.

I've counted 7 from you.

comment by NancyLebovitz · 2013-11-06T18:11:46.620Z · LW(p) · GW(p)

I think you've got a half-truth there.

Memes spread horizontally aren't selected for virulence, but memes spread generationally have at least been selected for not being utterly deadly.

comment by scav · 2013-11-05T17:28:24.829Z · LW(p) · GW(p)

I don't think there's any way to discriminate between crazy things your mum believes and crazy things the man on the street corner believes.

I also think the virulence of a meme complex, like the virulence of a virus, is very dependent on the context, i.e. the population it is introduced to and the other memes it competes with in that population.

"what do you think you know, and how do you think you know it?" is snappy enough to be "virulent" and, I think, not too harmful to the individual host.

comment by A1987dM (army1987) · 2013-11-05T15:27:24.003Z · LW(p) · GW(p)

I wish “likely to be beneficent, providing divine authority for behaviors that parents know to be beneficial, behaviors which provide long term rewards but not short term rewards” (for some much less than literal value of “divine”) was a good description of “those who successfully reproduce”. Unfortunately it seems like “too dumb to use a condom” is a much more accurate description. Ever seen 16 and Pregnant? (Or, [pointless dig at the interlocutor removed], who reproduces more in the present-day US, white people or black people?)

http://en.wikipedia.org/wiki/Fertility_and_intelligence

Replies from: scav, Eugine_Nier
comment by scav · 2013-11-05T17:39:36.936Z · LW(p) · GW(p)

From the point of view of your genes, "likely to reproduce" and "beneficial" are exactly the same thing. That's trivially true.

Also not particularly interesting even if true: crazy beliefs that get you killed or prevent you from breeding have to spread non-parentally. They don't have to be particularly persuasive or virulent; there just has to be some other mechanism (e.g. state control of education, military discipline, enjoyable but idiotic forms of mass entertainment) to spread them.

The prevalence of these means doesn't even have to depend on those spreading the memes adopting them themselves: the promoters can have an economic motive for others to believe crazy things, just as drug dealers have a motive to sell heroin rather than use it.

comment by Eugine_Nier · 2013-11-06T05:16:39.023Z · LW(p) · GW(p)

who reproduces more in the present-day US, white people or black people?

The welfare subsidies that make this a viable strategy are a very recent phenomenon.

Replies from: army1987
comment by A1987dM (army1987) · 2013-11-06T09:27:40.504Z · LW(p) · GW(p)

More educated, healthier, wealthier people reproducing less is not a phenomenon restricted to 21st-century America.

comment by Nic_Smith · 2013-11-05T03:06:08.105Z · LW(p) · GW(p)

Individual (or even social) benefit and reward do not necessarily follow from reproductive fitness; people could be utterly miserable but nonetheless have children* with the same beliefs.

*I am not certain how literally to interpret this from the quote.

comment by Cyan · 2013-11-12T11:32:07.205Z · LW(p) · GW(p)

-- my wife, on my Facebook timeline, although it's clearly not original to her.

Replies from: Larks
comment by Larks · 2013-11-14T02:21:40.937Z · LW(p) · GW(p)

This is not a quote; it is an unrelated picture with some text we would not consider worthy of inclusion on its own merits.

Replies from: Cyan
comment by Cyan · 2013-11-14T03:02:55.746Z · LW(p) · GW(p)

Who is this "we" you speak of?

If your objection is that it's mere text and not a quote, well, allow me to add an attribution. I do think you're taking the whole quotes thread idea a little more seriously than it was ever intended to be taken.

Replies from: Larks
comment by Larks · 2013-11-15T03:08:10.010Z · LW(p) · GW(p)

If your objection is that it's ... not a quote,

No, I think that even if Russell or Jaynes had said it, we (by which I mean the LessWrong community, or whatever that might mean) wouldn't consider it worthy of inclusion.

I don't want to see the quotes thread turn into the 'put Chesterton quotes on cat pictures' thread, let alone the 'put sub-standard quotes on non-cat pictures' thread.

Replies from: Cyan
comment by Cyan · 2013-11-15T15:24:28.361Z · LW(p) · GW(p)

I deny that the quote is sub-standard. If you think it is, I suggest that you're not looking hard enough for the message.