Rationality Quotes August 2013

post by Vaniver · 2013-08-02T20:59:04.223Z · score: 7 (7 votes) · LW · GW · Legacy · 736 comments

Another month has passed and here is a new rationality quotes thread. The usual rules are:

  • Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself.
  • Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
  • No more than 5 quotes per person per monthly thread, please.

736 comments

Comments sorted by top scores.

comment by Zando · 2013-08-03T06:50:10.080Z · score: 56 (56 votes) · LW · GW

when trying to characterize human beings as computational systems, the difference between “person” and “person with pencil and paper” is vast.

Procrastination and The Extended Will 2009

comment by bentarm · 2013-08-23T19:00:24.118Z · score: 2 (4 votes) · LW · GW

Am I missing something? Why is this quote so popular? Is there something more to it than "you can do harder sums with a pencil and paper than you can in your head"? Or, I guess "writing stuff down is sometimes useful".

comment by Nornagest · 2013-08-23T19:21:18.876Z · score: 3 (3 votes) · LW · GW

Pencil and paper is far more reliable than your native memory, and also gives you a way to work on more than seven or so objects at once. Either one would expand your capabilities significantly. Taken together they're huge, at least when you're working with things that natural selection hasn't optimized you for (i.e. yes for abstract math; not so much for facial recognition).

comment by bentarm · 2013-08-23T19:23:17.435Z · score: -1 (1 votes) · LW · GW

Right - but did anyone not know that?

comment by Nornagest · 2013-08-23T19:27:17.516Z · score: 7 (7 votes) · LW · GW

Facts which seem obvious in retrospect are often less salient than they appear, outside of their native contexts. If I'd been asked to describe humans as computational systems before reading the ancestor, pen and paper probably wouldn't be one of the things I'd have taken into account.

comment by Ishaan · 2013-10-15T01:34:10.079Z · score: 1 (1 votes) · LW · GW

Is there something more to it than "you can do harder sums with a pencil and paper than you can in your head"?

Yes.

The paper is about the importance of environmental scaffolding on behavior. One of the topics it touches on is akrasia in college students, and it hypothesizes that this is because they lost their usual scaffolding - the routine of their homes, their parents, etc.

The main point is that models of the human mind need to take into account the extent to which humans rely on external objects for computation. Paper and pencil are an extreme example of this.

The quote itself has further implications. In my opinion, this is the single most important technological development. As far as I'm concerned, the "Singularity" began when humans began using things other than their brains to store and process information. That was the beginning of the intelligence explosion, that was the first time we started doing something qualitatively different.

Everyone realizes that writing stuff down is useful, but since we do it all the time, not everyone realizes what a big deal it is. The important insight is that to write is to make the piece of paper a component of your memory and processing power.

comment by BT_Uytya · 2013-08-03T13:39:05.277Z · score: 53 (53 votes) · LW · GW

The fear people have about the idea of adherence to protocol is rigidity. They imagine mindless automatons, heads down in a checklist, incapable of looking out their windshield and coping with the real world in front of them. But what you find, when a checklist is well made, is exactly the opposite. The checklist gets the dumb stuff out of the way, the routines your brain shouldn’t have to occupy itself with (Are the elevator controls set? Did the patient get her antibiotics on time? Did the managers sell all their shares? Is everyone on the same page here?), and lets it rise above to focus on the hard stuff (Where should we land?).

Here are the details of one of the sharpest checklists I’ve seen, a checklist for engine failure during flight in a single-engine Cessna airplane—the US Airways situation, only with a solo pilot. It is slimmed down to six key steps not to miss for restarting the engine, steps like making sure the fuel shutoff valve is in the OPEN position and putting the backup fuel pump switch ON. But step one on the list is the most fascinating. It is simply: FLY THE AIRPLANE. Because pilots sometimes become so desperate trying to restart their engine, so crushed by the cognitive overload of thinking through what could have gone wrong, they forget this most basic task. FLY THE AIRPLANE. This isn’t rigidity. This is making sure everyone has their best shot at survival.

-- Atul Gawande, The Checklist Manifesto

comment by David_Gerard · 2013-08-05T20:49:23.426Z · score: 11 (11 votes) · LW · GW

I concur in the general case. But I would suggest the people complaining work in computers. I'm a Unix sysadmin; my job description is to automate myself out of existence. Checklist=shell script=JOB DONE, NEXT TASK TO ELIMINATE.

It turns out, thankfully, that work expands to fill the sysadmins available. Because even in the future, nothing works. I fully expect to be able to work to 100 if I want to.

comment by MinibearRex · 2013-08-04T06:07:56.644Z · score: 43 (43 votes) · LW · GW

I've got to start listening to those quiet, nagging doubts.

Calvin

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-04T17:01:11.927Z · score: 16 (20 votes) · LW · GW

This phrase was explicitly in my mind back when I was generalizing the "notice confusion" skill.

comment by snafoo · 2013-08-04T17:53:01.322Z · score: 4 (4 votes) · LW · GW

When you were what?

comment by Ben Pace (Benito) · 2013-08-04T17:58:14.782Z · score: 8 (12 votes) · LW · GW

Rationality 101 ;^)

comment by Particleman · 2013-08-02T06:07:05.805Z · score: 37 (41 votes) · LW · GW

In 2002, Wizards of the Coast put out Star Wars: The Trading Card Game designed by Richard Garfield.

As Richard modeled the game after a miniatures game, it made use of many six-sided dice. In combat, cards' damage was designated by how many six-sided dice they rolled. Wizards chose to stop producing the game due to poor sales. One of the contributing factors given through market research was that gamers seem to dislike six-sided dice in their trading card game.

Here's the kicker. When you dug deeper into the comments they equated dice with "lack of skill." But the game rolled huge amounts of dice. That greatly increased the consistency. (What I mean by this is that if you rolled a million dice, your chance of averaging 3.5 is much higher than if you rolled ten.) Players, though, equated lots of dice rolling with the game being "more random" even though that contradicts the actual math.

comment by [deleted] · 2013-08-02T21:20:45.648Z · score: 15 (15 votes) · LW · GW

What I mean by this is that if you rolled a million dice, your chance of averaging 3.5 is much higher than if you rolled ten.

The chance of averaging exactly 3.5 would be a hell of a lot smaller. The chance of averaging between 3.45 and 3.55 would be larger, though.

comment by Randy_M · 2013-08-06T19:49:37.575Z · score: 6 (6 votes) · LW · GW

Your chance of averaging 3.5 to two significant figures seems quite high indeed, though.
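Randy_M's point is easy to sanity-check with a quick Monte Carlo sketch (Python; the trial counts here are arbitrary illustrative choices, not from the thread):

```python
import random

random.seed(0)

def trial_mean(n_dice):
    """Mean of n_dice fair six-sided dice."""
    return sum(random.randint(1, 6) for _ in range(n_dice)) / n_dice

# Chance the mean of 10 dice lands in [3.45, 3.55], i.e. the sum is exactly 35.
trials = 20000
hits = sum(1 for _ in range(trials) if 3.45 <= trial_mean(10) <= 3.55)
print(hits / trials)  # roughly 0.07

# The mean of a million dice is almost certainly that close to 3.5.
print(abs(trial_mean(1_000_000) - 3.5) < 0.05)  # True
```

So "averaging 3.5 to two significant figures" goes from an occasional event with ten dice to a near-certainty with a million.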

comment by duckduckMOO · 2013-08-23T17:58:42.780Z · score: -2 (2 votes) · LW · GW

Unless you're rolling an impractical number of dice for every attack, having your attacks do random damage (and not 22-24 like in MMORPGs but 1X-6X) is incredibly random. Even if you are rolling a ridiculous number of dice the game can still be decided by one roll leaving a creature on the board or killing it by one or two points of damage.

What maths says that rolling dice doesn't make the game more random? Maybe he means the game is overall less random, but I don't see any argument for that, or reference to evidence of that claim.

If the reason for the game's failure was that people thought it lacked skill, then additional randomness is not a decision to defend, even if people were slightly overestimating the randomness.

Having to roll dice in a card game is kind of a slap in the face too. In other card games you draw your cards then make the most of them. There's 0 randomness to worry about except right when you draw your card or your opponent draws theirs (but you are often happily ignorant of whether they play a card from their hand or that they drew except in certain circumstances.) You can count cards and play based on what is left in your deck, or you know is not in your deck anymore.

Also, unlike miniature games, card games pretty much never start pre-deployed. You start with nothing on the board. If your turn one card kills his turn one card because of a dice roll then he has nothing on the board and you have a creature, giving you some level of control over the board (depends on the game, often quite high). In a miniature game if you kill more of his guys on turn one because of dice rolls you still have an army, though smaller.

Why is this quote upvoted?

comment by Kindly · 2013-08-25T19:22:55.821Z · score: 2 (2 votes) · LW · GW

The more precise statement of "math says rolling more dice makes things less random" is that if you roll ten six-sided dice and add up the answer, the result will be less random (on its scale) than if you merely roll one six-sided die.

Even more precisely: the outcome of 10d6 is 68.7% likely to lie in the range [30,40], while the outcome of 1d6 is only 33.3% likely to lie in the corresponding range [3,4].

I think the quoted portion of the article addresses exactly this point: people were scared of rolling many dice because this meant lots of randomness, but the math says that the opposite effect occurs.

As to your other points (starting with "kind of a slap in the face"), that is addressed in the article, but not the quoted part. In summary: both rolling dice and drawing cards are random, but there's a bunch of reasons why the randomness of drawing cards isn't as frustrating. (It can be frustrating too, though.)
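Kindly's exact figures can be reproduced by convolving the single-die distribution (a quick Python sketch, not from the thread):

```python
def sum_dist(n_dice, sides=6):
    """Exact distribution of the sum of n_dice fair dice, via repeated convolution."""
    dist = {0: 1.0}
    for _ in range(n_dice):
        new = {}
        for total, p in dist.items():
            for face in range(1, sides + 1):
                new[total + face] = new.get(total + face, 0.0) + p / sides
        dist = new
    return dist

p_10d6 = sum(p for s, p in sum_dist(10).items() if 30 <= s <= 40)
p_1d6 = sum(p for s, p in sum_dist(1).items() if 3 <= s <= 4)
# 10d6 lands in [30,40] about twice as often as 1d6 lands in [3,4]
print(f"{p_10d6:.3f} {p_1d6:.3f}")
```

The sum of many dice concentrates around its mean, which is the whole "more dice, less random" point.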

comment by Viliam_Bur · 2013-08-25T18:15:36.349Z · score: 0 (2 votes) · LW · GW

Why is this quote upvoted?

Maybe because of this part:

Players [...] equated lots of dice rolling with the game being "more random" even though that contradicts the actual math.

comment by duckduckMOO · 2013-09-27T15:49:49.635Z · score: 1 (1 votes) · LW · GW

Rolling 10 dice instead of one makes the game less random. Rolling dice often instead of rarely makes the game more random. This game rolls dice for every attack, and not that many. The dude said people complained about lots of dice rolling, not rolling lots of dice. Yeah, obviously if you roll 10 dice it's less random than rolling one, but what are the chances that card game enthusiasts, people "geeky" enough to play the Star Wars TCG, don't understand that basic part of probability? It's far more likely that people were annoyed at lots of dice rolling, not the amount of dice you roll each time. Which matches the reported complaints of the players. Not that I'd expect an accurate report of the players' positions when making excuses for why rolling dice in a card game is a bad idea.

comment by Ben Pace (Benito) · 2013-08-01T21:15:24.096Z · score: 32 (34 votes) · LW · GW

This analogy, this passage from the finite to infinite, is beset with pitfalls. How did Euler avoid them? He was a genius, some people will answer, and of course that is no explanation at all. Euler has shrewd reasons for trusting his discovery. We can understand his reasons with a little common sense, without any miraculous insight specific to genius.

  • G. Polya, Mathematics and Plausible Reasoning Vol. 1

comment by [deleted] · 2013-08-06T10:56:57.766Z · score: 2 (2 votes) · LW · GW

See also the appendix “Mathematical Formalities And Style” in Probability Theory by E.T. Jaynes.

comment by snafoo · 2013-08-04T17:51:26.055Z · score: 31 (35 votes) · LW · GW

When the axe came into the woods, many of the trees said, "At least the handle is one of us."

Turkish proverb

comment by monsterzero · 2013-08-05T03:28:45.544Z · score: 10 (12 votes) · LW · GW

http://pbfcomics.com/72/

comment by Alejandro1 · 2013-08-01T20:45:40.825Z · score: 31 (33 votes) · LW · GW

It's a horrible feeling when you don't understand why you did something.

-- Dennis Monokroussos

comment by lavalamp · 2013-08-01T23:03:03.977Z · score: 27 (27 votes) · LW · GW

It's probably a much more accurate feeling than the opposite one, though...

comment by [deleted] · 2013-08-02T21:21:08.683Z · score: 2 (2 votes) · LW · GW

If I understand why I did something, I want to believe ...

comment by wedrifid · 2013-08-02T05:17:17.261Z · score: 7 (7 votes) · LW · GW

It's a horrible feeling when you don't understand why you did something.

That is an interesting observation. For my part I do not experience horror in those circumstances, merely curiosity and uncertainty.

comment by Desrtopa · 2013-08-04T22:45:38.501Z · score: 10 (10 votes) · LW · GW

I think it may depend a lot on how well the action fits into your schema for reasonable behavior.

I have mild OCD. Its manifestations are usually unnoticeable to other people, and generally don't interfere with the ordinary function of my life, but occasionally lead to my engaging in behaviors that no ordinary person would consider worthwhile. The single most extreme manifestation, which still stands out in my memory, was a time when I was playing a video game, and saved my game file, then, doubting my own memory that I had saved it, did it again... and again... and again... until I had saved at least seven times, each time convinced that I couldn't yet be sure I had saved it "enough."

Afterwards, I was horrified at my own actions, because what I had just done was too obviously crazy to just handwave away.

comment by felzix · 2013-08-19T18:13:42.214Z · score: 3 (3 votes) · LW · GW

I used to do that a lot. I still have to fight the urge to save repeatedly when nothing has changed.

My obsessive compulsions are mostly mental though so it has had so little an impact on my interactions with others that I don't think it counts as a disorder.

comment by wedrifid · 2013-08-05T04:22:18.169Z · score: 0 (0 votes) · LW · GW

I think it may depend a lot on how well the action fits into your schema for reasonable behavior.

For me it fits my schema of reasonable behavior but also into my schema of "things other people may not like doing for which I don't consider them irrational".

Of course, I would rarely consider using a dollar as a bookmark. That would require stopping reading the book once I started it.

comment by Alejandro1 · 2013-08-02T10:47:10.271Z · score: 2 (2 votes) · LW · GW

It depends on the context, in particular, whether the situation is one where you "must" have a good reason for your actions. Your reaction is appropriate for most ordinary situations; his is appropriate for the context he's talking about (making a different move than the one you intended in a chess game) and other high stakes situations (blurting an answer you know is wrong in an examination, saying/doing something awkward on a date, making a risky maneuver driving your car…)

comment by wedrifid · 2013-08-02T13:41:12.801Z · score: 8 (8 votes) · LW · GW

his is appropriate for the context he's talking about (doing a different movement than than the one you intended in a chess game) and other high stakes situations (blurting an answer you know is wrong in an examination, saying/doing something awkward on a date, making a risky movement driving your car…)

I experience horrible feelings when I humiliate myself or put myself at risk. This phenomenon seems to occur independently of whether I have a good causal model for why I did those things.

comment by [deleted] · 2013-08-22T20:26:32.264Z · score: 0 (0 votes) · LW · GW

OTOH, a good causal model may sometimes enable you to take action so as to not do that thing again.

comment by Alejandro1 · 2013-08-02T10:55:14.444Z · score: 30 (36 votes) · LW · GW

Now, now, perfectly symmetrical violence never solved anything.

--Professor Farnsworth, Futurama.

comment by lavalamp · 2013-08-02T20:22:28.601Z · score: 24 (24 votes) · LW · GW

The threat of massive perfectly symmetrical violence, on the other hand...

comment by sketerpot · 2013-08-03T02:22:52.447Z · score: 8 (8 votes) · LW · GW

Such a threat can also be effective for asymmetrical violence -- no matter which way the asymmetry goes.

comment by RolfAndreassen · 2013-08-02T02:48:21.419Z · score: 29 (55 votes) · LW · GW

Once there was a miser, who to save money would eat nothing but oatmeal. And what's more, he would make a great big batch of it at the start of every week, and put it in a drawer, and when he wanted a meal he would slice off a piece and eat it cold; thus he saved on firewood. Now, by the end of the week, the oatmeal would be somewhat moldy and not very appetising; and so to make himself eat it, the miser would take out a bottle of good whiskey, and pour himself a glass, and say "All right, Olai, eat your oatmeal and when you're done, you can have a dram." Then he would eat his moldy oatmeal, and when he was done he'd laugh and pour the whiskey back in the bottle, and say "Hah! And you believed that? There's one born every minute, to be sure!" And thus he had a great savings in whiskey as well.

-- Norwegian folktale.

comment by DanArmak · 2013-08-03T09:46:18.789Z · score: 8 (10 votes) · LW · GW

I don't understand this rationality quote. Is it about fighting akrasia? Self-hacking to effectively save money? It clearly describes a method that wouldn't actually work, and it could work as humour, but what does it mean as a rationality tale?

comment by ChrisPine · 2013-08-04T17:28:06.985Z · score: 30 (32 votes) · LW · GW

It's a cautionary tale about Norwegian food.

comment by D_Alex · 2013-08-09T01:59:29.677Z · score: 8 (10 votes) · LW · GW

It explains lutefisk.

Quote from Garrison Keillor's book Lake Wobegon Days: Every Advent we entered the purgatory of lutefisk, a repulsive gelatinous fishlike dish that tasted of soap and gave off an odor that would gag a goat. We did this in honor of Norwegian ancestors, much as if survivors of a famine might celebrate their deliverance by feasting on elm bark. I always felt the cold creeps as Advent approached, knowing that this dread delicacy would be put before me and I'd be told, "Just have a little." Eating a little was like vomiting a little, just as bad as a lot.

Quote from Garrison Keillor's book Pontoon: Lutefisk is cod that has been dried in a lye solution. It looks like the desiccated cadavers of squirrels run over by trucks, but after it is soaked and reconstituted and the lye is washed out and it's cooked, it looks more fish-related, though with lutefisk, the window of success is small. It can be tasty, but the statistics aren't on your side. It is the hereditary delicacy of Swedes and Norwegians who serve it around the holidays, in memory of their ancestors, who ate it because they were poor. Most lutefisk is not edible by normal people. It is reminiscent of the afterbirth of a dog or the world's largest chunk of phlegm.

Interview with Jeffrey Steingarten, author of The Man Who Ate Everything (translated quote from a 1999 article in Norwegian newspaper Dagbladet): Lutefisk is not food, it is a weapon of mass destruction. It is currently the only exception for the man who ate everything. Otherwise, I am fairly liberal, I gladly eat worms and insects, but I draw the line on lutefisk.

  • the above is from the Wikipedia entry on lutefisk. Believe it or not.

comment by RolfAndreassen · 2013-08-10T05:01:29.817Z · score: 2 (2 votes) · LW · GW

Lake Wobegon Days: Every Advent we (ate lutefisk)

Obviously, that's why they were all above average!

No, seriously, lutefisk is peasant food. Rich urban types eat smalahovve.

comment by MixedNuts · 2013-08-04T17:28:41.504Z · score: 11 (11 votes) · LW · GW

Betcha it'd work. I'm going to set a piece of candy in front of me, work for half an hour, and then put it back, at least once a day for a week.

comment by KnaveOfAllTrades · 2013-08-05T11:31:57.405Z · score: 11 (11 votes) · LW · GW

I sometimes find that telling my Inner Lazy that it can decide—after I've done the first one—between whether to continue a series of tasks or to stop and be Lazy gets me to do the whole series of tasks. Despite having noticed explicitly that in practice this 'decision delay strategy' leads to the whole series getting done, it still works, and rather seems like tricking my Inner Lazy into handing the reins over to my Inner Agent.

comment by MalcolmOcean (malcolmocean) · 2013-08-12T04:07:34.940Z · score: 6 (6 votes) · LW · GW

Accountability check!

Did you do it? How'd it go?

comment by MixedNuts · 2013-08-12T07:19:10.379Z · score: 11 (11 votes) · LW · GW

Did it once, binge-ate the candy a few hours later, bought more candy, binge-ate it again. Trying again in two weeks (or going to the doctor if still prone to binging).

comment by wedrifid · 2013-08-12T10:00:30.475Z · score: 0 (0 votes) · LW · GW

Betcha it'd work.

Oh, bother. I wish I'd seen this earlier.

comment by RolfAndreassen · 2013-08-03T18:34:06.760Z · score: 11 (11 votes) · LW · GW

It's either a cautionary tale about the dangers of deceiving yourself, or a humorous look at the impossibility of actually doing so.

comment by danlucraft · 2013-08-03T13:45:28.520Z · score: 10 (10 votes) · LW · GW

In the context of LW, I took it as an amusing critique of the whole idea of rewarding yourself for behaviours you want to do more of.

comment by [deleted] · 2013-08-06T10:45:45.207Z · score: 5 (5 votes) · LW · GW

I took it to be about the hidden complexity of wishes: people often say they want to have more money left at the end of the month when what they actually mean is that they want to have more money left at the end of the month without making themselves miserable in the process, and the easiest solution to the former needn't be at all a solution to the latter.

comment by wedrifid · 2013-08-03T16:10:23.050Z · score: 5 (9 votes) · LW · GW

I don't understand this rationality quote. Is it about fighting akrasia? Self-hacking to effectively save money? It clearly describes a method that wouldn't actually work, and it could work as humour, but what does it mean as a rationality tale?

It could be used as an effective "How to create an Ugh Field and undermine all future self-discipline attempts" instruction manual. It isn't a rationality tale. It is confusing that 40 people evidently consider it to be one. (But only a little bit confusing. I usually expect non-rationalist quotes that would be accepted as jokes or inspirational quotes elsewhere to get around 10 upvotes in this thread regardless of merit. That means I'm surprised about the degree of positive reception.)

comment by AlexanderD · 2013-08-06T02:13:19.157Z · score: 5 (7 votes) · LW · GW

I don't think you are correct.

The miser knows each time he will not get the reward, and that he will save on food and drink. That is the real reward, and the rest is a kabuki play he puts on for less-important impulses, to temporarily allow him to restrain them in service of his larger goal. The end pleasure of savings will provide strong positive reinforcement.

This could probably be empirically tested, to see if it is true and would work as a technique. I can imagine a test where someone is promised candy, and anticipates it while acting to fulfill a task, and then is rewarded instead with a dollar. Do they learn disappointment, or does the greater pleasure of money outweigh the candy? This is predicated on the idea that they would prefer the money, of course - you would need to tinker with amounts before the experiment might give useful results.

comment by pjeby · 2013-08-06T12:55:47.663Z · score: 6 (6 votes) · LW · GW

The miser knows each time he will not get the reward, and that he will save on food and drink. That is the real reward,

Also, don't forget his pleasure at successfully tricking himself. ;-)

comment by [deleted] · 2013-08-06T21:33:47.267Z · score: 2 (2 votes) · LW · GW

I can imagine a test where someone is promised candy, and anticipates it while acting to fulfill a task, and then is rewarded instead with a dollar. Do they learn disappointment, or does the greater pleasure of money outweigh the candy?

Myself, I'd just spend the dollar on candy.

comment by wedrifid · 2013-08-06T05:14:43.734Z · score: 0 (0 votes) · LW · GW

This could probably be empirically tested, to see if it is true and would work as a technique. I can imagine a test where someone is promised candy, and anticipates it while acting to fulfill a task, and then is rewarded instead with a dollar.

That is not the same thing as the quote. Empirically testing your candy and dollars reward switch would tell us next to nothing about the typical efficacy of the dubious self deception of the miser.

comment by AlexanderD · 2013-08-06T13:57:19.791Z · score: 3 (5 votes) · LW · GW

You are telling me I am wrong, but it is not helpful to me unless you explain why I am wrong.

I thought it made sense. As far as I could tell, the original parable has a miser with two desires: the desire for delicious booze and the desire to save money. The latter desire is by far the more important one to him, so he "fools" his desire for booze by promising himself a booze reward, and then reneging on himself each time. In my interpretation, this still results in an overall positive effect for self-discipline, because the happiness of saving money is so much more important to the miser than the disappointment of missing the booze reward.

The truth of whether this would actually work could be seen in an experiment. I tried to think of one with two rewards that satisfy different desires, and tried to think of a way to slightly disappoint the desire for sugar while strongly rewarding the impulse for money, after the completion of the task. Maybe I should specify that people should be hungry before the task, and tested in the future when they are hungry, to see if they are still willing to complete the task?

comment by KnaveOfAllTrades · 2013-08-05T11:49:59.335Z · score: 3 (3 votes) · LW · GW

create an Ugh Field and undermine all future self-discipline attempts

That's one way it could play out. It feels like this thinking also allows for it to work, because one might feel good about what got done by means of the trick, which would positively reinforce being tricked. I think the matter isn't clear cut.

comment by BT_Uytya · 2013-08-03T13:43:31.416Z · score: 3 (3 votes) · LW · GW

It's interesting to view this story from a source-code-swap Prisoner's Dilemma / Timeless Decision Theory perspective. It could be a perfect epigraph for an article dedicated to that.

comment by Ben Pace (Benito) · 2013-08-03T12:04:49.782Z · score: 3 (5 votes) · LW · GW

I thought the way he deceived his conscious mind, and never learned, was interesting.

comment by RolfAndreassen · 2013-08-06T15:49:17.918Z · score: 26 (26 votes) · LW · GW

He took literally five seconds for something I'd spent two weeks on, which I guess is what being an expert means

-- Graduate student of our group, recognising a level above his own in a weekly progress report

comment by linkhyrule5 · 2013-08-06T19:41:07.537Z · score: 8 (8 votes) · LW · GW

Now I'm curious about the context...

comment by RolfAndreassen · 2013-08-07T16:34:34.111Z · score: 6 (8 votes) · LW · GW

It wasn't very interesting - some issue of how to make one piece of software talk to the code you'd just written and then store the output somewhere else. Not physics, just infrastructure. But the recognition of the levels was interesting, I thought. Although I do believe "literally five seconds" is likely an exaggeration.

comment by snafoo · 2013-08-04T17:46:45.143Z · score: 26 (26 votes) · LW · GW

Some say imprisoning three women in my home for a decade makes me a monster, I say it doesn’t, and of course the truth is somewhere in the middle.

Ariel Castro (according to The Onion)

comment by Randy_M · 2013-08-06T19:53:40.170Z · score: 6 (6 votes) · LW · GW

"So let's split the difference and say I should have stopped at two."

comment by RomanDavis · 2013-08-21T14:14:20.060Z · score: 1 (1 votes) · LW · GW

Is this just supposed to be a demonstration of irrationality? Can some one unpack this?

comment by metastable · 2013-08-21T14:56:02.008Z · score: 4 (4 votes) · LW · GW

A demonstration of the gray fallacy. The opinions of Ariel Castro are not equidistant from the truth with those of the rest of society, and we don't find the truth by finding a middle ground between his claims and those of everybody else.

comment by RomanDavis · 2013-08-21T18:45:06.412Z · score: 4 (4 votes) · LW · GW

I don't know how this happened. My comment was supposed to be a reply to:

When the axe came into the woods, many of the trees said, "At least the handle is one of us."

comment by metastable · 2013-08-21T19:02:35.084Z · score: 9 (9 votes) · LW · GW

Ah. I read that one as a reference to the tendency to let tribal affiliation trump realistic evaluation of outcomes.

comment by MinibearRex · 2013-08-05T05:23:38.967Z · score: 25 (27 votes) · LW · GW

He wasn't certain what he expected to find, which, in his experience, was generally a good enough reason to investigate something.

Harry Potter and the Confirmed Critical, Chapter 6

comment by Paulovsk · 2013-08-06T20:38:42.270Z · score: 2 (2 votes) · LW · GW

Can you give a link to this story? It is surprisingly difficult to find.

comment by AndHisHorse · 2013-08-06T20:47:06.234Z · score: 7 (7 votes) · LW · GW

It is the second book in the series Harry Potter and the Natural 20, which can be found here.

comment by gwern · 2013-09-02T00:27:26.946Z · score: 2 (2 votes) · LW · GW

If you put the quote into quotation marks and search Google, it's the fifth hit.

comment by Paulovsk · 2013-09-04T04:40:15.133Z · score: 1 (1 votes) · LW · GW

Thank you. This was a 'duh!' moment; I hadn't realized it was the 2nd book of the Natural 20 series.

comment by cody-bryce · 2013-08-02T22:30:27.813Z · score: 25 (43 votes) · LW · GW

If Tetris has taught me anything it's that errors pile up and accomplishments disappear.

-Unknown

comment by CronoDAS · 2013-08-03T01:48:52.317Z · score: 29 (35 votes) · LW · GW

It's ridiculous to think that video games influence children. After all, if Pac-Man had affected children born in the eighties, we'd all be running around in dark rooms, eating strange pills, and listening to repetitive electronic music.

-- Paraphrase of joke by Marcus Brigstocke

comment by ChristianKl · 2013-08-06T14:10:07.775Z · score: 0 (6 votes) · LW · GW

To be fair there are quite a few people who nowadays listen to electronic music, take drugs that are pills and who spend a lot of time in dark rooms.

comment by arundelo · 2013-08-06T14:15:37.770Z · score: 9 (9 votes) · LW · GW

That's the joke.

comment by DanielLC · 2013-08-05T04:35:08.667Z · score: 6 (8 votes) · LW · GW

It's funny, but you really shouldn't be learning life lessons from Tetris.

If Tetris has taught me anything, it's the history of the Soviet Union.

comment by DanArmak · 2013-08-03T09:29:53.627Z · score: 6 (6 votes) · LW · GW

We can reformulate Tetris as follows: challenges keep appearing (at a fixed rate), and must be solved at the same rate; we cannot let too many unsolved challenges pile up, or we will be overwhelmed and lose the game.

comment by Alejandro1 · 2013-08-03T13:34:22.302Z · score: 21 (21 votes) · LW · GW

So Tetris is really an anti-procrastination learning tool? Hmmm, wonder why that doesn't sound right….

comment by RolfAndreassen · 2013-08-03T18:37:44.681Z · score: 6 (6 votes) · LW · GW

But the challenge rate is not fixed. It increases at higher levels. So the lesson seems rather hollow: At some point, if you are successful at solving challenges, the rate at which new ones appear becomes too high for you.

comment by RichardKennaway · 2013-08-05T13:18:19.446Z · score: 4 (4 votes) · LW · GW

Just like life. The reward for succeeding at a challenge is always a new, bigger challenge.

comment by linkhyrule5 · 2013-08-05T05:57:27.823Z · score: 3 (3 votes) · LW · GW

At which point you die, for lack of intelligence.

Actually a fairly good metaphor for x-risk, surprisingly.

Of course, it's a lot easier to make a Tetris-optimizer than a Friendly AI...

comment by Document · 2013-08-06T03:17:28.817Z · score: 1 (1 votes) · LW · GW

At which point you die, for lack of intelligence.

I thought Tetris had been proven to always eventually produce an unclearable block sequence.

comment by DanArmak · 2013-08-03T19:38:26.971Z · score: 1 (1 votes) · LW · GW

It was either that or risk some people playing without stop until their bodies died in the real world.

comment by RolfAndreassen · 2013-08-03T20:53:56.112Z · score: 2 (6 votes) · LW · GW

...thus becoming useful object lessons to the rest of the species, and reducing our average susceptibility to reward systems with low variability. Not quite seeing the problem here.

comment by FiftyTwo · 2013-08-04T15:42:21.888Z · score: 1 (1 votes) · LW · GW

And today's challenges can be used to remedy yesterday's failures.

comment by [deleted] · 2013-08-06T10:49:48.098Z · score: 0 (2 votes) · LW · GW

How is that a rationality quote?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-03T00:00:25.862Z · score: -3 (29 votes) · LW · GW

LF:GFE

comment by cody-bryce · 2013-08-03T00:17:26.686Z · score: 7 (7 votes) · LW · GW

I'm afraid I don't know what that stands for.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-03T00:42:26.148Z · score: 5 (11 votes) · LW · GW

Logical Fallacy: Generalization from Fictional Evidence

comment by Kaj_Sotala · 2013-08-10T09:00:23.124Z · score: 3 (3 votes) · LW · GW

Actually, it strikes me that this particular example shouldn't be classified as GFE. "Errors pile up and accomplishments disappear" is a consequence of the way that the game logic works: in a sense, you could say that it's a theorem implied by the axioms of the game. While it's valid to say that Tetris is a flawed piece of procedural rhetoric in that its axioms do not correctly describe the real world, if you called it fictional evidence you would also be forced to call math fictional evidence, which probably isn't what you'd want.

comment by [deleted] · 2013-08-03T00:16:35.337Z · score: 1 (1 votes) · LW · GW

What?

comment by Manfred · 2013-08-03T00:36:46.540Z · score: 4 (4 votes) · LW · GW

Eh?

comment by [deleted] · 2013-08-03T00:37:37.533Z · score: 2 (2 votes) · LW · GW

Ah, thank you.

comment by shminux · 2013-08-02T03:23:24.547Z · score: 25 (31 votes) · LW · GW

A man who says he is willing to meet you halfway is usually a poor judge of distance.

Unknown

comment by peter_hurford · 2013-08-02T13:42:16.822Z · score: 9 (9 votes) · LW · GW

This could be studied empirically.

comment by dspeyer · 2013-08-04T21:29:25.669Z · score: 3 (3 votes) · LW · GW

Difficult. The "distance" is metaphorical, and this probably doesn't apply when there's an easy, unambiguous, generally accepted metric. Without that, how do we do the study?

Still, if you have a way, it could be interesting.

comment by Document · 2013-08-06T03:07:04.746Z · score: 9 (9 votes) · LW · GW

In a famous study, spouses were asked, “How large was your personal contribution to keeping the place tidy, in percentages?” They also answered similar questions about “taking out the garbage,” “initiating social engagements,” etc. Would the self-estimated contributions add up to 100%, or more, or less? As expected, the self-assessed contributions added up to more than 100%.

-Daniel Kahneman, Thinking, Fast and Slow

On the other hand, the book doesn't give a citation, and searching for the exact text of the question turns up only that passage. Not sure what to make of that.

comment by Unnamed · 2013-08-06T03:33:41.369Z · score: 16 (16 votes) · LW · GW

Ross & Sicoly (1979). Egocentric Biases in Availability and Attribution.

In the study, the spouses actually estimated their contributions by making a slash mark on a line segment which had endpoints labelled "primarily wife" and "primarily husband". The experimenters set it up this way, rather than asking for numerical percentages, for ethical reasons. In pilot testing using percentages, they "found that subjects were able to remember the percentages they recorded and that postquestionnaire comparisons of percentages provided a strong source of conflict between the spouses." (p. 325)

comment by AndHisHorse · 2013-08-04T23:14:25.995Z · score: 3 (3 votes) · LW · GW

If there is no easy, unambiguous generally accepted metric, that would seem to imply that everyone is a poor judge of distance - making the quote trivially true.

comment by Estarlio · 2013-08-02T14:09:33.651Z · score: 3 (5 votes) · LW · GW

Or thinks he's got better leverage than you.

comment by cody-bryce · 2013-08-02T22:29:11.591Z · score: 24 (28 votes) · LW · GW

Far too many people are looking for the right person, instead of trying to be the right person.

-Gloria Steinem

comment by DanArmak · 2013-08-03T09:34:17.429Z · score: 9 (9 votes) · LW · GW

I read that as "looking for the right person to fall in love with". Then the sense is "be the right person for someone else". But that achieves a different goal entirely, since it doesn't make the other person right for you.

There are many cases where you want a different person right for the task.

Name three!

Romantic partners (inherently), trading and working partners (allowing you to specialize in your comparative advantage), deputies and office-holders (allowing you to deputize), soldiers (allowing you to send someone else to their death to win the war).

comment by cody-bryce · 2013-08-03T16:44:05.023Z · score: 2 (2 votes) · LW · GW

I assume the original intent of the quote was about romantic partners, where it means, "Instead of searching so hard, make sure to prioritize being awesome for its own sake."

I was trying to repurpose it to express, more generally, that action is better than preparing for something to fall into place, and I think it's appealed to people.

comment by dspeyer · 2013-08-04T21:12:08.642Z · score: 4 (4 votes) · LW · GW

I originally read it as being about politics. We keep thinking that somewhere there's a candidate worth voting for, and then things will be ok, but instead we should be trying to become the worthy candidates, even if only for local office. Or perhaps toward improving the world generally. Instead of deciding whether to pay Yudkowsky or Bostrom to work on existential risk, we should try applying our own talents. Similar to "[T]he phrase 'Someone ought to do something' was not, by itself, a helpful one. People who used it never added the rider 'and that someone is me'."

Skimming Gloria Steinem's biography, I am more confident in this reading.

comment by Document · 2013-08-03T17:38:30.014Z · score: 2 (4 votes) · LW · GW

How isn't "looking for" or "searching hard" action?

comment by Document · 2013-08-03T02:14:58.054Z · score: 2 (8 votes) · LW · GW

Far too few people limit their aspirations to what they can accomplish working alone.

comment by cody-bryce · 2013-08-03T04:42:23.177Z · score: 1 (1 votes) · LW · GW

You still have to be the right person to be the right person in a team....?

comment by Document · 2013-08-03T05:32:31.491Z · score: 2 (2 votes) · LW · GW

But you don't have to be perfect to be the right person in a team, and you don't have to be "the" right person to be an asset to a team. People with low self-confidence plus low social confidence (plus possibly moralistic ideas about self-reliance) will try to self-improve through their own efforts rather than seeking help, regardless of how much less effective it is, believing they're not worth someone else's attention yet, or being afraid of owing someone, or whatever; quotes like Steinem's reinforce that.

...Maybe. I don't have any actual sources, so I could be totally wrong. Still, I'm not sure I like the focus on "being" rather than doing things.

comment by cody-bryce · 2013-08-03T16:51:23.911Z · score: 0 (0 votes) · LW · GW

But you don't have to be perfect to be the right person in a team, and you don't have to be "the" right person to be an asset to a team.

Who said anything about being perfect?

And if you're an asset, you sound pretty much like the right person to me.

Maybe. I don't have any actual sources, so I could be totally wrong. Still, I'm not sure I like the focus on "being" rather than doing things.

To me the clause "be the right person" sounds very much active/action-based.

comment by linkhyrule5 · 2013-08-03T03:08:11.654Z · score: 0 (0 votes) · LW · GW

Far too many people...

Completely putting teamwork aside, most major contributions to humanity were achieved by standing on the shoulders of those who came before.

comment by ShardPhoenix · 2013-08-02T08:26:41.704Z · score: 22 (24 votes) · LW · GW

Rin: "Even I make mistakes once in a while."

Shirou (thinking): ...This is hard. Would it be good for her if I correct her and point out that she makes mistakes often, not just once in a while?

Fate/stay night

comment by FiftyTwo · 2013-08-04T15:40:45.063Z · score: 4 (4 votes) · LW · GW

Slightly off-topic, but I keep seeing Fate/Stay night referenced on here, is it particularly 'rationalist' or do people just like it as entertainment?

comment by Nornagest · 2013-08-05T00:19:05.418Z · score: 6 (6 votes) · LW · GW

It's not an especially rational piece of work as such, although it has its moments, but it is one of the more detailed examinations of heroic responsibility and the associated cultural expectations in fiction (if you can get past the sometimes shaky translation). Your mileage might vary, but I see echoes of it whenever Eliezer writes about saving the world.

comment by Desrtopa · 2013-08-04T22:48:41.594Z · score: 3 (3 votes) · LW · GW

It has some elements that stand out in terms of rationalist virtue, and many others which don't.

I found it to be very much a mixed bag, but the things it did well, I thought it did exceptionally well.

comment by ShardPhoenix · 2013-08-05T00:05:40.715Z · score: 2 (2 votes) · LW · GW

It's not so much rationalist as... Eliezer-ish. See my review in the media thread: http://lesswrong.com/lw/i8c/august_2013_media_thread/9ilm

comment by sketerpot · 2013-08-03T02:21:26.659Z · score: 3 (5 votes) · LW · GW

He just needs to get Saber to say it. Saber often tells people, in a bluntly matter-of-fact way, that they're making a mistake. Rin knows this. If Shirou said it, though, she'd think it was some kind of dominance thing and get mad.

(Maybe I'm over-analyzing this.)

comment by Eugine_Nier · 2013-08-02T06:22:12.832Z · score: 22 (34 votes) · LW · GW

Subsidizing the markers of status doesn’t produce the character traits that result in that status; it undermines them.

Reynolds' law

comment by NancyLebovitz · 2013-08-04T15:39:56.954Z · score: 4 (4 votes) · LW · GW

Status markers frequently indicate unusual access to resources as well as or even instead of character traits.

Subsidizing status markers dilutes them by making them less common.

How would you tell which factor is more important in the dilution of a status marker?

comment by Document · 2013-08-05T01:26:22.216Z · score: 1 (1 votes) · LW · GW

I can't parse your post, but that may be partly because I don't understand how subsidizing status markers would produce character traits to begin with.

comment by fubarobfusco · 2013-08-06T16:35:32.683Z · score: 8 (8 votes) · LW · GW

Eugine_Nier's comment has the suppressed premise that status usually results from character traits (alone, or primarily). NancyLebovitz's response contradicts this suppressed premise.

If you get rich by being exceptionally virtuous, then redistributing the wealth will make it less obvious who is virtuous.

But if you get rich by having a rich dad, then redistributing the wealth will merely make it less obvious who had a rich dad.

comment by Swimmer963 · 2013-08-05T02:15:09.711Z · score: 3 (3 votes) · LW · GW

I think the point is that it wouldn't. You can have character traits, e.g. conscientiousness, that result in status markers, e.g. having saved a lot of money. If you make it easier for people to get the specific status marker, e.g. through welfare, the causal arrow doesn't go in reverse and increase conscientiousness. You could expect it to have no effect, e.g. if conscientiousness and other traits are innate and entirely determined by age 4. (That's kind of my default.) Or, in a slightly more complicated world where conscientiousness can vary depending on environment, i.e. there are a bunch of causal arrows bouncing around in confusing ways, "diluting" the status marker by making it easier to acquire might reduce the incentive to have the underlying trait, and make people less conscientious over time. I've heard the argument that this happens to people on welfare, although I'm tempted to say "correlation not causation": who ends up on welfare in the first place already depends on conscientiousness.

comment by NancyLebovitz · 2013-08-05T04:47:49.887Z · score: 10 (10 votes) · LW · GW

At least in the US, saving money can disqualify you from welfare.

comment by Swimmer963 · 2013-08-05T16:38:37.729Z · score: 2 (2 votes) · LW · GW

When my best friend was on welfare, they would take what she had earned at her part-time job the last month and subtract half that amount from her welfare. So there was still an incentive to work, albeit less. I don't know to what degree she had to submit her budget or expenses to them (i.e. that they would actually know if she was saving money), but in general they seemed to make it as hard as possible to actually stay on Welfare.

comment by NancyLebovitz · 2013-08-07T09:23:57.386Z · score: 0 (0 votes) · LW · GW

That's about income, not savings.

comment by Swimmer963 · 2013-08-07T14:44:57.212Z · score: 2 (2 votes) · LW · GW

I don't know what the policy was on savings, i.e. to what degree, if at all, they would reduce her monthly amount if she submitted her budget each month and was spending less. I get the impression that it's kind of a basic fixed rate for, e.g., an adult not in school with one child...and that it's realistically not enough to save, even if you spend nothing on discretionary purchases or fun. She got around $900 a month, of which $550 alone went towards her part of our rent.

If she'd, for example, made $500 per paycheck (25 hours a week at Canadian minimum wage), that would make $1000 a month, so they'd take $500 off her welfare payment, for a monthly total income of $1400...which is enough to save at least a small amount per month, given our shared living expenses. In the US welfare system, would they cancel your welfare if you were able to save $200 a month of this total?

They did keep cancelling the welfare for unrelated reasons. (Example: her parents had had an education fund for her of about $10,000, but they'd spent it all on her wedding, and they sent her a letter saying her welfare was cancelled until she could submit documents proving this. Not a warning-cancelled. She missed a month or two before submitting the documents, and eventually gave up and just worked more hours.)
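
For concreteness, the 50%-earnings-exemption arithmetic described above can be written out as a small function (the figures are the commenter's illustrative examples, not official rates, and `monthly_income` is a hypothetical helper name):

```python
# Clawback sketch: welfare is reduced by half of earnings,
# but never below zero.
def monthly_income(base_welfare, earnings, clawback_rate=0.5):
    reduction = min(base_welfare, earnings * clawback_rate)
    return base_welfare - reduction + earnings

# $900 base welfare plus $1000/month earned: benefit drops by $500,
# for a total monthly income of $1400.
```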

comment by NancyLebovitz · 2013-08-07T15:30:12.851Z · score: 4 (4 votes) · LW · GW

http://cfed.org/assets/scorecard/2013/rg_AssetLimits_2013.pdf

Short version: it varies quite a bit by state, but some major benefits in a fair number of states have a personal asset limit of two or three thousand dollars.

comment by Swimmer963 · 2013-08-08T01:32:56.144Z · score: 3 (3 votes) · LW · GW

Thanks! So it looks like there's a limit but at least someone thinks it's a bad idea and some states are changing it...

According to this, the asset limit to qualify for Ontario Works (welfare) is $572 for a single adult and $1,550 for a lone parent. So, worse than in the US... (But it was $2500 for a single adult in 1981...) The 50% earning exemption is new from 2003 though.

Wow I have learned things today!

comment by Decius · 2013-08-07T09:53:44.917Z · score: 0 (0 votes) · LW · GW

I've never provided any information about savings when applying for welfare. What organization has that policy?

comment by CronoDAS · 2013-08-03T02:03:41.755Z · score: 3 (3 votes) · LW · GW

See also: Credential Inflation

comment by ShardPhoenix · 2013-08-02T08:28:32.800Z · score: 18 (20 votes) · LW · GW

But, Senjougahara, can I set a condition too? A condition, or, well, something like a promise. Don't ever pretend you can see something that you can't, or that you can't see something that you can. If our viewpoints are inconsistent, let's talk it over. Promise me.

Bakemonogatari

comment by Grif · 2013-08-06T06:50:37.542Z · score: 4 (4 votes) · LW · GW

In Bakemonogatari, the main characters often encounter spirits that only interact with specific people under specific conditions, although the effects they have are real (and would manifest to another's eyes as inexplicable paranormal phenomena). As such it's more a request about shoring up inconsistencies in sense perception, than it is about inconsistencies in belief.

comment by Baughn · 2013-08-30T11:32:24.475Z · score: 0 (0 votes) · LW · GW

That, and I'm getting the distinct impression their world is a non-euclidean mess.

comment by Arkanj3l · 2013-08-05T03:04:37.983Z · score: 17 (23 votes) · LW · GW

From Jacques Vallee, Messengers of Deception...

'Then he posed a question that, obvious as it seems, had not really occurred to me: “What makes you think that UFOs are a scientific problem?”

I replied with something to the effect that a problem was only scientific in the way it was approached, but he would have none of that, and he began lecturing me. First, he said, science had certain rules. For example, it has to assume that the phenomena it is observing is natural in origin rather than artificial and possibly biased. Now the UFO phenomenon could be controlled by alien beings. “If it is,” added the Major, “then the study of it doesn’t belong to science. It belongs to Intelligence.” Meaning counterespionage. And that, he pointed out, was his domain. *

“Now, in the field of counterespionage, the rules are completely different.” He drew a simple diagram in my notebook. “You are a scientist. In science there is no concept of the ‘price’ of information. Suppose I gave you 95 per cent of the data concerning a phenomenon. You’re happy because you know 95 per cent of the phenomenon. Not so in intelligence. If I get 95 per cent of the data, I know that this is the ‘cheap’ part of the information. I still need the other 5 percent, but I will have to pay a much higher price to get it. You see, Hitler had 95 per cent of the information about the landing in Normandy. But he had the wrong 95 percent!”

“Are you saying that the UFO data we use to compile statistics and to find patterns with computers are useless?” I asked. “Might we be spinning our magnetic tapes endlessly discovering spurious laws?”

“It all depends on how the team on the other side thinks. If they know what they’re doing, there will be so many cutouts between you and them that you won’t have the slightest chance of tracing your way to the truth. Not by following up sightings and throwing them into a computer. They will keep feeding you the information they want you to process. What is the only source of data about the UFO phenomenon? It is the UFOs themselves!”

Some things were beginning to make a lot of sense. “If you’re right, what can I do? It seems that research on the phenomenon is hopeless, then. I might as well dump my computer into a river.”

“Not necessarily, but you should try a different approach. First you should work entirely outside of the organized UFO groups; they are infiltrated by the same official agencies they are trying to influence, and they propagate any rumour anyone wants to have circulated. In Intelligence circles, people like that are historical necessities. We call them ‘useful idiots’. When you’ve worked long enough for Uncle Sam, you know he is involved in a lot of strange things. The data these groups get is biased at the source, but they play a useful role.

“Second, you should look for the irrational, the bizarre, the elements that do not fit...Have you ever felt that you were getting close to something that didn’t seem to fit any rational pattern yet gave you a strong impression that it was significant?”'

comment by Estarlio · 2013-08-05T14:53:40.549Z · score: 13 (13 votes) · LW · GW

Gregory (Scotland Yard detective): “Is there any other point to which you would wish to draw my attention?”

Holmes: “To the curious incident of the dog in the night-time.”

Gregory: “The dog did nothing in the night-time.”

Holmes: “That was the curious incident.”

  • “Silver Blaze” (Sir Arthur Conan Doyle)

comment by MixedNuts · 2013-08-11T18:30:14.259Z · score: 2 (2 votes) · LW · GW

If UFOs are controlled by a non-human intelligence, assuming they'll behave like human schemes is as pointless as assuming they'll behave like natural phenomena. But of course the premise is false and the Major's approach is correct.

comment by FiftyTwo · 2013-08-14T21:08:16.579Z · score: 3 (3 votes) · LW · GW

A creature that can build a spaceship is probably closer to one that can build a plane than it is to a rock. At least, you have to start somewhere.

comment by Kaj_Sotala · 2013-08-05T20:24:08.919Z · score: 16 (20 votes) · LW · GW

Old man: Gotcha! So you do collect answers after all!

Eye: But of course! Everybody does! You need answers to base decisions on. Decisions that lead to actions. We wouldn't do much of anything, if we were always indecisive!

All I am saying is that I see no point in treasuring them! That's all!

Once you see that an answer is not serving its question properly anymore, it should be tossed away. It's just their natural life cycle. They usually kick and scream, raising one hell of a ruckus when we ask them to leave. Especially when they have been with us for a long time.

You see, too many actions have been based on those answers. Too much work and energy invested in them. They feel so important, so full of themselves. They will answer to no one. Not even to their initial question!

What's the point if a wrong answer will stop you from returning to the right question. Although sometimes people have no questions to return to... which is usually why they defend them, with such strong conviction.

That's exactly why I am extra cautious with all these big ol' answers that have been lying around, long before we came along. They bully their way into our collection without being invited by any questions of our own. We accept them just because they have satisfied the questions of so many before us... seeking the questions which fits them instead...

My favorite kind of answers are those that my questions give birth to. Questions that I managed to keep safe long enough to do so. These baby answers might seem insignificant in comparison at first, but they are of a much better quality.

comment by RowanE · 2013-08-06T14:00:32.074Z · score: 3 (3 votes) · LW · GW

This is good, although when I read the comic I find myself interpreting Eye as valuing curiosity for curiosity's sake alone, in direct opposition to valuing truth, which I can't really get behind and leads to me siding with the old man.

comment by Polina · 2013-08-05T11:47:45.305Z · score: 15 (19 votes) · LW · GW

Life isn't about finding yourself. Life is about creating yourself.

George Bernard Shaw

comment by RichardKennaway · 2013-08-05T14:57:10.776Z · score: 10 (10 votes) · LW · GW

I agree with the thought, but I find the attribution implausible. "Finding yourself" sounds like modern pop-psych, not a phrase that GBS would ever have written. Google doesn't turn up a source.

comment by ChristianKl · 2013-08-06T23:46:03.915Z · score: 3 (3 votes) · LW · GW

Google Ngram suggests that "finding yourself" wasn't a phrase that was really in use before the 1960s, albeit there was a short uptick in 1940. Given that you need some time for criticism and Shaw died in 1950, I think it's quite clear that this quote is too modern for him. Although maybe post-modern is a more fitting word?

The timeframe seems to correspond with the rise of post-modern thought. If you suddenly start deconstructing everything you need to find yourself again ;)

comment by Polina · 2013-08-06T07:48:15.014Z · score: 2 (2 votes) · LW · GW

I think you are right that it is difficult to find the exact source. I came upon this quotation in the book Up where the author quoted Bernard Shaw. Google gave me http://www.goodreads.com/author/quotes/5217.George_Bernard_Shaw, but no article or play was indicated as a source of this quote.

comment by NancyLebovitz · 2013-08-08T02:09:46.725Z · score: 3 (3 votes) · LW · GW

"Life is about creating yourself" still might be problematic because the emphasis is still on what sort of person you are.

comment by Swimmer963 · 2013-08-08T02:31:18.760Z · score: 0 (0 votes) · LW · GW

As opposed to what? I would guess maybe a better concept is what you're able to get done...

comment by Vaniver · 2013-08-08T02:51:21.910Z · score: 1 (1 votes) · LW · GW

I think the implied contrast is between "creating yourself" and "what you do" or the less pretty but more precise "doing your actions." The first implies a smaller, more rigid set than the last, which is perhaps not the correct way to perceive life.

comment by Joshua_Blaine · 2013-08-02T17:49:04.355Z · score: 15 (21 votes) · LW · GW

The best solution to a problem is usually the easiest one.

-- GLaDOS from Portal 2

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-02T21:03:39.940Z · score: 22 (24 votes) · LW · GW

If you cast out all the easy strategies that don't actually work as non-'solutions', then sure, in what remains among the set of solutions, the best is often the easiest, though not easy. I can think of much harder ways to save the world and I'm not trying any of them.

comment by shminux · 2013-08-02T18:08:29.024Z · score: 1 (5 votes) · LW · GW

If you define best as easiest.

comment by Joshua_Blaine · 2013-08-02T20:27:48.291Z · score: 5 (7 votes) · LW · GW

If best is defined as easiest, then the "usually" within the quote is entirely superfluous. "If" statements are logically exception-less, and the Law of Conserved Conversation (That I've just made up) means that "usually" implies exceptions. Otherwise it would be excluded from the quote. So I say, pedantically, "duh. but you're missing the point a bit, aren't you mate?"

I like to think of the principle as a kind of Occam's for action. Don't take elaborate actions to produce some solution that is otherwise trivially easy to produce.

comment by [deleted] · 2013-08-02T22:09:08.341Z · score: 2 (2 votes) · LW · GW

the Law of Conserved Conversation (That I've just made up)

You may want to read something about pragmatics, starting with e.g. the section on conversational implicatures in Chapter 1 of CGEL.

(Your made-up law sounds related to these.)

comment by Joshua_Blaine · 2013-08-03T00:35:40.456Z · score: 2 (2 votes) · LW · GW

Huh. The Maxim of Relation does sound very much like what I was trying to go for.

comment by Vaniver · 2013-08-02T20:23:14.040Z · score: 4 (4 votes) · LW · GW

I see it as more of a "rather than sorting projects by revenue, make sure to sort them by profit," combined with "in cases where revenue is concave and cost linear, which happen frequently, the lowest cost project is probably going to be the highest profit."
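
A toy numeric sketch of that claim, under a hypothetical concave revenue curve and made-up costs: the biggest project can win on revenue while the cheapest wins on profit.

```python
import math

# Hypothetical concave revenue curve: diminishing returns to spending.
def revenue(cost):
    return 40 * math.log(1 + cost)

projects = {"small": 40, "medium": 120, "large": 300}  # linear costs

# Revenue rises with cost, but profit is highest for the cheapest project.
profits = {name: revenue(c) - c for name, c in projects.items()}
```

Sorting these projects by revenue would pick "large"; sorting by profit picks "small". Whether the cheapest project actually wins depends on where the projects sit on the curve, so this is one illustrative instance, not a general theorem.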

comment by dspeyer · 2013-08-04T21:24:20.407Z · score: 3 (3 votes) · LW · GW

That plus "beware inflated revenue estimates, especially for have-it-all type plans". Cost estimates are often much more accurate.

comment by DSherron · 2013-08-02T19:29:38.189Z · score: 1 (1 votes) · LW · GW

Alternatively, if you define solution such that any two given solutions are equally acceptable with respect to the original problem.

comment by Panic_Lobster · 2013-08-14T06:28:36.118Z · score: 14 (16 votes) · LW · GW

Karl Popper used to begin his lecture course on the philosophy of science by asking the students simply to 'observe'. Then he would wait in silence for one of them to ask what they were supposed to observe. [...] So he would explain to them that scientific observation is impossible without pre-existing knowledge about what to look at, what to look for, how to look, and how to interpret what one sees. And he would explain that, therefore, theory has to come first. It has to be conjectured, not derived.

David Deutsch, The Beginning of Infinity

comment by Bugmaster · 2013-08-14T20:53:42.187Z · score: 6 (8 votes) · LW · GW

Did Karl Popper populate his class with particularly unimaginative students? If someone asked me to "observe", I'd fill an entire notebook with observations in less than an hour -- and that's even without getting up from my chair.

comment by Estarlio · 2013-08-15T00:23:06.557Z · score: 7 (7 votes) · LW · GW

And, while you were writing, someone would provide the wanted answer ;)

comment by fubarobfusco · 2013-08-15T14:20:35.676Z · score: 3 (3 votes) · LW · GW

I'm pretty sure I had this very exercise in a creative-writing class somewhere in school.

comment by rule_and_line · 2013-08-14T22:36:08.908Z · score: 3 (3 votes) · LW · GW

That's an interesting prediction. Have you tried it? Can you predict what you'd do after filling the notebook?

In my imagination, I'd probably wind up in one of two states:

  • Feeling tricked and asking myself "What was the point of that?"
  • Feeling accomplished and waiting for the next instruction.

comment by Bugmaster · 2013-08-14T23:27:40.829Z · score: 1 (1 votes) · LW · GW

I have never tried it myself in a structured setting, such as a classroom; but I do sometimes notice things, and then ask myself, "What is going on here? Why does this thing behave in the way that it does?". Sometimes I think about it for a while, figure out what sounds like a good answer, then go on with my day. Sometimes I shrug and forget about it. Sometimes -- very rarely -- I'm interested enough to launch a more thorough investigation. I imagine that if I set myself an actual goal to "observe" stuff, I'd notice a lot more stuff, and spend much more time on investigating it.

You say that, in such a situation, you could end up "feeling tricked", but this assumes that the teacher who told you to "observe" is being dishonest: he's not interested in your observations, he's just interested in pushing his favorite philosophy onto you. This may or may not be the case with Karl Popper, but observations are valuable (and, IMO, fun) regardless.

comment by Daniel_Burfoot · 2013-08-22T21:40:49.366Z · score: 0 (0 votes) · LW · GW

Hmm, this point seems more Kuhnian than Popperian. Maybe Deutsch got the two confused.

comment by RichardKennaway · 2013-08-20T19:17:34.828Z · score: 0 (0 votes) · LW · GW

Another view.

comment by philh · 2013-08-10T23:48:06.693Z · score: 14 (16 votes) · LW · GW

"But think how small he is," said the Black Panther, who would have spoiled Mowgli if he had had his own way. "How can his little head carry all thy long talk?"

"Is there anything in the jungle too little to be killed? No. That is why I teach him these things, and that is why I hit him, very softly, when he forgets."

Rudyard Kipling, The Jungle Book

comment by snafoo · 2013-08-04T17:50:23.520Z · score: 14 (28 votes) · LW · GW

I am, somehow, less interested in the weight and convolutions of Einstein’s brain than in the near certainty that people of equal talent have lived and died in cotton fields and sweatshops.

Stephen Jay Gould

comment by gwern · 2013-08-04T22:55:10.764Z · score: 6 (18 votes) · LW · GW

There was only one Ramanujan; and we are all well aware of Gould's views on intelligence here, I presume.

comment by [deleted] · 2013-08-05T17:35:20.566Z · score: 4 (4 votes) · LW · GW

There was only one Ramanujan

In what reference class?

comment by gwern · 2013-08-05T19:45:55.784Z · score: 11 (23 votes) · LW · GW

I chose Ramanujan as my example because mathematics is extremely meritocratic, as proven by how he went from a poor/middle-class Indian on the verge of starving to England on the strength of his correspondence & papers. If there really were countless such people, we would see many, many examples of starving farmers banging out impressive proofs and achieving fame somewhat comparable to Einstein's; hence the reference class of peasant-Einsteins must be very small, since we see so few people using sheer brainpower to become famous the way Ramanujan did.

(Or we could simply point out that with average IQs in the 70s and 80s, and average mathematician IQs closer to 140 - roughly 4 standard deviations away - even in a population of billions we would expect only a small handful of Ramanujans, consistent with the evidence. Gould, of course, being a Marxist who denies the validity of general intelligence, would not agree.)
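A back-of-the-envelope sketch of that tail argument. All the inputs here (mean 80, SD 15, a population of 1.3 billion, and the thresholds) are assumed illustration values, not measured ones:

```python
import math

def normal_tail(z):
    """P(Z > z) for a standard normal variable, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

population = 1.3e9          # assumed population size
mean, sd = 80.0, 15.0       # assumed IQ mean and standard deviation

for threshold in (140, 160):
    z = (threshold - mean) / sd
    expected = population * normal_tail(z)
    print(f"IQ > {threshold} (z = {z:.2f}): expect ~{expected:,.0f} people")
```

Note how sensitive the conclusion is to the threshold: at IQ 140 (z = 4) the expected count is still in the tens of thousands, so the "small handful" conclusion depends on Ramanujan-level talent lying well beyond ordinary-mathematician level, out past z ≈ 5.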

comment by Vaniver · 2013-08-06T01:41:36.572Z · score: 23 (25 votes) · LW · GW

from poor/middle-class Indian

It is worth pointing out that Ramanujan, while poor, was still a Brahmin.

comment by NancyLebovitz · 2013-08-07T15:11:37.670Z · score: 13 (15 votes) · LW · GW

And not just that, but he had more education than the poorest Indians, and probably more than the second poorest. And got his hands on a math textbook, which was probably pretty low probability.

My bet is that there aren't a lot of geniuses doing stoop labor, especially in traditional peasant situations, but there are some who would have been geniuses if they'd had enough food when young and some education.

comment by gwern · 2013-08-11T17:13:14.129Z · score: 3 (5 votes) · LW · GW

And not just that, but he had more education than the poorest Indians, and probably more than the second poorest.

Even the poorest Indians (or Chinese, for that matter) will sacrifice to put their children through school. Ramanujan's initial education does not seem to have been too extraordinary, before his gifts became manifest (he scored first in exams, and that was how he was able to go to a well-regarded high school; pg25).

And got his hands on a math textbook, which was probably pretty low probability.

Actually, we know how he got his initial textbooks, which was in a way which emphasizes his poverty; pg26-27:

Ramanujan's family, always strapped for cash, often took in boarders. Around the time he was eleven, there were two of them, Brahmin boys, one from the neighboring district of Trichinopoly, one from Tirunelveli far to the south, studying at the nearby Government College. Noticing Ramanujan's interest in mathematics, they fed it with whatever they knew. Within months he had exhausted their knowledge and was pestering them for math texts from the college library. Among those they brought to him was an 1893 English textbook popular in South Indian colleges and English preparatory schools, S. L. Loney's Trigonometry, which actually ranged into more advanced realms. By the time Ramanujan was thirteen, he had mastered it.

...He became something of a minor celebrity. All through his school years, he walked off with merit certificates and volumes of English poetry as scholastic prizes. Finally, at a ceremony in 1904, when Ramanujan was being awarded the K. Ranganatha Rao prize for mathematics, headmaster Krishnaswami Iyer introduced him to the audience as a student who, were it possible, deserved higher than the maximum possible marks. An A-plus, or 100 percent, wouldn't do to rate him. Ramanujan, he was saying, was off-scale.

So just as well he was being lent and awarded all his books, because certainly at age 11 as a poor Indian it's hard to see how he could afford expensive rare math or English books...

but there are some who would have been geniuses if they'd had enough food when young and some education.

A rather tautological comment: yes, if we removed all the factors preventing people from being X, then presumably more people would be X...

comment by Estarlio · 2013-08-11T17:59:38.962Z · score: 0 (0 votes) · LW · GW

Is the distribution for mathematicians in general independent of IQ and of a wealthy upbringing / proximity to cultural centres that reward such learning? That might tell you whether wealth / culture is a third correlate.

Otherwise, one way or the other, I'm not sure one person shifts the prob any appreciable distance.

comment by gwern · 2013-08-12T15:28:55.353Z · score: 1 (3 votes) · LW · GW

Otherwise, one way or the other, I'm not sure one person shifts the prob any appreciable distance.

It really depends on what 'prob' you're talking about. For example, the mean of some variable can be shifted an arbitrary amount by a single data point if that point is arbitrarily large, which is why "robust statistics" shuns the mean in favor of things like the median, and of course a single counter-example disproves a universal claim. When you are talking about lists of geniuses where the relevant group might be 10 or 20 people, 1 person may be fairly meaningful because the group is so small.
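A toy illustration of that robustness point (made-up numbers, nothing to do with the genius data itself):

```python
def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    # average the two middle values when the list has even length
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

data = [1, 2, 3, 4, 5]
with_outlier = data + [10**6]   # one arbitrarily large observation

print(mean(data), median(data))                  # both describe the bulk of the data
print(mean(with_outlier), median(with_outlier))  # mean explodes; median barely moves
```

One extreme value drags the mean past 160,000 while the median shifts only from 3 to 3.5 - which is exactly why robust statistics prefers the median.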

comment by gwern · 2013-08-11T17:05:10.084Z · score: 5 (5 votes) · LW · GW

Being a Brahmin does not put rice on the table. Again, he was on the brink of starving, he says; this screens off any group considerations - we know he was very poor.

comment by Vaniver · 2013-08-11T21:37:05.362Z · score: 2 (2 votes) · LW · GW

Being a Brahmin does not put rice on the table. Again, he was on the brink of starving, he says; this screens off any group considerations - we know he was very poor.

It screens off any wealth considerations, with the exception of his education (which is mildly relevant). It has a big impact on the question of average IQ and ancestry, though. Brahmin average IQ is probably north of 100,* and so a first-rank mathematician coming from a Brahmin family of any wealth level is not as surprising as a first-rank mathematician coming from a Dalit family.

So we still need to explain the absence (as far as I know) of first-rank Dalit mathematicians. Gould argues that they're there, and we're missing them; the hereditarian argues that they're not there. One way to distinguish between the two is to evaluate the counterfactual statement "if they were there, they wouldn't be missed," and while Ramanujan is evidence for that statement, it's weakened because of the potential impact of caste prejudice / barriers.

(It seems like the example of China might be better; it seems that young clever people have had the opportunity to escape sweatshops and cotton fields and enter the imperial service / university system for quite some time. Again, though, this is confounded by Han IQ being probably slightly north of 100, and so may not generalize beyond Northeast Asia and Europe.)

*Unfortunately, there is very little solid research on Indian IQ by caste.

comment by gwern · 2013-08-12T00:47:20.297Z · score: 1 (3 votes) · LW · GW

It has a big impact on the question of average IQ and ancestry, though. Brahmin average IQ is probably north of 100,* and so a first-rank mathematician coming from a Brahmin family of any wealth level is not as surprising as a first-rank mathematician coming from a Dalit family.

You'd need to examine the IQ of the poorer Brahmins, though, before you could say it's not surprising; otherwise if the poor Brahmins have the same IQs as equally poor Dalits, then it ought to be equally surprising.

One way to distinguish between the two is to evaluate the counterfactual statement "if they were there, they wouldn't be missed," and while Ramanujan is evidence for that statement it's weakened because of the potential impact of caste prejudice / barriers.

But Ramanujan is evidence against the Great Filters of nationality and poverty, which ought to be much bigger filters against possible Einsteins than caste.

It seems like the example of China might be better; it seems that young clever people have had the opportunity to escape sweatshops and cotton fields and enter the imperial service / university system for quite some time.

Yes, but I'm not very familiar with the background of major Chinese figures (e.g. I just looked him up now and while I had assumed Confucius was a minor aristocrat, apparently he was actually the son of an army officer and "is said to have worked as a shepherd, cowherd, clerk, and a book-keeper."); plus, you'd want to look at the post-Tang major Chinese figures, but that will exclude most major Chinese figures, period, like all the major philosophers - looking up the Chinese philosophy table in Murray's Human Accomplishment, the first 10 are all pre-examination (and Murray comments of one of them, "it was Zhu Xi who was responsible for making Mencius as well known as he is today, by including Mencius's work as part of "The Four Books" that became the central texts for both primary education and the civil service examinations").

comment by private_messaging · 2013-09-05T01:39:37.997Z · score: 0 (0 votes) · LW · GW

But Ramanujan is evidence against the Great Filters of nationality and poverty

He's literally as much evidence against those filters as he is evidence against hypothetical very low prevalence of poor innate geniuses.

comment by HonoreDB · 2013-08-07T14:06:48.660Z · score: 9 (19 votes) · LW · GW

I think it can be illustrative, as a counter to the spotlight effect, to look at the personalities of math/science outliers who come from privileged backgrounds, and imagine them being born into poverty. Oppenheimer's conjugate was jailed or executed for attempted murder, instead of being threatened with academic probation. Gödel's conjugate added a postscript to his proof warning that the British Royal Family were possible Nazi collaborators, which got it binned, which convinced him that all British mathematicians were in on the conspiracy. Newton and Turing's conjugates were murdered as teenagers on suspicion of homosexuality. I have to make these stories up because if you're poor and at all weird, flawed, or unlucky your story is rarely recorded.

comment by gwern · 2013-08-11T16:54:13.710Z · score: 10 (12 votes) · LW · GW

Oppenheimer's conjugate was jailed or executed for attempted murder, instead of being threatened with academic probation.

A gross exaggeration; execution was never in the cards for a poisoned apple which was never eaten.

Gödel's conjugate added a postscript to his proof warning that the British Royal Family were possible Nazi collaborators, which got it binned, which convinced him that all British mathematicians were in on the conspiracy.

Likewise. Goedel didn't go crazy until long after he was famous, and so your conjugate is in no way showing 'privilege'.

Newton and Turing's conjugates were murdered as teenagers on suspicion of homosexuality.

Likewise. You have some strange Whiggish conception of history in which every period was one where gays would be lynched; Turing would not have been lynched any more than President Buchanan would have, because so many upper-class Englishmen were notorious practicing gays and their boarding schools Sodoms and Gomorrahs. To remember the context of Turing's homosexuality conviction: this was the same period in which highly-placed gay Englishman after gay Englishman was turning out to be a Soviet mole (see the Cambridge Five and how the bisexual Kim Philby nearly became head of MI6!) EDIT: pg137-144 of the Ramanujan book I've been quoting discusses the extensive homosexuality at Cambridge and its elite, and how tolerance of homosexuality ebbed and flowed, with the close of the Victorian age being particularly intolerant.

The right conjugate for Newton, by the way, reads 'and his heretical Christian views were discovered, he was fired from Cambridge - like his successor as Lucasian Professor - and died a martyr'.

I have to make these stories up because if you're poor and at all weird, flawed, or unlucky your story is rarely recorded.

The problem is, we have these stories. We have Ramanujan who by his own testimony was on the verge of starvation - and if that is not poor, then you are not using the word as I understand it - and we have William Shakespeare (no aristocrat he), and we have Epicurus who was a slave. There is no censorship of poor and middle-class Einsteins. And this is exactly what we would expect when we consider what it takes to be a genius like Einstein, to be gifted in multiple ways, to be far out on multiple distributions (giving us a highly skewed distribution of accomplishment, see the Lotka curve): we would expect a handful of outliers who come from populations with low means, and otherwise our lists to be dominated by outliers from populations with higher means, without any appeal to Marxian oppression or discrimination necessary.

comment by HonoreDB · 2013-08-15T19:00:51.289Z · score: 8 (10 votes) · LW · GW

Do you really think the existence of oppression is a figment of Marxist ideology? If being poor didn't make it harder to become a famous mathematician given innate ability, I'm not sure "poverty" would be a coherent concept. If you're poor, you don't just have to be far out on multiple distributions, you also have to be at the mean or above in several more (health, willpower, various kinds of luck). Ramanujan barely made it over the finish line before dying of malnutrition.

Even if the mean mathematical ability in Indians were innately low (I'm quite skeptical there), that would itself imply a context containing more censoring factors for any potential Einsteins... to become a mathematician, you have to, at minimum, be aware that higher math exists, that you're unusually good at it by world standards, and that being a mathematician at that level is a viable way to support your family.

On your specific objections to my conjugates...I'm fairly confident that confessing to poisoning someone else's food usually gets you incarcerated, and occasionally gets you killed (think feudal society or mob-ridden areas), and is at least a career-limiting move if you don't start from a privileged position. Hardly a gross exaggeration. Goedel didn't become clinically paranoid until later, but he was always the sort of person who would thoughtlessly insult an important gatekeeper's government, which is part of what I was getting at; Ramanujan was more politic than your average mathematician. I actually was thinking of making Newton's conjugate be into Hindu mysticism instead of Christian but that seemed too elaborate.

comment by gwern · 2013-09-03T18:24:01.458Z · score: 3 (5 votes) · LW · GW

Do you really think the existence of oppression is a figment of Marxist ideology?

I'm perfectly happy to accept the existence of oppression, but I see no need to make up ways in which the oppression might be even more awful than one had previously thought. Isn't it enough that peasants live shorter lives, are deprived of stuff, can be abused by the wealthy, etc.? Why do we need to make up additional ways in which they might be oppressed? Gould comes off here as engaging in a horns effect: not only is oppression bad in the obvious concrete well-verified ways, it's the Worst Thing In The World and so it's also oppressing Einsteins!

If being poor didn't make it harder to become a famous mathematician given innate ability, I'm not sure "poverty" would be a coherent concept.

Not what Gould hyperbolically claimed. He didn't say that 'at the margin, there may be someone who was slightly better than your average mathematician but who failed to get tenure thanks to some lingering disadvantages from his childhood'. He claimed that there were outright historic geniuses laboring in the fields. I regard this as completely ludicrous due both to the effects of poverty & oppression on means & tails and due to the pretty effective meritocratic mechanisms in even a backwater like India.

Even if the mean mathematical ability in Indians were innately low (I'm quite skeptical there)

It absolutely is. Don't confuse the fact that there are quite a few brilliant Indians in absolute numbers with a statement about the mean - with a population of ~1.3 billion people, that's just proving the point.

to become a mathematician, you have to, at minimum, be aware that higher math exists, that you're unusually good at it by world standards, and being a mathematician at that level is a viable way to support your family.

The talent can manifest as early as arithmetic, which is taught to a great many poor people, I am given to understand.

I'm fairly confident that confessing to poisoning someone else's food usually gets you incarcerated, and occasionally gets you killed (think feudal society or mob-ridden areas), and is at least a career-limiting move if you don't start from a privileged position.

Really? Then I'm sure you could name three examples.

Goedel didn't become clinically paranoid until later, but he was always the sort of person who would thoughtlessly insult an important gatekeeper's government, which is part of what I was getting at

Sorry, I can only read what you wrote. If you meant he lacked tact, you shouldn't have brought up insanity.

Ramanujan was more politic than your average mathematician.

Really? Because his mathematician peers were completely exasperated at him. What, exactly, was he politic about?

comment by HonoreDB · 2013-09-04T00:16:04.046Z · score: 4 (4 votes) · LW · GW

the effects of poverty & oppression on means & tails

Wait, what are you saying here? That there aren't any Einsteins in sweatshops in part because their innate mathematical ability got stunted by malnutrition and lack of education? That seems like basically conceding the point, unless we're arguing about whether there should be a program to give a battery of genius tests to every poor adult in India.

The talent can manifest as early as arithmetic, which is taught to a great many poor people, I am given to understand.

Not all of them, I don't think. And then you have to have a talent that manifests early, have someone in your community who knows that a kid with a talent for arithmetic might have a talent for higher math, knows that a talent for higher math can lead to a way to support your family, expects that you'll be given a chance to prove yourself, gives a shit, has a way of getting you tested...

I'm fairly confident that confessing to poisoning someone else's food usually gets you incarcerated, and occasionally gets you killed (think feudal society or mob-ridden areas), and is at least a career-limiting move if you don't start from a privileged position.

Really? Then I'm sure you could name three examples.

Just going off Google, here: People being incarcerated for unsuccessful attempts to poison someone: http://digitaljournal.com/article/346684 http://charlotte.news14.com/content/headlines/628564/teen-arrested-for-trying-to-poison-mother-s-coffee/ http://www.ksl.com/?nid=148&sid=85968

Person being killed for suspected unsuccessful attempt to poison someone: http://zeenews.india.com/news/bihar/man-lynched-for-trying-to-poison-hand-pump_869197.html

Sorry, I can only read what you wrote. If you meant he lacked tact, you shouldn't have brought up insanity.

I was trying to elegantly combine the Incident with the Debilitating Paranoia and the Incident with the Telling The Citizenship Judge That Nazis Could Easily Take Over The United States. Clearly didn't completely come across.

Really? Because his mathematician peers were completely exasperated at him. What, exactly, was he politic about?

He was politic enough to overcome Vast Cultural Differences and get somewhat integrated into an insular community. I hang out with mathematicians a lot; my stereotype of them is that they tend not to be good at that.

comment by hairyfigment · 2013-09-04T01:38:35.347Z · score: 2 (2 votes) · LW · GW

He claimed that there were outright historic geniuses laboring in the fields.

And this part seems entirely plausible. American slaves had no opportunity to become famous mathematicians unless they escaped, or chanced to have an implausibly benevolent Dumbledore of an owner.

Gould makes a much stronger claim, and I attach little probability to the part about the present day. But even there, you're ignoring one or two good points about the actions of famous mathematicians. Demanding citations for 'trying to kill people can ruin your life' seems frankly bizarre.

comment by Vaniver · 2013-09-03T19:20:52.718Z · score: -1 (5 votes) · LW · GW

Do you really think the existence of oppression is a figment of Marxist ideology?

The specific oppressions you led off with: yes.

I'm fairly confident that confessing to poisoning someone else's food usually gets you incarcerated, and occasionally gets you killed (think feudal society or mob-ridden areas)

I thought we were talking about Oppenheimer and Cambridge? It looks like if Oppenheimer hadn't had rich parents who lobbied on his behalf, he might have gotten probation instead of escaping punishment entirely. Given his instability, that might have pushed him into a self-destructive spiral, or maybe he just would have progressed a little slower through the system. So, yes, jumping from "the university is unhappy" to "the state hangs you" is a gross exaggeration. (Universities are used to graduate students being under a ton of stress, and so do cut them slack; the response to Oppenheimer of "we think you need to go on vacation, for everyone's safety" was 'normal'.)

comment by HonoreDB · 2013-09-03T23:58:44.397Z · score: 2 (6 votes) · LW · GW

"Oppenheimer wasn't privileged, he was only treated slightly better than the average Cambridge student."

I'm sorry, I never really rigorously defined the counter-factuals we were playing with, but the fact that Oppenheimer was in a context where attempted murder didn't sink his career is surely relevant to the overall question of whether there are Einsteins in sweatshops.

comment by Vaniver · 2013-09-04T11:50:09.871Z · score: 2 (4 votes) · LW · GW

the fact that Oppenheimer was in a context where attempted murder didn't sink his career is surely relevant to the overall question of whether there are Einsteins in sweatshops.

I don't see the relevance, because to me "Einsteins in sweatshops" means "Einsteins that don't make it to <institution>", for some Cambridge equivalent. If Ramanujan had died three years earlier, and thus not completed his PhD, he would still be in the history books. I mean, take Galois as an example: repeatedly imprisoned for political radicalism under a monarchy, and dead in a duel at age 20. Certainly someone ruined by circumstances--and yet we still know about him and his mathematical work.

In general, these counterfactuals are useful for exhibiting your theory but not proving your theory. Either we have the same background assumptions- and so the counterfactuals look reasonable to both of us- or we disagree on background assumptions, and the counterfactual is only weakly useful at identifying where the disagreement is.

comment by Jayson_Virissimo · 2013-08-11T17:50:39.613Z · score: 5 (5 votes) · LW · GW

...and we have Epicurus who was a slave.

I don't think Epicurus was a slave. He did admit slaves to his school, though, which was not typical for his time. Perhaps you are referring to the Stoic Epictetus, who definitely was a slave (although a white-collar one).

comment by gwern · 2013-08-12T15:14:40.405Z · score: 4 (4 votes) · LW · GW

Whups, you're right. Some of the Greek philosophers' names are so easy to confuse (I still confuse Xenophanes and Xenophon). Well, Epictetus was still important, if not as important as Epicurus.

comment by Grant · 2013-08-06T16:42:26.216Z · score: 5 (5 votes) · LW · GW

I think a better term might be 'meritocratic', and not 'democratic'. Unless mathematicians vote on mathematics?

comment by gwern · 2013-08-06T21:50:54.427Z · score: 2 (4 votes) · LW · GW

Well, it is also democratic in the sense that what convinces the mathematical community is what matters, and there's no 'President of Mathematics' or 'Academie de la Mathematique' laying down the rules, but yes, 'meritocratic' is closer to what I meant.

comment by [deleted] · 2013-08-11T15:15:44.257Z · score: 0 (2 votes) · LW · GW

Well, “democratic” strongly suggests a majority vote, and it's not like something that convinces 54% of the mathematicians who read it ‘wins’.

comment by gwern · 2013-08-12T18:30:22.932Z · score: 4 (6 votes) · LW · GW

pg169-171, Kanigel's 1991 The Man Who Knew Infinity:

It wasn't the first time a letter had launched the career of a famous mathematician. Indeed, as the mathematician Louis J. Mordell would later insist, "It is really an easy matter for anyone who has done brilliant mathematical work to bring himself to the attention of the mathematical world, no matter how obscure or unknown he is or how insignificant a position he occupies. All he need do is to send an account of his results to a leading authority," as Jacobi had in writing Legendre on elliptic functions, or as Hermite had in writing Jacobi on number theory.

And yet, if Mordell was right-if "it is really an easy matter" - why had Gauss spurned Abel? Carl Friedrich Gauss was the premier mathematician of his time, and, perhaps, of all time. The Norwegian Niels Henrik Abel, just twenty-two at the time he wrote Gauss, had proved that some equations of the fifth degree (like x^5 + 3x^4 + ... = 0) could never be solved algebraically. That was a real coup, especially since leading mathematicians had for years sought a general solution that, Abel now showed, didn't exist. Yet when he sent his proof to Gauss, the man history records as "the Prince of Mathematics" tossed it aside without reading it. "Here," one account has him saying, dismissing Abel's paper as the work of a crank, "is another of those monstrosities."

Then, too, if "it is really an easy matter," why had Ramanujan's brilliance failed to cast an equal spell on Baker and Hobson, the other two Cambridge mathematicians to whom he had written?...The other Cambridge mathematician, a Senior Wrangler, was E. W. Hobson, who was in his late fifties when he heard from Ramanujan and more eminent even than Baker. His high forehead, prominent mustache, and striking eyes helped make him, in Hardy's words, "a distinguished and conspicuous figure" around Cambridge. But he was remembered, too, as a dull lecturer, and after he died his most important book was described in words like "systematic," "exhaustive," and "comprehensive," never in language suggesting great imagination or flair. "An old stick-in-the-mud," someone once called him.

...Of course, Ramanujan's fate had always hung on a knife edge, and it had never taken more than the slightest want of imagination, the briefest hesitancy, to tip the balance against him. Only the most stubborn persistence on the part of his friend Rajagopalachari had gained him the sympathy of Ramachandra Rao. And Hardy himself was put off by Ramanujan's letter before he was won over by it. The cards are stacked against any original mind, and perhaps properly so. After all, many who claim the mantle of "new and original" are indeed new, and original - but not better. So, in a sense, it should be neither surprising nor reason for any but the mildest rebuke that Hobson and Baker said no. Nor should it be surprising that no one in India had made much of Ramanujan's work. Hardy was perhaps England's premier mathematician, the beneficiary of the finest education, in touch with the latest mathematical thought and, to boot, an expert in several fields Ramanujan plowed.... And yet a day with Ramanujan's theorems had left him bewildered. "I had never seen anything in the least like them before." Like the Indians, Hardy did not know what to make of Ramanujan's work. Like them, he doubted his own judgment of it. Indeed, it is not just that he discerned genius in Ramanujan that redounds to his credit today; it is that he battered down his own wall of skepticism to do so. That Ramanujan was Indian probably didn't taint him in Hardy's eyes.

Personally, having finished reading the book, I think Kanigel is wrong to think there is so much contingency here. He paints a vivid picture of why Ramanujan had failed out of school, lost his scholarships, and had difficulties publishing, and why two Cambridge mathematicians might mostly ignore his letter: Ramanujan's stubborn refusal to study non-mathematical topics and refusal to provide reasonably rigorous proofs. His life could have been much easier if he had been less eccentric and prideful. That despite all his self-inflicted problems he was brought to Cambridge anyway is a testimony to how talent will out.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-06T16:38:36.012Z · score: 4 (6 votes) · LW · GW

Was extremely democratic. Do we know this is still true?

comment by gwern · 2013-08-06T21:58:41.597Z · score: 17 (19 votes) · LW · GW

"The Collapse of the Soviet Union and the Productivity of American Mathematicians" comes to mind as an interesting recent natural experiment, in which the floodgates of Russian mathematical talent opened after the collapse of the USSR and many Russian mathematicians successfully rose in America despite academic math being a zero-sum game; consistent with meritocracy.

comment by Lumifer · 2013-08-06T18:17:19.487Z · score: 5 (7 votes) · LW · GW

At the outlier level, I think so -- see e.g. Perelman. At the normal professor-of-mathematics level, probably not.

comment by [deleted] · 2013-08-06T10:30:19.242Z · score: 3 (3 votes) · LW · GW

Okay, maybe there aren't other examples quite as good as him, but a few of these people surely come close.

Or we could simply point out that with average IQs in the 70s and 80s, average mathematician IQs closer to 140s - or 4 standard deviations away, even in a population of billions we still would only expect a small handful of Ramanujans - consistent with the evidence.

Yes, but I'm not sure all of the populations working in cotton fields and sweatshops had such a low average IQ. (And Gould just said “people”, not “innumerable people” or something like that.)

comment by gwern · 2013-08-06T21:54:18.934Z · score: 4 (4 votes) · LW · GW

Most of those people either seem to come from middle-class or better backgrounds, fall well below Einstein, or both (I mean, Eliezer Yudkowsky?)

comment by jasonsaied · 2013-08-07T05:35:05.470Z · score: 4 (6 votes) · LW · GW

Doesn't your observation that most successful autodidacts come from financially stable backgrounds SUPPORT the hypothesis that intelligent individuals from low-income backgrounds are prevented from becoming successful?

With the facts you've highlighted, two conclusions may be drawn: either most poor people are stupid, or the aforementioned "starving farmers" don't have the time or the resources to educate themselves or "[bang] out some impressive proofs," on account of the whole "I'm starving and need to grow some food" thing. I don't see how such people would be able to afford books to learn from or time to spend reading them.

comment by gwern · 2013-08-11T16:58:31.614Z · score: 3 (3 votes) · LW · GW

Doesn't your observation that most successful autodidacts come from financially stable backgrounds SUPPORT the hypothesis that intelligent individuals from low-income backgrounds are prevented from becoming successful?

No, it doesn't; see my other comment. I was criticizing the list as a bizarre selection which did not include anyone remotely like Einstein.

I don't see how such people would be able to afford books to learn from

How did Ramanujan afford books?

The answer to the autodidact point is to point out that once one has proven one's Einstein-level talent, one is integrated into the meritocratic system and no longer considered an autodidact.

comment by DanArmak · 2013-08-16T20:09:12.428Z · score: -1 (1 votes) · LW · GW

(And Gould just said “people”, not “innumerable people” or something like that.)

Did you mean innumerate people?

comment by [deleted] · 2013-08-16T21:23:06.640Z · score: 3 (3 votes) · LW · GW

I meant ‘lots of people’, not ‘people who cannot do arithmetic’. looks word up EDIT: Huh, looks like that was the right word after all.

comment by DanArmak · 2013-08-16T22:38:00.560Z · score: 2 (2 votes) · LW · GW

Sorry, then. Your phrasing sounded wrong to me, but I was wrong.

comment by Document · 2013-08-16T22:08:21.424Z · score: 1 (1 votes) · LW · GW

Will you update your post after looking the word up confirms that it means what you thought it did?

comment by [deleted] · 2013-08-16T22:20:28.862Z · score: 2 (2 votes) · LW · GW

I was going to but I forgot to. Thank you.

comment by Oscar_Cunningham · 2013-08-05T23:03:23.033Z · score: 3 (3 votes) · LW · GW

on the verge of starving

I haven't heard that before. Do you have a source?

comment by Vaniver · 2013-08-05T23:28:59.733Z · score: 11 (11 votes) · LW · GW

From his letter to G.H. Hardy:

I am already a half starving man. To preserve my brains I want food and this is my first consideration. Any sympathetic letter from you will be helpful to me here to get a scholarship either from the university or from the government.

Googling the text finds it quoted a bunch of places.

comment by Oscar_Cunningham · 2013-08-06T00:28:19.428Z · score: 3 (3 votes) · LW · GW

Wow, thanks!

comment by gwern · 2013-08-06T02:08:56.954Z · score: 9 (9 votes) · LW · GW

Besides his letter to Hardy, Wikipedia cites The Man Who Knew Infinity (on Libgen; it also quotes the 'half starving' passage), where the cited section reads:

Describing the obsession with college degrees among ambitious young Indians around this time, an English writer, Herbert Compton, noted how "the loaves and fishes fall far short of the multitude, and the result is the creation of armies of hungry 'hopefuls'—the name is a literal translation of the vernacular generic term omedwar used in describing them—who pass their lives in absolute idleness, waiting on the skirts of chance, or gravitate to courses entirely opposed to those which education intended." Ramanujan, it might have seemed in 1908, was just such an omedwar. Out of school, without a job, he hung around the house in Kumbakonam.

Times were hard. One day back at Pachaiyappa's, the wind had blown off Ramanujan's cap as he boarded the electric train for school, and Ramanujan's Sanskrit teacher, who insisted that boys wear their traditional tufts covered, asked him to step back out to the market and buy one. Ramanujan apologized that he lacked even the few annas it cost. (His classmates, who'd observed his often-threadbare dress, chipped in to buy it for him.)

Ramanujan's father never made more than about twenty rupees a month; a rupee bought about twenty-five pounds of rice. Agricultural workers in surrounding villages earned four or five annas, or about a quarter rupee, per day; so many families were far worse off than Ramanujan's. But by the standards of the Brahmin professional community in which Ramanujan moved, it was close to penury.

The family took in boarders; that brought in another ten rupees per month. And Komalatammal sang at the temple, bringing in a few more. Still, Ramanujan occasionally went hungry. Sometimes, an old woman in the neighborhood would invite him in for a midday meal. Another family, that of Ramanujan's friend S. M. Subramanian, would also take him in, feeding him dosai, the lentil pancakes that are a staple of South Indian cooking. One time in 1908, Ramanujan's mother stopped by the Subramanian house lamenting that she had no rice. The boy's mother fed her and sent her younger son, Anantharaman, to find Ramanujan. Anantharaman led him to the house of his aunt, who filled him up on rice and butter.

To bring in money, Ramanujan approached friends of the family; perhaps they had accounts to post, or books to reconcile? Or a son to tutor? One student, for seven rupees a month, was Viswanatha Sastri, son of a Government College philosophy professor. Early each morning, Ramanujan would walk to the boy's house on Solaiappa Mudali Street, at the other end of town, to coach him in algebra, geometry, and trigonometry. The only trouble was, he couldn't stick to the course material. He'd teach the standard method today and then, if Viswanatha forgot it, would improvise a wholly new one tomorrow. Soon he'd be lost in areas the boy's regular teacher never touched.

Sometimes he would fly off onto philosophical tangents. They'd be discussing the height of a wall, perhaps for a trigonometry problem, and Ramanujan would insist that its height was, of course, only relative: who could say how high it seemed to an ant or a buffalo? One time he asked how the world would look when first created, before there was anyone to view it. He took delight, too, in posing sly little problems: If you take a belt, he asked Viswanatha and his father, and cinch it tight around the earth's twenty-five-thousand-mile-long equator, then let it out just 271" feet-about two yards-how far off the earth's surface would it stand? Some tiny fraction of an inch? Nope, one foot.

Viswanatha Sastri found Ramanujan inspiring; other students, however, did not. One classmate from high school, N. Govindaraja Iyengar, asked Ramanujan to help him with differential calculus for his B.A. exam. The arrangement lasted all of two weeks. You can think of calculus as a set of powerful mathematical tools; that's how most students learn it and what most exams require. Or else you can appreciate it for the subtle questions it poses about the nature of the infinitesimally small and the infinitely large. Ramanujan, either unmindful of his students' practical needs or unwilling to cater to them, stressed the latter. "He would talk only of infinity and infinitesimals," wrote Govindaraja, who was no slouch intellectually and wound up as chairman of India's public service commission. "I felt that his tuition [teaching] might not be of real use to me in the examination, and so I gave it up."

Ramanujan had lost all his scholarships. He had failed in school. Even as a tutor of the subject he loved most, he'd been found wanting. He had nothing.

And yet, viewed a little differently, he had everything. For now there was nothing to distract him from his notebooks-notebooks, crammed with theorems, that each day, each week, bulged wider.

comment by philh · 2013-08-10T23:39:18.511Z · score: 0 (0 votes) · LW · GW

If you take a belt, he asked Viswanatha and his father, and cinch it tight around the earth's twenty-five-thousand-mile-Iong equator, then let it out just 271" feet-about two yards-how far off the earth's surface would it stand? Some tiny fraction of an inch? Nope, one foot.

I can't parse '271" feet'; is this an OCR issue? If you loosen the belt by two yards, it can obviously reach at least a yard above the surface, because you can just go from ____ to __|__. And I recall that the actual answer is considerably more than that.

comment by AndHisHorse · 2013-08-11T00:03:11.050Z · score: 5 (5 votes) · LW · GW

Given that the symbol " is the symbol for inches, and ' is the symbol for feet, I would suspect that there has been a mistyping in the quote.

I think that what was meant to be there was 72" or 72.1" (inches), which is exactly/one-tenth of an inch over two yards (one yard = three feet). That would produce the desired result of a nearly one-foot increase in the radius of the belt; adding 72 inches to the circumference of the belt would produce an increase of 11.46 inches (72 inches / (2 * pi)) in the radius of the belt, which in this case is the height above the ground.
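The arithmetic in the comment above follows from a single identity: for a circular belt, the extra standoff is the extra circumference divided by 2π, independent of the planet's size. A quick sketch to check the numbers (the function name is mine):

```python
import math

def radius_increase(extra_circumference):
    """Extra height the belt stands off the surface: dR = dC / (2*pi).
    Note the planet's radius cancels out entirely."""
    return extra_circumference / (2 * math.pi)

# 72 extra inches of belt -> about 11.46 inches of extra radius
print(round(radius_increase(72), 2))   # 11.46 (inches)

# 2*pi extra feet (about two yards) -> exactly 1 foot, the riddle's answer
print(radius_increase(2 * math.pi))    # 1.0 (feet)
```

This also suggests a reading of the garbled '271" feet': letting the belt out exactly 2π feet (about two yards) raises it exactly one foot, so the original text may have been "2π feet".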

comment by Swimmer963 · 2013-08-11T01:00:20.046Z · score: 2 (4 votes) · LW · GW

Or we could simply point out that with average IQs in the 70s and 80s, average mathematician IQs closer to 140s - or 4 standard deviations away.

Isn't the average IQ 100 by definition?

comment by gwern · 2013-08-11T03:22:38.216Z · score: 4 (4 votes) · LW · GW

Yes - but whose average?

comment by Swimmer963 · 2013-08-11T04:41:43.365Z · score: 3 (3 votes) · LW · GW

Presumably the people who write the IQ test, based on whatever population sample they use to calibrate it. Is the point that the average IQ in India is 70-80, as opposed to the average in the US? (This could be technically true on an IQ test written in the US, without being meaningful, or it could be actually true because of nutrition or whatever). What data does the number 70-80 actually come from?

comment by ESRogs · 2013-08-11T06:22:06.775Z · score: 4 (4 votes) · LW · GW

What data does the number 70-80 actually come from?

Presumably from this list.

comment by private_messaging · 2013-09-05T00:09:17.571Z · score: 0 (0 votes) · LW · GW

Or we could simply point out that with average IQs in the 70s and 80s, average mathematician IQs closer to 140s - or 4 standard deviations away, even in a population of billions we still would only expect a small handful of Ramanujans - consistent with the evidence.

It would naively seem that an IQ of 160 or more is 5 SDs from a mean of 85, but 4 SDs from a mean of 100, so the rarity would be 1/3,483,046 vs 1/31,560, a huge ratio of about 110 times the prevalence of extreme genius between the populations.

Except that this is not how it works when the IQ-100 population has been selected from the other and subsequently has lower variance. Nor is it how the Flynn effect worked. Because, of course, the standard deviation is not going to remain constant.
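Under the comment's naive assumptions (normal tails, the same 15-point SD for both populations), the rarity figures can be reproduced from the standard normal upper-tail probability; a rough check (the helper function is mine, and it deliberately ignores the selection and variance caveats the comment raises):

```python
import math

def upper_tail(z):
    """P(Z > z) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# IQ 160 with SD 15 is 5 SDs above a mean of 85, but 4 SDs above a mean of 100
p5 = upper_tail(5.0)   # roughly 1 in 3.5 million
p4 = upper_tail(4.0)   # roughly 1 in 32,000
print(round(p4 / p5))  # prevalence ratio: about 110
```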

comment by mwengler · 2013-08-05T16:22:40.682Z · score: 1 (3 votes) · LW · GW

You presume too much; the only thing I remember about Gould's views is that they are controversial.

comment by wedrifid · 2013-08-04T18:54:40.184Z · score: 6 (12 votes) · LW · GW

I am, somehow, less interested in the weight and convolutions of Einstein’s brain than in the near certainty that people of equal talent have lived and died in cotton fields and sweatshops.

A proactive interest in the latter would seem to lead to extensive instrumental interest in the former. Finding things (such as convolutions in brains or genes) that are indicative of potentially valuable talent is the kind of thing that helps make efficient use of it.

comment by ialdabaoth · 2013-08-04T20:07:01.574Z · score: 16 (24 votes) · LW · GW

There are surprisingly few MRI machines or DNA sequencers in cotton fields and sweatshops. Paraphrasing the original quote from Stephen Jay Gould: The problem is not how good we are at detecting talent; it's where we even bother to look for it.

comment by ChristianKl · 2013-08-06T20:46:53.624Z · score: 6 (6 votes) · LW · GW

You need neither MRI machines nor DNA sequencers to detect intelligence. IQ tests perform much better at detecting it.

comment by gwern · 2013-08-06T22:20:22.343Z · score: 9 (11 votes) · LW · GW

Yes; at this point with only 3 SNPs linked to intelligence, it's a joke to say that 'poor people aren't being sequenced and this is why we aren't detecting hidden gems'.

comment by ialdabaoth · 2013-08-06T22:59:01.581Z · score: 4 (6 votes) · LW · GW

Yes, but that wasn't the point of my post; I was replying to:

Finding things (such as convolutions in brains or genes) that are indicative of potentially valuable talent is the kind of thing that helps make efficient use of it.

An MRI machine was an example of a device that could detect convolutions in brains; a DNA sequencer was an example of a device that could detect genes. My point generalized to "it doesn't matter how good you are at testing for a trait, if you don't apply the test." If we look at IQ tests instead, then (again) it doesn't matter how accurately a properly-administered IQ test detects intelligence, if you don't bother properly administering IQ tests to people in cotton fields, sweatshops, or other places where you don't feel like looking because they aren't "under the lamppost", as it were.

comment by ChristianKl · 2013-08-06T23:29:10.374Z · score: 1 (1 votes) · LW · GW

In a country like China there's quite a bit of testing in school. I think it's quite plausible that there are people who went through the Chinese school system working in Chinese sweatshops and cotton fields.

comment by ialdabaoth · 2013-08-07T00:09:42.801Z · score: 2 (4 votes) · LW · GW

Is their IQ test properly designed and administered, or does the test-as-given have hidden correlations with things other than IQ?

comment by RolfAndreassen · 2013-08-10T04:58:14.518Z · score: 8 (8 votes) · LW · GW

I suspect, actually, that Gould would not view "find the geniuses and get them out of the fields" as a reasonable solution to the problem he poses. What he wants is for there to be no stoop labour in the first place, whether for geniuses or the terminally mediocre. The geniuses are just a way to illustrate the problem.

comment by Estarlio · 2013-08-04T19:57:52.115Z · score: 2 (4 votes) · LW · GW

That's a hard problem, with no reasonable way in sight to measure it in a large population, or even to establish the direction of the relationship. Ideally you'd take a bunch of kids and look at their brains and then see how they grew up and see whether you could find anything that altered the distribution in similar cases - but ....

Well, you see the problem? It's a sort of twiddling-your-thumbs style of studying, rather than addressing more immediate problems that might do something at a reasonable price/timeline.

comment by Eugine_Nier · 2013-08-04T05:43:25.833Z · score: 14 (24 votes) · LW · GW

And anyone that’s been involved in philanthropy eventually comes to that point. When you try to help, you try to give things, you start to have the consequences. There’s an author Bob Lupton, who really nails it when he says that when he gave something the first time, there was gratitude; and when he gave something a second time to that same community, there was anticipation; the third time, there was expectation; the fourth time, there was entitlement; and the fifth time, there was dependency. That is what we’ve all experienced when we’ve wanted to do good. Something changes the more we just give hand-out after hand-out. Something that is designed to be a help actually causes harm.

Peter Greer

comment by FiftyTwo · 2013-08-14T21:09:53.058Z · score: 1 (1 votes) · LW · GW

The other way to look at that is the other agent doing basic induction.

comment by Eugine_Nier · 2013-08-15T01:05:47.808Z · score: 1 (5 votes) · LW · GW

It is. That doesn't mean the results are good.

comment by Salemicus · 2013-08-22T23:16:10.670Z · score: 13 (13 votes) · LW · GW

Finding a good formulation for a problem is often most of the work of solving it... Problem formulation and problem solution are mutually-recursive processes.

David Chapman

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-23T18:45:06.123Z · score: 3 (3 votes) · LW · GW

See also: "Figuring out what should be your top priority" vs. "Actually working on your current best guess".

comment by shminux · 2013-08-19T16:36:20.088Z · score: 13 (15 votes) · LW · GW

If your parents made you practice the flute for 10,000 hours, and it wasn't your thing, you aren't an expert. You're a victim.

The most important skill involved in success is knowing how and when to switch to a game with better odds for you.

Scott Adams

comment by Lumifer · 2013-08-19T17:47:35.340Z · score: 13 (13 votes) · LW · GW

Aka http://demotivators.despair.com/demotivational/stupiditydemotivator.jpg

"Quitters never win, winners never quit, but those who never win AND never quit are idiots"

comment by Viliam_Bur · 2013-08-25T20:01:42.335Z · score: 4 (4 votes) · LW · GW

From the same website, another LessWrongian wisdom:

The bad news is robots can do your job now. The good news is we're now hiring robot repair technicians. The worse news is we're working on robot-fixing robots - and we do not anticipate any further good news.

comment by [deleted] · 2013-08-19T16:48:12.737Z · score: 3 (3 votes) · LW · GW

This is an incredibly important life skill.

comment by anonym · 2013-08-21T02:23:44.347Z · score: 12 (12 votes) · LW · GW

The opposite intellectual sin to wanting to derive everything from fundamental physics is holism which makes too much of the fact that everything is ultimately connected to everything else. Sure, but scientific progress is made by finding where the connections are weak enough to allow separate theories.

-- John McCarthy

comment by hylleddin · 2013-08-02T21:24:40.468Z · score: 12 (14 votes) · LW · GW

The mark of a great man is one who knows when to set aside the important things in order to accomplish the vital ones.

-- Tillaume, The Alloy of Law

comment by satt · 2013-08-24T12:09:13.559Z · score: 0 (0 votes) · LW · GW

Nothing is compulsory, but some things are necessary.

— Robert Fripp

comment by Eugine_Nier · 2013-08-04T06:13:25.297Z · score: 11 (17 votes) · LW · GW

So, in a business setting, you’ve got to provide value to your customers so that they pay for the goods and services that you’re providing. Philanthropy is unfortunate in that the people that your customer base is made of oftentimes are the people that are writing the checks to support you. The people that are writing the donation checks are what keep organizations in business oftentimes. The people that are receiving the services, then, are oftentimes not paying for the services, and therefore their voice is not heard. And so within the nonprofit space, we’ve created a system where he/she who tells the best story is the one that’s rewarded. There’s an incentive to push down the stories that are not of positive impact. There’s the incentive to pretend that there are no negative things that happen, there’s the incentive to make sure that our failures are never made public, and there’s the disconnect between who’s paying for the service and who’s receiving the services. When you disconnect those two aspects, you do not have accountability that acts in the best interest of the people who are receiving what we are all trying to do, which is just to help in places of great need.

Peter Greer

comment by mwengler · 2013-08-05T16:28:45.873Z · score: 3 (3 votes) · LW · GW

And so within the nonprofit space, we’ve created a system where he/she who tells the best story is the one that’s rewarded.

Rewarding those who tell great stories is hardly limited to non-profits. Hollywood of course does this, as well it should. Fund-raising for new ventures does this a lot; raising money for many sorts of investment at the retail level is largely an effort of telling good stories not particularly supported by statistical fact.

Which isn't to say that this is not a problem for non-profits, but rather that non-profits might do well to see how other industries deal with this phenomenon.

comment by Eugine_Nier · 2013-08-06T03:43:55.674Z · score: 0 (4 votes) · LW · GW

Fund raising for new ventures does this a lot, raising money for many sorts of investment at the retail level is largely an effort of telling good stories not particularly supported by statistical fact.

At least in investing the people listening to the stories eventually find out whether their investment went sour.

comment by Document · 2013-08-06T03:12:35.165Z · score: 1 (1 votes) · LW · GW

The problem is doubtless exacerbated when those paying for the service and those receiving it live in different time periods.

comment by cody-bryce · 2013-08-02T22:28:23.775Z · score: 11 (17 votes) · LW · GW

I just think it's good to be confident. If I'm not on my team why should anybody else be?

-Robert Downey Jr.

comment by Document · 2013-08-03T02:25:30.195Z · score: 10 (16 votes) · LW · GW

I think it's good to be well-calibrated.

comment by wedrifid · 2013-08-03T03:06:40.097Z · score: 13 (13 votes) · LW · GW

I think it's good to be well-calibrated.

It is usually best to be socially confident while making well-calibrated predictions of success. The two are only slightly related and Downey is definitely talking about the social kind of confidence.

comment by Document · 2013-08-03T04:08:28.174Z · score: 2 (2 votes) · LW · GW

Good point. I'm still not sure I like his framing of social interactions as getting people on "your" team (which I may be partly biased in by the source of the quote), but the objection in my initial post isn't a good one.

comment by DanArmak · 2013-08-03T09:36:13.015Z · score: 1 (1 votes) · LW · GW

I think it's best to be well-calibrated, use that to choose your team as one that's going to succeed, and then to be confident.

comment by dspeyer · 2013-08-04T21:16:21.370Z · score: 2 (6 votes) · LW · GW

Maybe I'm misunderstanding the quote, but this seems to wither if you have something to protect. If I'm having surgery, I don't really want the team of expert surgeons listening to my suggestions. I shouldn't be on my team because I'm not qualified. Highly qualified people should be so that my team will win (and I get to live).

comment by Estarlio · 2013-08-05T16:06:31.470Z · score: 10 (12 votes) · LW · GW

Well, I think the thrust of the quote had more to do with being confident in your own projects. But I'll try to do an answer to your point because I think it's important to recognise the limitations of domain specialists - some of whom just aren't very good at their jobs.

If you're not on your team of expert surgeons, you're gonna be screwed if they're not actually as expert as you might think they were. There's a bit in What Do You Care What Other People Think? where Feynman is talking about his first wife's hospitalisation - and how he had done some reading around the area and come up with the idea that it might be TB - and didn't push for the idea because he thought that the doctors knew what they were doing.

Then, sometime later, the bump began to change. It got bigger—or maybe it was smaller—and she got a fever. The fever got worse, so the family doctor decided Arlene should go to the hospital. She was told she had typhoid fever. Right away, as I still do today, I looked up the disease in medical books and read all about it.

When I went to see Arlene in the hospital, she was in quarantine—we had to put on special gowns when we entered her room, and so on. The doctor was there, so I asked him how the Wydell test came out—it was an absolute test for typhoid fever that involved checking for bacteria in the feces. He said, "It was negative." "What? How can that be!" I said. "Why all these gowns, when you can't even find the bacteria in an experiment? Maybe she doesn't have typhoid fever!"

The result of that was that the doctor talked to Arlene's parents, who told me not to interfere. "After all, he's the doctor. You're only her fiancé." I've found out since that such people don't know what they're doing, and get insulted when you make some suggestion or criticism. I realize that now, but I wish I had been much stronger then and told her parents that the doctor was an idiot—which he was—and didn't know what he was doing. But as it was, her parents were in charge of it.

Anyway, after a little while, Arlene got better, apparently: the swelling went down and the fever went away. But after some weeks the swelling started again, and this time she went to another doctor. This guy feels under her armpits and in her groin, and so on, and notices there's swelling in those places, too. He says the problem is in her lymphatic glands, but he doesn't yet know what the specific disease is. He will consult with other doctors. As soon as I hear about it I go down to the library at Princeton and look up lymphatic diseases, and find "Swelling of the Lymphatic Glands. (1) Tuberculosis of the lymphatic glands. This is very easy to diagnose . . ."—so I figure this isn't what Arlene has, because the doctors are having trouble trying to figure it out.

[Feynman moves onto less likely possibilities]

One of the diseases I told Arlene about was Hodgkin's disease. When she next saw her doctor, she asked him about it: "Could it be Hodgkin's disease?" He said, "Well, yes, that's a possibility." When she went to the county hospital, the doctor wrote the following diagnosis: "Hodgkin's disease—?" So I realized that the doctor didn't know any more than I did about this problem. The county hospital gave Arlene all sorts of tests and X-ray treatments for this "Hodgkin's disease—?" and there were special meetings to discuss this peculiar case.

I remember waiting for her outside, in the hall. When the meeting was over, the nurse wheeled her out in a wheelchair. All of a sudden a little guy comes running out of the meeting room and catches up with us. "Tell me," he says, out of breath, "do you spit up blood? Have you ever coughed up blood?" The nurse says, "Go away! Go away! What kind of thing is that to ask of a patient!"—and brushes him away. Then she turned to us and said, "That man is a doctor from the neighborhood who comes to the meetings and is always making trouble. That's not the kind of thing to ask of a patient!" I didn't catch on. The doctor was checking a certain possibility, and if I had been smart, I would have asked him what it was.

Finally, after a lot of discussion, a doctor at the hospital tells me they figure the most likely possibility is Hodgkin's disease. He says, "There will be some periods of improvement, and some periods in the hospital. It will be on and off, getting gradually worse. There's no way to reverse it entirely. It's fatal after a few years."

[Gets convinced to lie to her that it's Hodgkins - lie falls through]

For some months now Arlene's doctors had wanted to take a biopsy of the swelling on her neck, but her parents didn't want it done—they didn't want to "bother the poor sick girl." But with new resolve, I kept working on them, explaining that it's important to get as much information as possible. With Arlene's help, I finally convinced her parents. A few days later, Arlene telephones me and says, "They got a report from the biopsy." "Yeah? Is it good or bad?" "I don't know. Come over and let's talk about it." When I got to her house, she showed me the report. It said, "Biopsy shows tuberculosis of the lymphatic gland." That really got me. I mean, that was the first goddamn thing on the list! I passed it by, because the book said it was easy to diagnose, and because the doctors were having so much trouble trying to figure out what it was. I assumed they had checked the obvious case. And it was the obvious case: the man who had come running out of the meeting room asking "Do you spit up blood?" had the right idea. He knew what it probably was!

I felt like a jerk, because I had passed over the obvious possibility by using circumstantial evidence—which isn't any good—and by assuming the doctors were more intelligent than they were. Otherwise, I would have suggested it right off, and perhaps the doctor would have diagnosed Arlene's disease way back then as "tuberculosis of the lymphatic gland—?" I was a dope. I've learned, since then.

=====================

Point being, disinvolving yourself from decisions is not a no-risk choice, and specialists aren't necessarily wise just because they've sat through the classes and crammed some sort of knowledge into their heads to get a degree. Assigning trust is a difficult subject.

There's a book called The Speed of Trust - and that's pretty much what you give up in being involved in complex decisions where you're not a specialist and where the specialists are actually really good at their jobs - a bit of speed.

comment by ChristianKl · 2013-08-06T14:29:08.491Z · score: 3 (5 votes) · LW · GW

Expert surgeons tend to think that more problems should be solved via surgery than doctors who aren't surgeons do. Before getting surgery you should always talk with a doctor who knows something about the kind of illness you have and who isn't a surgeon.

After the operation is done, doctors will ask you if everything is alright with you. If you try to understand what the operation involved, you will give your doctor answers that are likely to be more informative than if you just try to place all responsibility onto another person.

Especially if you feel something that's not normal for the type of operation you had, it's important to be confident that you perceive something that's worth bringing to the attention of your doctor.

Having had big operations (one with 8 weeks of hospitalisation and one with 3 weeks) myself, I think not taking enough responsibility for myself in those contexts was one of the worst decisions I made in my life. But then I was young and stupid about how the world works at the time.

comment by RichardKennaway · 2013-08-04T22:41:55.945Z · score: 2 (2 votes) · LW · GW

Maybe I'm misunderstanding the quote, but this seems to wither if you have something to protect.

Only if you're not the one with the responsibility to do something to protect it. I don't know the context of the quote, other than apparently being from an interview (with the actor, not any character he has played), but I read it as being about your own efforts to accomplish something. In such matters, you are the first person on your team, and you won't get any others on board by telling them you're not sure this is a good idea. Once you've made the decision that you are going to go for it, you have to then go for it, not sit around wondering if it's the right decision. If you're not acting on a decision, you didn't make it.

comment by Document · 2013-08-05T01:30:05.291Z · score: 0 (0 votes) · LW · GW

That may be a better wording of what I was trying to say here.

comment by Vladimir_Nesov · 2013-08-23T19:52:54.245Z · score: 1 (1 votes) · LW · GW

This works as a rationalization growing from the conclusion that others should be "on your team". If on well-calibrated assessment you yourself are not "on your team", others probably shouldn't be either, in which case projecting confidence amounts to deceit.

comment by wedrifid · 2013-08-24T08:32:03.718Z · score: 0 (2 votes) · LW · GW

This works as a rationalization growing from the conclusion that others should be "on your team". If on well-calibrated assessment you yourself are not "on your team", others probably shouldn't be either, in which case projecting confidence amounts to deceit.

(Unless I don't understand what you are saying) I reject whatever definition 'deceit' is given such that the above claim is true. Behaving in a socially confident manner is different in nature to lying.

comment by Vladimir_Nesov · 2013-08-24T13:22:46.500Z · score: 1 (1 votes) · LW · GW

Behaving in a socially confident manner is different in nature to lying.

I was using "confidence" in a more specific sense, as in "overconfidence", that is implying that you know what you are doing, in the case where you actually don't. "Socially confident manner" might in contrast (for example, among many other things) involve willingness to state your state of uncertainty, as opposed to hiding it (including behind overconfidence).

comment by wedrifid · 2013-08-24T14:34:23.082Z · score: 1 (1 votes) · LW · GW

I was using "confidence" in a more specific sense, as in "overconfidence", that is implying that you know what you are doing, in the case where you actually don't.

This seems reasonable. Misleading about probabilities is deceptive. To be fair to Robert Downey, it doesn't seem likely that that is the usage he was making in the quote.

comment by Kawoomba · 2013-08-24T08:56:20.972Z · score: 0 (0 votes) · LW · GW

If on well-calibrated assessment you yourself are not "on your team", others probably shouldn't be either, in which case projecting confidence amounts to deceit.

Behaving in a socially confident manner is different in nature to lying.

Jehovah's Witnesses (or insert your cult of choice) who secretly don't believe in what they're selling, army recruiters who have secretly come to know and reject the horrors of war, insurance salesmen who sell useless policies:

All these (and many others) can be deceitful even without telling you their respective lies explicitly, just by using their social capital / community standing / aura of authority to signal their allegiance to their tribe, lending it credence in a deceitful (dishonest because not in tune with their well-calibrated assessment) manner. The similarity to lying comes from social cues (such as exuding confidence in one's role) and 'explicit' lies being forms of communication both.

comment by wedrifid · 2013-08-24T09:50:09.300Z · score: 0 (0 votes) · LW · GW

It is possible to deceive others while using social confidence signals. Such signals are instrumentally useful, or even vital, for this and many other purposes. But that is not the same thing as the confidence itself being deceitful.

comment by cody-bryce · 2013-08-03T04:48:51.265Z · score: 0 (0 votes) · LW · GW

A somewhat similar sentiment: http://lesswrong.com/lw/2o3/rationality_quotes_september_2010/2kol

comment by duckduckMOO · 2013-08-23T18:41:24.871Z · score: 1 (1 votes) · LW · GW

Why shouldn't they be? The idea that if you don't rate yourself highly no one should is just an excuse for shitty instincts.

Obviously it's a useful piece of nonsense to tell yourself. People are more likely to come to your side if you are confident. But the explicit reasoning is reprehensible. (Not that any explicit reasoning probably went into it; it's such a common idea that it is repeated without thought. It's almost a universal applause light.)

This is more of an irrationality quote. A bit of paper-thin justification for a shitty but common sentiment which it's useful to adopt rather than notice.

comment by jbay · 2013-08-02T14:10:09.900Z · score: 10 (10 votes) · LW · GW

But, unlike other species, we also know how not to know. We employ this unique ability to suppress our knowledge not just of mortality, but of everything we find uncomfortable, until our survival strategy becomes a threat to our survival.

[...] There is no virtue in sustaining a set of beliefs, regardless of the evidence. There is no virtue in either following other people unquestioningly or in cultivating a loyal and unquestioning band of followers.

While you can be definitively wrong, you cannot be definitely right. The best anyone can do is constantly to review the evidence and to keep improving and updating their knowledge. Journalism which attempts this is worth reading. Journalism which does not is a waste of time.

comment by DSherron · 2013-08-04T22:49:14.255Z · score: 7 (7 votes) · LW · GW

While you can be definitively wrong, you cannot be definitely right.

Not true. Trivially, if A is definitively wrong, then ~A is definitively right. Popperian falsification is trumped by Bayes' Theorem.

Note: This means that you cannot be definitively wrong, not that you can be definitively right.

comment by Document · 2013-08-03T02:34:29.323Z · score: 7 (9 votes) · LW · GW

There is no virtue in either following other people unquestioningly or in cultivating a loyal and unquestioning band of followers.

True, but possibly dangerously close to "There is no virtue in following other people or in cultivating followers".

comment by iDante · 2013-08-10T22:10:52.299Z · score: 9 (13 votes) · LW · GW

To the layman, the philosopher, or the classical physicist, a statement of the form "this particle doesn't have a well-defined position" (or momentum, or x-component of spin angular momentum, or whatever) sounds vague, incompetent, or (worst of all) profound. It is none of these. But its precise meaning is, I think, almost impossible to convey to anyone who has not studied quantum mechanics in some depth.

comment by DanArmak · 2013-08-16T19:50:04.145Z · score: 2 (2 votes) · LW · GW

I haven't studied quantum mechanics in any depth at all. The meaning I, as a layman, derive from this statement is: in the formal QM system a particle has no property labelled "position". There is perhaps an emergent property called position, but it is not fundamental and is not always well defined, just like there are no ice-cream atoms. Is this wrong?

comment by pragmatist · 2013-08-16T20:48:20.989Z · score: 13 (13 votes) · LW · GW

Yes, it's wrong. In the QM formalism position is a fundamental property. However, the way physical properties work is very different from classical mechanics (CM). In CM, a property is basically a function that maps physical states to real numbers. So the x-component of momentum, for instance, is a function that takes a state as input and spits out a number as output, and that number is the value of the property for that state. Same state, same number, always. This is what it means for a property to have a well-defined value for every state.

In QM, physical properties are more complicated -- they're linear operators, if you want a mathematically exact treatment. But here's an attempt at an intuitive explanation: There are some special quantum states (called eigenstates) for which physical properties behave pretty much like they do in CM. If the particle is in one of those states, then the property takes the state as input and basically just spits out a number. Whenever the particle is in that state, you get the same number. For those states, the property does have a well-defined value.

But the problem in QM is that those are not the only states there are. There are other states as well. These states are linear combinations of the eigenstates, i.e. they correspond to sums of eigenstates (states in QM are basically just vectors, so you can sum them together). These linear combinations are not themselves eigenstates. When you input them into the property, it spits out multiple numbers, not just one. In fact it spits out all the numbers corresponding to each of the eigenstates that are summed together to form our linear combination state. So if A and B are eigenstates for which the property in question spits out numbers a and b respectively, then for the combined state A + B, the property will spit out both a and b -- two numbers, not just one.

So the property isn't just a simple function from states to numbers; for some states you end up with more than one number. And which of those numbers do you see when you make a measurement? Well, that depends on your interpretation. In collapse theories, for instance, you see one of the numbers chosen at random. In MWI, the world branches and each one of those numbers is seen on a separate branch. So there's the sense in which properties aren't well-defined in QM -- properties don't associate a unique number with every physical state. This is all pretty hand-wavey, I realize, but Griffiths is right. If you really want an understanding of what's going on, then you need to study QM in some depth.

Also, I should say that in MWI there is something to your claim that the position of a particle is emergent and not fundamental, but this is not so much because of the nature of the property. It's because particles themselves are emergent and non-fundamental in MWI. The universal wavefunction is fundamental.
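The eigenstate picture described above can be sketched numerically. The following is a minimal illustration (my own, not from the thread), using the Pauli-Z spin operator as the "property": applying it to an eigenstate just rescales the state by a single number, while a superposition has no single associated value, only Born-rule probabilities for each eigenvalue.

```python
import numpy as np

# Observable ("property"): the Pauli-Z operator on a two-state system.
Z = np.array([[1.0, 0.0],
              [0.0, -1.0]])

# Its eigenstates: the operator maps each one to a multiple of itself,
# so the property has a single well-defined value (+1 or -1).
up = np.array([1.0, 0.0])    # eigenvalue +1
down = np.array([0.0, 1.0])  # eigenvalue -1
assert np.allclose(Z @ up, +1 * up)
assert np.allclose(Z @ down, -1 * down)

# A linear combination of eigenstates is not itself an eigenstate:
psi = (up + down) / np.sqrt(2)
# Z @ psi is not proportional to psi, so no single number is associated
# with this state -- the property is not "well-defined" here.
print(np.allclose(Z @ psi, psi * (psi @ Z @ psi)))  # False

# Born rule: the probability of observing each eigenvalue is the
# squared magnitude of the corresponding amplitude.
probs = np.abs(np.array([up @ psi, down @ psi])) ** 2
print(probs)  # [0.5 0.5]
```

This is only a two-dimensional toy (spin rather than position), but the structure is the same one the comment describes: eigenstates get one number, linear combinations get several.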

comment by DanArmak · 2013-08-16T22:40:47.959Z · score: 4 (4 votes) · LW · GW

Thanks for the detailed explanation! Now I have more fun words to remember without actually understanding :-)

Seriously, thanks for taking the time to explain that.

comment by gothgirl420666 · 2013-08-04T22:29:14.229Z · score: 9 (23 votes) · LW · GW

I like it when I hear philosophy in rap songs (or any kind of music, really) that I can actually fully agree with:

I never had belief in Christ, cus in the pictures he was white

Same color as the judge that gave my hood repeated life

Sentences for little shit, church I wasn't feeling it

Why the preacher tell us everything gon be alright?

Knew what it was for, still I felt that it was wrong

Till I heard Chef call himself God in the song

And it all made sense, cus you can't do shit

But look inside the mirror once it all goes wrong

You fix your own problems, tame your own conscience

All that holy water shit is nothing short of nonsense

Not denying Christ, I'm just denying niggas options

Cus prayer never moved my Grandmama out of Compton

I prayed for my cousin, but them niggas still shot him

Invest in a gun, cause them niggas still got them

And won't shit stop em from popping you in broad day

Hope that choir pew bulletproof or you gon' pay

-- Vince Staples, "Versace Rap"

comment by David_Gerard · 2013-08-05T20:59:07.655Z · score: 5 (5 votes) · LW · GW

It's quite sad that Tupac Shakur is the focus of so many conspiracy theories, because he was quite the sceptic about wasting your time on this stuff when there was real work to do making the world better.

comment by gothgirl420666 · 2013-08-05T22:19:07.541Z · score: 6 (6 votes) · LW · GW

I always thought it was interesting that Tupac got all the conspiracy theories while Biggie got none, despite the fact that Biggie released an album called Ready to Die, died, then two weeks later released an album called Life After Death. It's probably because Tupac's music appeals more to hippie types who are into this kind of stuff.

comment by Cthulhoo · 2013-08-02T07:33:43.261Z · score: 9 (11 votes) · LW · GW

Whatever alleged "truth" is proven by results to be but an empty fiction, let it be unceremoniously flung into the outer darkness, among the dead gods, dead empires, dead philosophies, and other useless lumber and wreckage!

Anton Lavey, The Satanic Bible, The Book of Satan II

comment by FiftyTwo · 2013-08-04T15:45:16.216Z · score: 2 (2 votes) · LW · GW

Isn't it better to examine a falsehood to discover why it was so popular and appealing before throwing it away?

comment by AndHisHorse · 2013-08-04T19:18:34.817Z · score: 7 (7 votes) · LW · GW

Then, to continue the metaphor, we should study it by telescope from afar, not as a present and influential entity in our own sphere of existence, but rather a distant body, informative but impotent, the object of curiosity rather than devotion.

comment by satt · 2013-08-24T11:03:06.591Z · score: 2 (2 votes) · LW · GW

Much of science, including social science, tries to explain things we all know, but science can also make a contribution by establishing that some of the things we all think we know simply are not so. In that case, social science may also try to explain why we think we know things that are not so, adding as it were a piece of knowledge to replace the one that has been taken away.

— Jon Elster, Explaining Social Behavior: More Nuts and Bolts for the Social Sciences, p. 16

comment by ChrisPine · 2013-08-04T17:30:38.726Z · score: 0 (0 votes) · LW · GW

Only if they won't let you throw it away.

comment by katydee · 2013-08-03T06:34:02.589Z · score: 8 (18 votes) · LW · GW

The tired and thirsty prospector threw himself down at the edge of the watering hole and started to drink. But then he looked around and saw skulls and bones everywhere. "Uh-oh," he thought. "This watering hole is reserved for skeletons."

Jack Handey

comment by Manfred · 2013-08-05T09:14:05.568Z · score: 6 (6 votes) · LW · GW

So good even dead people want to drink it.

comment by Document · 2013-08-06T17:28:23.281Z · score: 2 (2 votes) · LW · GW

(Reference.)

comment by linkhyrule5 · 2013-08-05T05:55:03.140Z · score: 0 (6 votes) · LW · GW

To be fair, if you see a watering hole surrounded by skeletons, it probably means the water's toxic.

comment by katydee · 2013-08-05T07:40:05.435Z · score: 5 (5 votes) · LW · GW

That's the joke.

comment by linkhyrule5 · 2013-08-05T18:40:11.330Z · score: 1 (3 votes) · LW · GW

Ah. I thought it was something like "I won't drink from this because it's reserved for skeletons (and will therefore die and perpetuate the cycle)," which was just bizarre enough to be a joke.

comment by Eugine_Nier · 2013-08-03T05:09:14.342Z · score: 8 (18 votes) · LW · GW

Nobody can believe nothing. When a man says he believes nothing, two things are true: first, that there is something in which he desperately, perhaps dearly, wishes not to believe; and second that there is some unspoken thing in which he secretly believes, perhaps even unknown to himself.

John C Wright

comment by NancyLebovitz · 2013-08-20T02:36:23.135Z · score: 6 (6 votes) · LW · GW

Is there a name for the fallacy of claiming to be an expert on the specific contents of other people's subconsciouses?

comment by MalcolmOcean (malcolmocean) · 2013-08-07T03:23:48.234Z · score: 2 (2 votes) · LW · GW

This sounds like it implies that both things must be true. It seems to me that either would be sufficient to justify someone saying they believe nothing.

comment by Ambition · 2013-08-02T02:32:30.655Z · score: 8 (14 votes) · LW · GW

He who knows nothing is closer to the truth than he whose mind is filled with falsehoods and errors.

-Thomas Jefferson

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-02T21:01:03.402Z · score: 22 (22 votes) · LW · GW

One who possesses a maximum-entropy prior is further from the truth than one who possesses an inductive prior riddled with many specific falsehoods and errors. Or more to the point, someone who endorses knowing nothing as a desirable state for fear of accepting falsehoods is further from the truth than somebody who believes many things, some of them false, but tries to pay attention and go on learning.

comment by NancyLebovitz · 2013-08-03T00:31:21.141Z · score: 7 (7 votes) · LW · GW

How about "If you know nothing and are willing to learn, you're closer to the truth than someone who's attached to falsehoods"? Even then, I suppose you'd need to throw in something about the speed of learning.

comment by AndHisHorse · 2013-08-04T23:19:15.284Z · score: 5 (5 votes) · LW · GW

It would seem that the difference of opinion here originates in the definition of further. Someone who knows nothing is further (in the information-theoretic sense) from the truth than someone who believes a falsehood, assuming that the falsehood has at least some basis in reality (even if only an accidental relation), because they must flip more bits of their belief (or lack thereof) to arrive at something resembling truth. On the other hand, in the limited, human, psychological sense, they are closer, because they have no attachments to relinquish, and they will not object to having their state of ignorance lifted from them, as one who believes in falsehoods might object to having their state of delusion destroyed.

comment by felzix · 2013-08-19T18:47:25.635Z · score: 0 (0 votes) · LW · GW

Right, I'd take it as a statement on how humans actually think, not how a perfect rationalist thinks. Or maybe how most humans think since humans can be unattached to their beliefs.

comment by Grant · 2013-08-05T07:56:18.314Z · score: 4 (4 votes) · LW · GW

To me "filled with falsehoods and errors" translates into more falsehoods than "some". Though I agree it's not a very good quote within the context of LW.

comment by Ambition · 2013-08-03T01:18:16.962Z · score: 3 (3 votes) · LW · GW

He who knows nothing is further from the truth than he whose mind is filled with falsehoods and errors, but has the courage to acknowledge them as so.

-LessWrong Community

comment by BlueSun · 2013-08-05T17:03:07.710Z · score: 2 (2 votes) · LW · GW

Maybe it's just where my mind was when I read it but I interpreted the quote as meaning something more like:

"It is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence."

comment by Decius · 2013-08-07T18:20:27.899Z · score: 1 (1 votes) · LW · GW

In what units does one measure distance from the truth, and in what manner?

comment by linkhyrule5 · 2013-08-10T01:56:22.014Z · score: 3 (3 votes) · LW · GW

Bits of Shannon entropy.

comment by Decius · 2013-08-10T02:26:22.726Z · score: 0 (0 votes) · LW · GW

That's half of the answer. In what manner does one measure the number of bits of Shannon entropy that a person has?

comment by [deleted] · 2013-08-13T18:15:19.869Z · score: 2 (2 votes) · LW · GW

If you make a numerical statement of your confidence -- P(A) = X, 0 < X < 1 -- measuring the Shannon entropy of that belief is a simple matter of observing the outcome and taking the binary logarithm of your prediction, or of its converse, depending on what came true. With S as the score: if A, then S = log2(X); if ¬A, then S = log2(1 - X).

The lower the magnitude of the resulting negative real, the better you fared.
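That scoring rule is short enough to write down directly; a minimal sketch of the formula above (the function name is mine):

```python
import math

def log_score(p_a, a_happened):
    """Score a stated confidence P(A) = p_a once the outcome is known.

    Returns log2 of the probability assigned to what actually happened:
    0 is a perfect score, and more negative means a worse prediction.
    """
    return math.log2(p_a) if a_happened else math.log2(1 - p_a)

print(round(log_score(0.9, True), 3))   # -0.152: confident and right
print(round(log_score(0.9, False), 3))  # -3.322: confident and wrong
```

Note the asymmetry: being confidently wrong costs far more than being confidently right gains, which is what makes the rule punish overconfidence.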

comment by Decius · 2013-08-13T20:15:33.684Z · score: 0 (0 votes) · LW · GW

That allows a prediction/confidence/belief to be measured. How do you total a person?

comment by [deleted] · 2013-08-13T23:44:07.795Z · score: 0 (0 votes) · LW · GW

Simple: under conditions that are dubiously ethical and dubiously physically possible, you turn their internal world model into a formal Bayesian network and, for every possible physical and mathematical observation and outcome, do the above calculation. Sum, print, idle.

It's impossible in practice, but it's only a four-line formal definition.

comment by Decius · 2013-08-14T05:40:36.054Z · score: 1 (1 votes) · LW · GW

How do you measure someone whose internal world model is not isomorphic to one formal Bayesian network (for example, someone who is completely certain of something)? Should it be the case that someone whose world model contains fewer possible observations has a major advantage in being closer to the truth?

Note also that a perfect Bayesian will score lower than some gamblers using this scheme. Betting everything on black does better than a fair distribution almost half the time.
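The last point can be checked by simulation; a hedged sketch (mine, assuming a fair 50/50 game and comparing single-trial log scores):

```python
import math
import random

random.seed(42)

trials = 100_000
p_black = 0.5  # assumed fair game, for illustration
gambler_better = 0
for _ in range(trials):
    black = random.random() < p_black
    # The gambler "bets everything on black", i.e. states P(black) = 1:
    gambler_score = 0.0 if black else float("-inf")  # log2(1) or log2(0)
    # The calibrated forecaster states P(black) = 0.5:
    calibrated_score = math.log2(0.5)  # -1 either way
    if gambler_score > calibrated_score:
        gambler_better += 1

print(gambler_better / trials)  # close to 0.5
```

The all-in gambler outscores the calibrated forecaster on roughly half of the individual trials, even though his expected log score is negative infinity, which is exactly the objection: a per-trial comparison rewards some reckless bettors.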

comment by [deleted] · 2013-08-16T13:23:35.488Z · score: 2 (2 votes) · LW · GW

I am not very certain that humans actually can have an internal belief model that isn't isomorphic to some Bayesian network. Anyone who proclaims to be absolutely certain, I suspect, is in fact not.

comment by pragmatist · 2013-08-16T21:39:07.956Z · score: 1 (1 votes) · LW · GW

How do you account for people falling prey to things like the conjunction fallacy?

comment by private_messaging · 2013-08-23T09:48:59.335Z · score: 3 (3 votes) · LW · GW

I don't think people just miscalculate conjunctions. Everyone will tell you that HFFHF is less probable than H, HF, or even HFF. Errors appear only when the strings get long, the difference is small, and the strings are quite specially crafted. And with the scenarios, a more detailed scenario looks more plausibly the product of some deliberate reasoning; plus, the existence of one detailed scenario is information about the existence of other detailed scenarios leading to the same outcome (and it must be made clear in the question that we are not asking about the outcome, but about everything happening precisely as the scenario specifies).

On top of that, the meaning of the word "probable" in everyday context is somewhat different - a proper study should ask people to actually make bets. All around it's not clear why people make this mistake, but it is clear that it is not some fully general failure to account for conjunctions.

edit: actually, just read the wikipedia article on the conjunction fallacy. When asking about "how many people out of 100", nobody gave a wrong answer. Which immediately implies that the understanding of "probable" has been an issue, or some other cause, but not some general failure to apply conjunctions.
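The string comparison above is just a conjunction of independent events; a quick sketch (mine), assuming each letter stands for one independent fair flip:

```python
# The probability of a given exact sequence of fair flips is the product
# of the per-flip probabilities -- a pure conjunction, so each added
# character multiplies the probability by 1/2.
def p_sequence(s):
    return 0.5 ** len(s)

for s in ["H", "HF", "HFF", "HFFHF"]:
    print(s, p_sequence(s))

# A longer string can never be more probable than any of its prefixes:
assert p_sequence("HFFHF") < p_sequence("HFF") < p_sequence("HF") < p_sequence("H")
```

On this reading, anyone who ranks HFFHF above H is not merely misjudging a small difference: the probabilities differ by a factor of 16, which is why errors only show up when the crafted strings make the gap small.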

comment by pragmatist · 2013-08-23T10:24:28.811Z · score: 0 (0 votes) · LW · GW

There have been studies that asked people to make bets. Here's an example. It makes no difference -- subjects still arrive at fallacious conclusions. That study also goes some way towards answering your concern about ambiguity in the question. The conjunction fallacy is a pretty robust phenomenon.

comment by private_messaging · 2013-08-23T11:14:45.792Z · score: 2 (2 votes) · LW · GW

I've just read the example beyond its abstract. Typical psychology: the actual finding was that there were fewer errors with the bet (even though the expected winnings were very tiny and the sample sizes were small, so the difference was only marginally significant); also, approximately half of the questions were answered correctly, and the high prevalence of the "conjunction fallacy" was attained by counting at least one error over many questions.

comment by private_messaging · 2013-08-23T10:38:05.528Z · score: 2 (2 votes) · LW · GW

How is it a "robust phenomenon" if it is negated by using strings of larger length difference in the head-tail example or by asking people to answer in the N out of 100 format?

I am thinking that people have to learn reasoning to answer questions correctly, including questions about probability, for which the feedback they receive from the world is fairly noisy. Consequently they learn it fairly badly, or mislearn it altogether, because more detailed accounts are more frequently the correct ones in their "training dataset" (which consists of detailed correct accounts of actual facts alongside fuzzy speculations).

edit: Let's say, the notion that people are just generally not accounting for conjunction is sort of like Newtonian mechanics. In a hard science - physics - Newtonian mechanics was done for as a fundamental account of reality once conditions were found where it did not work. Didn't matter any how "robust" it was. In a soft science - psychology - an approximate notion persists in spite of this, as if it should be decided by some sort of game of tug between experiments in favour and against that notion. If we were doing physics like this, we would never have moved beyond Newtonian mechanics.

comment by pragmatist · 2013-08-23T11:21:24.648Z · score: 0 (0 votes) · LW · GW

Framing the problem in terms of frequencies mitigates a number of probabilistic fallacies, not just the conjunction fallacy. It also mitigates, for instance, base rate neglect. So whatever explanation you have for the difference between the probability and frequency framings shouldn't rely on peculiarities of the conjunction fallacy case. A plausible hypothesis is that presenting frequency information simply makes algorithmic calculation of the result easier, and so subjects are no longer reliant on fallible heuristics in order to arrive at the conclusion.

The claim of the heuristics and biases program is that the conjunction fallacy is a manifestation of the representativeness heuristic. One does not need to suppose that there is a misunderstanding about the word "probability" involved (if there is, how do you account for the betting experiments?). The difference in the frequency framing is not that it makes it clear what the experimenter means by "probability", it's that the ease of algorithmic reasoning in that case reduces reliance on the representativeness heuristic. Further evidence for this is that the fallacy is also mitigated if the question is framed in terms of single-case probabilities, but with a diagram clarifying the relationship between properties in the problem. If the effect were merely due to a misunderstanding about what is meant by "probability", why would there be a mitigation of the fallacy in this case? Does the diagram somehow make it clear what the experimenter means by "probability"?

In response to your Newtonian physics example, it's simply not true that scientists abandoned Newtonian mechanics as soon as they found conditions under which it appeared not to work. Rather, they tried to find alternative explanations that preserved Newtonian mechanics, such as positing the existence of Uranus to account for discrepancies in planetary orbits. It was only once there was a better theory available that Newtonian mechanics was abandoned. Is there currently a better account of probabilistic fallacies than that offered by the heuristics and biases program? And do you think that there is anything about the conjunction fallacy research that makes it impossible to fit the effect within the framework of the heuristics and biases program?

I'm not familiar with the effect of variable string length difference, and quick Googling isn't helping. If you could direct me to some research on this, I'd appreciate it.

comment by private_messaging · 2013-08-23T11:34:38.566Z · score: 0 (4 votes) · LW · GW

A plausible hypothesis is that presenting frequency information simply makes algorithmic calculation of the result easier, and so subjects are no longer reliant on fallible heuristics in order to arrive at the conclusion.

There's only room for making it easier when the word "probable" is not synonymous with "larger N out of 100". So I maintain that alternate understanding of the word "probable" (and perhaps also an invalid idea of what one should bet on) are relevant. edit: to clarify, I can easily imagine an alternate cultural context where "blerg" is always, universally, invariably, a shorthand for "N out of 100". In such context, asking about "N out of 100" or about "blerg" should produce nearly identical results.

Also, in your study, about half of the questions were answered correctly.

The claim of the heuristics and biases program is that the conjunction fallacy is a manifestation of the representativeness heuristic.

I guess that's fair enough, albeit its not clear how that works on Linda-like examples.

In my opinion its just that through their life people are exposed to a training dataset which consists of

  1. Detailed accounts of real events.

  2. Speculative guesses.

and (1) is much more commonly correct than (2) even though (1) is more conjunctive. So people get mis-trained through a biased training set. A very wide class of learning AIs would get mis-trained by this sort of thing too.

I'm not familiar with the effect of variable string length difference, and quick Googling isn't helping. If you could direct me to some research on this, I'd appreciate it.

The point is that you can't pull the representativeness trick with e.g. R vs RGGRRGRRRGG . All research I ever seen had strings with small % difference in their length. I am assuming that the research is strongly biased towards researching something un-obvious, while it is fairly obvious that R is more probable than RGGRRGRRRGG and frankly we do not expect to find anyone who thinks that RGGRRGRRRGG is more probable than R.

comment by pragmatist · 2013-08-23T12:05:42.966Z · score: 0 (4 votes) · LW · GW

There's only room for making it easier when the word "probable" is not synonymous with "larger N out of 100". So I maintain that alternate understanding of the word "probable" (and perhaps also an invalid idea of what one should bet on) are relevant.

Maybe a misunderstanding about the word is relevant, but it clearly isn't entirely responsible for the effect. Like I said, the conjunction fallacy is much less common if the structure of the question is made clear to the subject using a diagram (e.g. if it is made obvious that feminist bank tellers are a proper subset of bank tellers). It seems implausible that providing this extra information will change the subject's judgment about what the experimenter means by "probable".

I guess that's fair enough, albeit its not clear how that works on Linda-like examples.

The description given of Linda in the problem statement (outspoken philosophy major, social justice activist) is much more representative of feminist bank tellers than it is of bank tellers.

comment by private_messaging · 2013-08-23T15:34:41.855Z · score: 3 (5 votes) · LW · GW

Maybe a misunderstanding about the word is relevant, but it clearly isn't entirely responsible for the effect.

In the study you quoted, a bit less than half of the answers were wrong, in sharp contrast to the Linda example, where 90% of the answers were wrong. It implies that at least 40% of the failures were a result of misunderstanding. This only leaves 60% for fallacies. Of that 60%, some people have other misunderstandings and other errors of reasoning, and some are simply unintelligent (one person in ten is in the dumbest decile, with an IQ of 80 or less), leaving easily less than 50% for the actual conjunction fallacy.

It seems implausible that providing this extra information will change the subject's judgment about what the experimenter means by "probable".

Why so? If the word "probable" is fairly ill defined (as well as the whole concept of probability), then it will or will not acquire specific meaning depending on the context.

The description given of Linda in the problem statement (outspoken philosophy major, social justice activist) is much more representative of feminist bank tellers than it is of bank tellers.

Then the representativeness works in the opposite direction from what's commonly assumed of the dice example.

Speaking of which, "is" is sometimes used to describe traits for identification purposes, e.g. "in general, an alligator is shorter and less aggressive than a crocodile" is more correct than "in general, an alligator is shorter than a crocodile". If you were to compile traits for finding Linda, you'd pick the most descriptive answer. People know they need to do something with what they are told, they don't necessarily understand correctly what they need to do.

comment by [deleted] · 2013-08-23T07:38:54.596Z · score: 2 (2 votes) · LW · GW

Poor brain design.

Honestly, I could do way better if you gave me a millennium.

comment by linkhyrule5 · 2013-08-23T09:29:46.031Z · score: 3 (3 votes) · LW · GW

You know, at some point, whoever's still alive when that becomes not-a-joke needs to actually test this.

Because I'm just curious what a human-designed human would look like.

comment by Decius · 2013-08-17T04:05:04.216Z · score: 0 (0 votes) · LW · GW

How likely do you believe it is that there exists a human who is absolutely certain of something?

comment by Lumifer · 2013-08-16T15:09:23.701Z · score: 0 (0 votes) · LW · GW

Anyone who proclaims to be absolutely certain, I suspect, is in fact not.

Is this a testable assertion? How do you determine whether someone is, in fact, absolutely certain?

It's not unheard of for people to bet their lives on some belief of theirs.

comment by Randaly · 2013-08-16T15:22:19.702Z · score: 1 (1 votes) · LW · GW

It's not unheard of for people to bet their lives on some belief of theirs.

That doesn't show that they're absolutely certain; it just shows that the expected value of the payoff outweighs the chance of them dying.

The real issue with this claim is that people don't actually model everything using probabilities, nor do they actually use Bayesian belief updating. However, the closest analogue would be people who will not change their beliefs in literally any circumstances, which is clearly false. (Definitely false if you're considering, e.g. surgery or cosmic rays; almost certainly false if you only include hypotheticals like cult leaders disbanding the cult or personally attacking the individual.)

comment by AndHisHorse · 2013-08-16T19:06:58.742Z · score: 0 (0 votes) · LW · GW

Is someone absolutely certain if they say that they cannot imagine any circumstances under which they might change their beliefs (or, alternately, can imagine only circumstances which they are absolutely certain will not happen)? It would seem to be a better definition, as it treats probability (and certainty) as a thing in the mind, rather than outside it.

In this case, I would see no contradiction as declaring someone to be absolutely certain of their beliefs, though I would say (with non-absolute certainty) that they are incorrect. Someone who believes that the Earth is 6000 years old, for example, may not be swayed by any evidence short of the Christian god coming down and telling them otherwise, an event to which they may assign 0.0 probability (because they believe that it's impossible for their god to contradict himself, or something like that).

Further, I would exclude methods of changing someone's mind without using evidence (surgery or cosmic rays). I can't quite put it into words, but it seems like the fact that it isn't evidence and instead changes probabilities directly means that it doesn't so much affect beliefs as it replaces them.

comment by Protagoras · 2013-08-19T19:55:35.141Z · score: 2 (2 votes) · LW · GW

I cannot imagine circumstances under which I would come to believe that the Christian God exists. All of the evidence I can imagine encountering which could push me in that direction if I found it seems even better explained by various deceptive possibilities, e.g. that I'm a simulation or I've gone insane or what have you. But I suspect that there is some sequence of experience such that if I had it I would be convinced; it's just too complicated for me to work out in advance what it would be. Which perhaps means I can imagine it in an abstract, meta sort of way, just not in a concrete way? Am I certain that the Christian God doesn't exist? I admit that I'm not certain about that (heh!), which is part of the reason I'm curious about your test.

comment by RichardKennaway · 2013-08-19T21:00:59.211Z · score: 3 (3 votes) · LW · GW

If imagination fails, consult reality for inspiration. You could look into the conversion experiences of materialist, rationalist atheists. John C Wright, for example.

comment by Lumifer · 2013-08-19T20:10:59.127Z · score: 0 (2 votes) · LW · GW

So you're effectively saying that your prior is zero and will not be budged by ANY evidence.

Hmm... smells of heresy to me... :-D

comment by Randaly · 2013-08-16T19:56:42.521Z · score: 2 (2 votes) · LW · GW

Is someone absolutely certain if they say that they cannot imagine any circumstances under which they might change their beliefs (or, alternately, can imagine only circumstances which they are absolutely certain will not happen)?

Disagree. This would be a statement about their imagination, not about reality.

Also, people are not well calibrated on this sort of thing. People are especially poorly calibrated on this sort of thing in a social context, where others are considering their beliefs.

ETA: An example: While I haven't actually done this, I would expect that a significant fraction of religious people would reply to such a question by saying that they would never change their beliefs because of their absolute faith. I can't be bothered to do enough googling to find a specific interviewee about faith who then became an atheist, but I strongly suspect that some such people actually exist.

I can't quite put it into words, but it seems like the fact that it isn't evidence and instead changes probabilities directly means that it doesn't so much affect beliefs as it replaces them.

Yeah, fair enough.

comment by AndHisHorse · 2013-08-16T20:15:02.974Z · score: 0 (0 votes) · LW · GW

Disagree. This would be a statement about their imagination, not about reality.

You are correct. I am making my statements on the basis that probability is in the mind, and as such it is perfectly possible for someone to have a probability which is incorrect. I would distinguish between a belief which it is impossible to disprove, and one which someone believes it is impossible to disprove, and as "absolutely certain" seems to refer to a mental state, I would give it the definition of the latter.

comment by Randaly · 2013-08-16T20:42:09.433Z · score: 1 (1 votes) · LW · GW

(I suspect that we don't actually disagree about anything in reality. I further suspect that the phrase I used regarding imagination and reality was misleading; sorry, it's my standard response to thought experiments based on people's ability to imagine things.)

I'm not claiming that there is a difference between their stated probabilities and the actual, objective probabilities. I'm claiming that there is a difference between their stated probabilities and the probabilities that they actually hold. The relevant mental states are the implicit probabilities from their internal belief system; while words can be some evidence about this, I highly suspect, for reasons given above, that anybody who claims to be 100% confident of something is simply wrong in mapping their own internal beliefs, which they don't have explicit access to and aren't even stored as probabilities (?), over onto explicitly stated probabilities.

Suppose that somebody stated that they cannot imagine any circumstances under which they might change their beliefs. This is a statement about their ability to imagine situations; it is not a proof that no such situation could possibly exist in reality. The fact that it is not is demonstrated by my claim that there are people who did make that statement, but then actually encountered a situation that caused them to change their belief. Clearly, these people's statement that they were absolutely, 100% confident of their belief was incorrect.

comment by AndHisHorse · 2013-08-16T20:49:33.462Z · score: 1 (1 votes) · LW · GW

I would still say that while belief-altering experiences are certainly possible, even for people with stated absolute certainty, I am not convinced that they can imagine them occurring with nonzero probability. In fact, if I had absolute certainty about something, I would as a logical consequence be absolutely certain that any disproof of that belief could not occur.

However, it is also not unreasonable that someone does not believe what they profess to believe in some practically testable manner. For example, someone who states that they have absolute certainty that their deity will protect them from harm, but still declines to walk through a fire, would fall into such a category - even if they are not intentionally lying, on some level they are not absolutely certain.

I think that some of our disagreement arises from the fact that I, being relatively uneducated (for this particular community) about Bayesian networks, am not convinced that all human belief systems are isomorphic to one. This is, however, a fault in my own knowledge, and not a strong critique of the assertion.

comment by Lumifer · 2013-08-16T20:06:24.417Z · score: -2 (2 votes) · LW · GW

I would expect that most religious fundamentalists would reply to such a question by saying that they would never change their beliefs because of their absolute faith.

First, fundamentalism is a matter of theology, not of intensity of faith.

Second, what would these people do if their God appeared before them and flat out told them they're wrong? :-D

comment by Randaly · 2013-08-16T20:17:06.798Z · score: 1 (1 votes) · LW · GW

First, fundamentalism is a matter of theology, not of intensity of faith.

Fixed, thanks.

Second, what would these people do if their God appeared before them and flat out told them they're wrong? :-D

Their verbal response would be that this would be impossible.

(I agree that such a situation would likely lead to them actually changing their beliefs.)

comment by Lumifer · 2013-08-16T20:36:14.478Z · score: -1 (1 votes) · LW · GW

Their verbal response would be that this would be impossible.

At which point you can point out to them that God can do WTF He wants and is certainly not limited by ideas of pathetic mortals about what's impossible and what's not.

Oh, and step back, exploding heads can be messy :-)

comment by AndHisHorse · 2013-08-16T20:40:18.850Z · score: 2 (2 votes) · LW · GW

This is not the place to start dissecting theism, but would you be willing to concede the possible existence of people who would simply not be responsive to such arguments? Perhaps they might accuse you of lying and refuse to listen further, or refute you with some biblical verse, or even question your premises.

comment by Lumifer · 2013-08-16T21:05:09.348Z · score: 0 (2 votes) · LW · GW

would you be willing to concede the possible existence of people who would simply not be responsive to such arguments?

Of course. Stuffing fingers into your ears and going NA-NA-NA-NA-CAN'T-HEAR-YOU is a rather common debate tactic :-)

comment by Decius · 2013-08-17T04:09:47.918Z · score: 0 (0 votes) · LW · GW

Don't you observe people doing that to reality, rather than updating their beliefs?

comment by Lumifer · 2013-08-17T04:44:15.501Z · score: 0 (0 votes) · LW · GW

That too. Though reality, of course, has ways of making sure its point of view prevails :-)

comment by Decius · 2013-08-18T00:16:43.375Z · score: 0 (0 votes) · LW · GW

Reality has shown itself to be fairly ineffective in the short term (all of human history).

comment by Lumifer · 2013-08-19T18:04:36.270Z · score: -1 (1 votes) · LW · GW

8-0

In my experience reality is very very effective. In the long term AND in the short term.

comment by Decius · 2013-08-20T15:07:16.920Z · score: 0 (0 votes) · LW · GW

Counterexamples: Religion (Essentially all of them that make claims about reality). Almost every macroeconomic theory. The War on Drugs. Abstinence-based sex education. Political positions too numerous and controversial to call out.

comment by Lumifer · 2013-08-20T16:04:14.793Z · score: 0 (0 votes) · LW · GW

You are confused. I am not saying that false claims about reality cannot persist -- I am saying that reality always wins.

When you die you don't actually go to heaven -- that's Reality 1, Religion 0.

Besides, you need to look a bit more carefully at the motivations of the people involved. The goal of writing macroeconomic papers is not to reflect reality well, it is to produce publications in pursuit of tenure. The goal of the War on Drugs is not to stop drug use, it is to control the population and extract wealth. The goal of abstinence-based sex education is not to reduce pregnancy rates, it is to make certain people feel good about themselves.

comment by Decius · 2013-08-22T16:15:34.171Z · score: -1 (1 votes) · LW · GW

Wait, isn't that pretty much tautological, given the definition of 'reality'?

comment by shminux · 2013-08-22T19:04:03.532Z · score: -1 (1 votes) · LW · GW

What's your definition of reality?

comment by Decius · 2013-08-23T06:56:42.429Z · score: 1 (1 votes) · LW · GW

I can't give a definition that is both fully general and still useful, but reality is what determines whether a belief is true or false.

I thought you were saying that reality has a pattern of convincing people of true beliefs, not that reality is indifferent to belief.

comment by Lumifer · 2013-08-27T17:18:06.723Z · score: 0 (0 votes) · LW · GW

I thought you were saying that reality has a pattern of convincing people of true beliefs

You misunderstood. Reality has the feature of making people face the true consequences of their actions regardless of their beliefs. That's why reality always wins.

comment by Decius · 2013-08-30T03:39:17.223Z · score: 0 (0 votes) · LW · GW

Most of my definition of 'true consequences' matches my definition of 'reality'.

comment by AndHisHorse · 2013-08-27T17:47:22.719Z · score: 0 (0 votes) · LW · GW

Sort of. Particularly in the case of belief in an afterlife, there isn't a person still around to face the true consequences of their actions. And even in less extreme examples, people can still convince themselves that the true consequences of their actions are different - or have a different meaning - from what they really are.

comment by Eugine_Nier · 2013-08-28T06:28:12.968Z · score: -2 (2 votes) · LW · GW

And even in less extreme examples, people can still convince themselves that the true consequences of their actions are different - or have a different meaning - from what they really are.

In those cases reality can take more drastic measures.

Edit: Here is the quote I should have linked to.

comment by AndHisHorse · 2013-08-28T11:53:53.834Z · score: 2 (2 votes) · LW · GW

Believing that 2 + 2 = 5 will most likely cause one to fail to build a successful airplane, but that does not prohibit one from believing that one's own arithmetic is perfect, and that the incompetence of others, the impossibility of flight, or the condemnation of an airplane-hating god is responsible for the failure.

comment by Eugine_Nier · 2013-08-30T05:27:54.808Z · score: -2 (2 votes) · LW · GW

See my edit. Basically, the enemy airplanes flying overhead and dropping bombs should convince you that flight is indeed possible. Also, any remaining desire you have to invent excuses will go away once one of the bombs explodes close enough to you.

comment by metastable · 2013-08-20T17:41:28.621Z · score: -1 (1 votes) · LW · GW

What's the goal of rationalism as a movement?

comment by Lumifer · 2013-08-20T18:13:39.458Z · score: 0 (0 votes) · LW · GW

No idea. I don't even think rationalism is a movement (in the usual sociological meaning). Ask some of the founders.

comment by Decius · 2013-08-22T16:13:28.055Z · score: 0 (0 votes) · LW · GW

The founders don't get to decide whether or not it is a movement, or what goal it does or doesn't have. It turns out that many founders in this case are also influential agents, but the influential agents I've talked to have expressed that they expect the world to be a better place if people generally make better decisions (in cases where objectively better decision-making is a meaningful concept).

comment by Eugine_Nier · 2013-08-21T02:44:49.386Z · score: -3 (3 votes) · LW · GW

The War on Drugs. Abstinence-based sex education. Political positions too numerous and controversial to call out.

Careful, those are the kind of political claims where there is currently so much mind-kill that I wouldn't trust much of the "evidence" you're using to declare them obviously false.

The general claim is one where I think it would be better to test it on historical examples.

comment by Decius · 2013-08-22T16:06:00.634Z · score: -1 (1 votes) · LW · GW

So, because Copernicus was eventually vindicated, reality prevails in general? Only a small subset of humanity believes in science.

comment by Randaly · 2013-08-16T20:47:12.414Z · score: 1 (1 votes) · LW · GW

At which point you can point out to them that God can do WTF He wants

This is not an accurate representation of mainstream theology. Most theologians believe, for example, that it is impossible for God to do evil. See William Lane Craig's commentary.

comment by Lumifer · 2013-08-16T21:03:17.078Z · score: -1 (1 votes) · LW · GW

This is not an accurate representation of mainstream theology.

First, you mean Christian theology; there are a lot more theologies around.

Second, I don't know what is "mainstream" theology -- is it the official position of the Roman Catholic Church? Some common elements in Protestant theology? Does anyone care about Orthodox Christians?

Third, the question of limits on Judeo-Christian God is a very very old theological issue which has not been resolved to everyone's satisfaction and no resolution is expected.

Fourth, William Lane Craig basically evades the problem by defining good as "what God is". God can still do anything He wants and whatever He does automatically gets defined as "good".

comment by shminux · 2013-08-19T18:57:55.924Z · score: 0 (0 votes) · LW · GW

Second, what would these people do if their God appeared before them and flat out told them they're wrong?

Clearly they would consider this entity a false God/Satan.

comment by Lumifer · 2013-08-19T19:03:34.182Z · score: 0 (0 votes) · LW · GW

This is starting to veer into free-will territory, but I don't think God would have much problem convincing these people that He is the Real Deal. Wouldn't be much of a god otherwise :-)

comment by shminux · 2013-08-19T20:08:02.047Z · score: 0 (0 votes) · LW · GW

I don't think God would have much problem convincing these people that He is the Real Deal

That's vacuously true, of course. Which makes your original question meaningless as stated.

comment by Lumifer · 2013-08-19T20:13:56.894Z · score: 0 (0 votes) · LW · GW

It wasn't so much meaningless as it was rhetorical.

comment by Lumifer · 2013-08-16T19:10:45.126Z · score: 0 (0 votes) · LW · GW

I would argue that this definition of absolute certainty is completely useless as nothing could possibly satisfy it. It results in an empty set.

If you "cannot imagine under any circumstances", your imagination is deficient.

comment by AndHisHorse · 2013-08-16T19:15:52.379Z · score: 1 (1 votes) · LW · GW

I am not arguing that it is not an empty set. Consider it akin to the intersection of the set of natural numbers and the set of infinities; the fact that it is the empty set is meaningful. It means that by following the rules of simple, additive arithmetic, one cannot reach infinity, and if one does reach infinity, that is a good sign of an error somewhere in the calculation.

Similarly, one should not be absolutely certain when updating from finite evidence. Barring omniscience (infinite evidence), one cannot become absolutely/infinitely certain.

What definition of absolute certainty would you propose?

comment by Lumifer · 2013-08-16T19:29:24.436Z · score: -2 (2 votes) · LW · GW

I am not arguing that it is not an empty set.

So you are proposing a definition that nothing can satisfy. That doesn't seem like a useful activity. If you want to say that no belief can stand up to the powers of imagination, sure, I'll agree with you. However if we want to talk about what people call "absolute certainty" it would be nice to have some agreed-on terms to use in discussing it. Saying "oh, there just ain't no such animal" doesn't lead anywhere.

As to what I propose, I believe that definitions serve a purpose and the same thing can be defined differently in different contexts. You want a definition of "absolute certainty" for which purpose and in which context?

comment by AndHisHorse · 2013-08-16T20:11:44.625Z · score: 1 (1 votes) · LW · GW

You are correct, I have contradicted myself. I failed to mention the possibility of people who are not reasoning perfectly, and in fact are not close, to the point where they can mistakenly arrive at absolute certainty. I am not arguing that their certainty is fake - it is a mental state, after all - but rather that it cannot be reached using proper rational thought.

What you have pointed out to me is that absolute certainty is not, in fact, a useful thing. It is the result of a mistake in the reasoning process. An inept mathematician can add together a large but finite series of natural numbers, write down "infinity" after the equals sign, and thereafter go about believing that the sum of a certain series is infinite.

The sum is not, in fact, infinite; no finite set of finite things can add up to an infinity, just as no finite set of finite pieces of evidence can produce absolute, infinitely strong certainty. But if we use some process other than the "correct" one, as the mathematician's brain has to somehow output "infinity" from the finite inputs it has been given, we can generate absolute certainty from finite evidence - it simply isn't correct. It doesn't correspond to something which is either impossible or inevitable in the real world, just as the inept mathematician's infinity does not correspond to a real infinity. Rather, they both correspond to beliefs about the real world.

While I do not believe that there are any rationally acquired beliefs which can stand up to the powers of imagination (though I am not absolutely certain of this belief), I do believe that irrational beliefs can. See my above description of the hypothetical young-earther; they may be able to conceive of a circumstance which would falsify their belief (i.e. their god telling them that it isn't so), but they cannot conceive of that circumstance actually occurring (they are absolutely certain that their god does not contradict himself, which may have its roots in other absolutely certain beliefs or may be simply taken as a given).

comment by Lumifer · 2013-08-16T20:52:52.392Z · score: 0 (0 votes) · LW · GW

the possibility of people who are not reasoning perfectly

:-) As in, like, every single human being...

certainty ... cannot be reached using proper rational thought

Yep. Provided you limit "proper rational thought" to Bayesian updating of probabilities this is correct. Well, as long as your prior isn't 1, that is.

I do believe that irrational beliefs can

I'd say that if you don't require internal consistency from your beliefs then yes, you can have a subjectively certain belief which nothing can shake. If you're not bothered by contradictions, well then, doublethink is like Barbie -- everything is possible with it.

comment by linkhyrule5 · 2013-08-17T05:08:47.262Z · score: 0 (0 votes) · LW · GW

Well, yes.

That is the point.

Nothing is absolutely certain.

comment by Decius · 2013-08-17T04:08:04.048Z · score: 0 (0 votes) · LW · GW

Why does a deficient imagination disqualify a brain from being certain?

comment by Lumifer · 2013-08-17T04:45:43.072Z · score: 0 (0 votes) · LW · GW

Vice versa. Deficient imagination allows a brain to be certain.

comment by Decius · 2013-08-18T00:18:31.712Z · score: 0 (0 votes) · LW · GW

... ergo there exist human brains that are certain.

if people exist that are absolutely certain of something, I want to believe that they exist.

comment by linkhyrule5 · 2013-08-17T05:08:15.336Z · score: 0 (0 votes) · LW · GW

So... a brain is allowed to be certain because it can't tell it's wrong?

comment by Document · 2013-08-16T16:06:43.815Z · score: 0 (0 votes) · LW · GW

cult leaders disbanding the cult

Tangent: Does that work?

comment by Lumifer · 2013-08-16T15:26:56.826Z · score: -1 (3 votes) · LW · GW

the closest analogue would be people who will not change their beliefs in literally any circumstances

Nope. "I'm certain that X is true now" is different from "I am certain that X is true and will be true forever and ever".

I am absolutely certain today is Friday. Ask me tomorrow whether my belief has changed.

comment by Randaly · 2013-08-16T18:19:46.852Z · score: 1 (1 votes) · LW · GW

In fact, unless you're insane, you probably already believe that tomorrow will not be Friday!

(That belief is underspecified - "today" is a notion that varies independently; it doesn't point to a specific date. Today you believe that August 16th, 2013 is a Friday; tomorrow, you will presumably continue to believe that August 16th, 2013 was a Friday.)

comment by Lumifer · 2013-08-16T18:58:21.525Z · score: 0 (0 votes) · LW · GW

That belief is underspecified

Not exactly that but yes, there is the reference issue which makes this example less than totally convincing.

The main point still stands, though -- certainty of a belief and its time-invariance are different things.

comment by AndHisHorse · 2013-08-16T18:49:02.205Z · score: 0 (0 votes) · LW · GW

I very much doubt that you are absolutely certain. There are a number of outlandish but not impossible worlds in which you could believe that it is Friday, yet it might not be Friday; something akin to the world of The Truman Show comes to mind.

Unless you believe that all such alternatives are impossible, in which case you may be absolutely certain, but incorrectly so.

comment by Decius · 2013-08-18T00:42:18.211Z · score: 0 (0 votes) · LW · GW

I don't have to believe that the alternatives are impossible; I just have to be certain that the alternatives are not exemplified.

comment by Lumifer · 2013-08-16T19:00:36.506Z · score: 0 (0 votes) · LW · GW

I very much doubt that you are absolutely certain.

Define "absolute certainty".

In the brain-in-the-vat scenario which is not impossible I cannot be certain of anything at all. So what?

comment by linkhyrule5 · 2013-08-16T19:09:21.121Z · score: 1 (1 votes) · LW · GW

So you're not absolutely certain. The probability you assign to "Today is Friday" is, oh, nine nines, not 1.

comment by Lumifer · 2013-08-16T19:35:59.823Z · score: -1 (1 votes) · LW · GW

Nope. I assign it the probability of 1.

On the other hand, you think I'm mistaken about that.

On the third tentacle I think you are mistaken because, among other things, my mind does not assign probabilities like 0.999999999 -- it's not capable of such granularity. My wetware rounds such numbers and so assigns the probability of 1 to the statement that today is Friday.

comment by linkhyrule5 · 2013-08-16T19:40:37.339Z · score: 3 (3 votes) · LW · GW

So if you went in to work and nobody was there, and your computer says it's Saturday, and your watch says Saturday, and the next thirty people you ask say it's Saturday... you would still believe it's Friday?

If you come to think it's Saturday after any amount of evidence, having assigned probability 1 to the statement "Today is Friday," then you can't be doing anything even vaguely rational - no amount of Bayesian updating will allow you to update away from probability 1.

If you ever assign something probability 1, you can never be rationally convinced of its falsehood.
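A quick numerical sketch of this point (the likelihood numbers here are made up purely for illustration):

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# Start from probability 1 that today is Friday, then observe thirty
# pieces of evidence, each 99x more likely if it's actually Saturday.
p = 1.0
for _ in range(30):
    p = posterior(p, 0.01, 0.99)
print(p)  # 1.0 -- the (1 - prior) term is zero, so no evidence moves it
```

Any prior short of 1 (even 0.99) gets dragged down immediately by the same evidence; only an exact 1 is immovable.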

comment by Lumifer · 2013-08-16T19:58:50.253Z · score: -3 (3 votes) · LW · GW

If you ever assign something probability 1, you can never be rationally convinced of its falsehood.

That's not true. There are ways to change your mind other than through Bayesian updating.

comment by linkhyrule5 · 2013-08-16T23:14:51.285Z · score: 1 (1 votes) · LW · GW

Sure. But by definition they are irrational kludges made by human brains.

Bayesian updating is a theorem of probability: it is literally the formal definition of "rationally changing your mind." If you're changing your mind through something that isn't Bayesian, you will get the right answer iff your method gives the same result as the Bayesian one; otherwise you're just wrong.

comment by Decius · 2013-08-17T04:04:12.982Z · score: 0 (0 votes) · LW · GW

But by definition they are irrational kludges made by human brains.

The original point was that human brains are not all Bayesian agents. (Specifically, that they could be completely certain of something)

comment by linkhyrule5 · 2013-08-17T04:19:45.812Z · score: 0 (0 votes) · LW · GW

... Okay?

Okay, so, this looks like a case of arguing over semantics.

What I am saying is: "You can never correctly give probability 1 to something, and changing your mind in a non-Bayesian manner is simply incorrect. Assuming you endeavor to be /cough/ Less Wrong, you should force your System 2 to abide by these rules."

What I think Lumifer is saying is, "Yes, but you're never going to succeed because human brains are crazy kludges in the first place."

In which case we have no disagreement, though I would note that I intend to do as well as I can.

comment by Decius · 2013-08-18T00:13:58.104Z · score: 0 (0 votes) · LW · GW

I wasn't restricting the domain to the brains of people who intrinsically value being rational agents.

comment by Lumifer · 2013-08-17T04:40:04.661Z · score: 0 (0 votes) · LW · GW

What I think Lumifer is saying is, "Yes, but you're never going to succeed because human brains are crazy kludges in the first place."

I am sorry, I must have been unclear. I'm not staying "yes, but", I'm saying "no, I disagree".

I disagree that "you can never correctly give probability 1 to something". To avoid silly debates over 1/3^^^3 chances I'd state my position as "you can correctly assign a probability that is indistinguishable from 1 to something".

I disagree that "changing your mind in a non-Bayesian manner is simply incorrect". That looks to me like an overbroad claim that's false on its face. The human mind is rich and multifaceted; trying to limit it to performing a trivial statistical calculation doesn't seem reasonable to me.

comment by somervta · 2013-08-17T04:59:45.077Z · score: 2 (2 votes) · LW · GW

I think the claim is that, whatever method you use, it should approximate the answer the Bayesian method would use (which is optimal, but computationally infeasible)

comment by linkhyrule5 · 2013-08-17T05:06:32.571Z · score: 0 (0 votes) · LW · GW

The thing is, from a probabilistic standpoint, one is essentially infinity - it takes an infinite number of bits of evidence to get probability 1 from any finite prior.
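On the log-odds scale this is easy to see; a small sketch (the step sizes are chosen arbitrarily):

```python
def update_bits(p, bits):
    """Shift probability p by `bits` bits of evidence on the log-odds scale."""
    odds = (p / (1 - p)) * 2.0 ** bits
    return odds / (1 + odds)

p = 0.5  # log-odds of zero
for _ in range(5):
    p = update_bits(p, 10)  # ten more bits of evidence each time
# After 50 bits, p = 2**50 / (2**50 + 1): each update adds more nines,
# but never lands exactly on 1 -- probability 1 sits at +infinity
# on the log-odds scale, hence "infinite bits of evidence".
```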

And the human mind is a horrific repurposed adaptation not at all intended to do what we're doing with it when we try to be rational. I fail to see why indulging its biases is at all helpful.

comment by Lumifer · 2013-08-19T17:50:01.464Z · score: 1 (1 votes) · LW · GW

the human mind is a horrific repurposed adaptation not at all intended to do what we're doing with it when we try to be rational.

Given that rationality is often defined here as winning, it seems to me you think natural selection works in the opposite direction.

comment by linkhyrule5 · 2013-08-19T18:46:32.703Z · score: 0 (0 votes) · LW · GW

... Um. No?

I might have been a little hyperbolic there - the brain is meant to model the world - but...

Okay, look, have you read the Sequences on evolution? Because Eliezer makes the point much better than I can as of yet.

comment by Lumifer · 2013-08-19T19:01:19.538Z · score: 0 (0 votes) · LW · GW

Regardless of EY, what is your point? What are you trying to express?

comment by linkhyrule5 · 2013-08-19T19:09:24.825Z · score: 0 (0 votes) · LW · GW

*sigh*

My point, as I stated the first time, is that evolution is dumb, and does not necessarily design optimal systems. See: optic nerve connecting to the front of the retina. This is doubly true of very important, very complex systems like the brain, where everything has to be laid down layer by layer and changing some system after the fact might make the whole thing come crumbling down. The brain is simply not the optimal processing engine given the resources of the human body: it's Azathoth's "best guess."

So I see no reason to pander to its biases when I can use mathematics, which I trust infinitely more, to prove that there is a rational way to make decisions.

comment by Lumifer · 2013-08-19T19:21:04.198Z · score: 1 (1 votes) · LW · GW

The brain is simply not the optimal processing engine given the resources of the human body

How do you define optimality?

So I see no reason to pander to its biases when I can use mathematics

LOL.

Sorry :-/

So, since you seem to be completely convinced of the advantage of the mathematical "optimal processing" over the usual biased and messy thinking that humans normally do -- could you, um, demonstrate this advantage? For example financial markets provide rapid feedback and excellent incentives. It shouldn't be hard to exploit some cognitive bias or behavioral inefficiency on the part of investors and/or traders, should it? After all their brains are so horribly inefficient, to the point of being crippled, really...

comment by linkhyrule5 · 2013-08-19T19:36:27.099Z · score: 0 (0 votes) · LW · GW

Actually, no, I would expect that investors and/or traders would be more rational than the average for that very reason. The brain can be trained, or I wouldn't be here; that doesn't say much about its default configuration, though.

As far as biases - how about the existence of religion? The fact that people still deny evolution? The fact that people buy lottery tickets?

And as far as optimality goes - it's an open question, I don't know. I do, however, believe that the brain is not optimal, because it's a very complex system that hasn't had much time to be refined.

comment by Lumifer · 2013-08-19T19:52:24.225Z · score: 0 (0 votes) · LW · GW

investors and/or traders would be more rational than the average

That's not good enough -- you can "use mathematics" and that gives you THE optimal result, the very best possible -- right? As such, anything not the best possible is inferior, even if it's better than the average. So by being purely rational you should still be able to extract money out of the market, taking it from investors who are merely better than the not-too-impressive average.

As to optimality, unless you define it *somehow* the phrase "brain is not optimal" has no meaning.

comment by linkhyrule5 · 2013-08-19T22:04:31.698Z · score: 0 (0 votes) · LW · GW

That is true.

I am not perfectly rational. I do not have access to all the information I have. That is why I am here: to be Less Wrong.

Now, I can attempt to use Bayes' Theorem on my own lack-of-knowledge, and predict probabilities of probabilities - calibrate myself, and learn to notice when I'm missing information - but that adds more uncertainty; my performance drifts back towards average.

As to optimality, unless you define it somehow the phrase "brain is not optimal" has no meaning.

Not at all. I can define a series of metrics - energy consumption and "win" ratio being the most obvious - and define an n-dimensional function on those metrics, and then prove that, given bounds in all directions, a maximum exists so long as my function follows certain criteria (mostly continuity).

I can note that given the space of possible functions and metrics, the chances of my brain being optimal by any of them is extremely low. I can't really say much about brain-optimality mostly because I don't understand enough biology to understand how much energy draw is too much, and the like; it's trivial to show that our brain is not an optimal mind under unbounded resources.
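The existence claim above is essentially the extreme value theorem; a minimal statement (the notation is mine, not the commenter's):

```latex
% Extreme value theorem (sketch): a continuous score function over a
% bounded, closed set of metrics attains a maximum somewhere on that set.
\[
  K \subset \mathbb{R}^{n} \text{ compact},\quad
  f : K \to \mathbb{R} \text{ continuous}
  \;\Longrightarrow\;
  \exists\, x^{*} \in K \;:\; f(x^{*}) \geq f(x) \quad \forall\, x \in K.
\]
```

Note this only guarantees that *a* maximum exists, not a unique one, which is the objection raised in the reply below.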

Which, in turn, is really what we care about here - energy is abundant, healthcare is much better than in the ancestral environment, so if it turns out our health takes a hit because of optimizing for intelligence somehow we can afford it.

comment by Lumifer · 2013-08-20T00:58:49.968Z · score: 0 (0 votes) · LW · GW

I can define a series of metrics - energy consumption and "win" ratio being the most obvious - and define an n-dimensional function on those metrics, and then prove that given bounds in all directions that a maximum exists

I don't think you can guarantee ONE maximum. But in any case, the vastness of the space of all n-dimensional functions makes the argument unpersuasive. Let's get a bit closer to the common, garden-variety reality and ask a simpler question. In which directions do you think human brain should change/evolve/mutate to become more optimal? And in these directions, is the further the better or there is a point beyond which one should not go?

so if it turns out our health takes a hit because of optimizing for intelligence somehow we can afford it

Um, I have strong doubts about that. Your body affects your mind greatly (not to mention your quality of life).

comment by Lumifer · 2013-08-16T23:51:41.223Z · score: -1 (1 votes) · LW · GW

Bayesian updating is a theorem of probability

Yes.

it is literally the formal definition of "rationally changing your mind."

No, unless you define "rationally changing your mind" this way in which case it's just a circle.

If you're changing your mind through something that isn't Bayesian, you will get the right answer iff your method gives the same result as the Bayesian one; otherwise you're just wrong.

Nope.

The ultimate criterion of whether the answer is the right one is real life.

comment by Randaly · 2013-08-16T20:10:04.379Z · score: 1 (1 votes) · LW · GW

On the third tentacle I think you are mistaken because, among other things, my mind does not assign probabilities like 0.999999999 -- it's not capable of such granularity.

While I'm not certain, I'm fairly confident that most people's minds don't assign probabilities at all. At least when this thread began, it was about trying to infer implicit probabilities based on how people update their beliefs; if there is any situation that would lead you to conclude that it's not Friday, then that would suffice to prove that your mind's internal probability that it is Friday is not 1.

Most of the time, when people talk about probabilities or state the probabilities they assign to something, they're talking about loose, verbal estimates, which are created by their conscious minds. There are various techniques for trying to make these match up to the evidence the person has, but in the end they're still just basically guesses at what's going on in your subconscious. Your conscious mind is capable of assigning probabilities like 0.999999999.
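As a toy sketch (the numbers are illustrative, not from the thread), repeated independent evidence is what drives a Bayesian posterior to odds of the kind being discussed, even though no single update feels that granular:

```python
def update(prior, likelihood_ratio):
    """One Bayesian update in odds form: posterior odds = prior odds * LR."""
    odds = prior / (1 - prior)
    odds *= likelihood_ratio
    return odds / (1 + odds)

# Start at 50:50 on "today is Friday", then observe ten independent
# pieces of evidence, each 10x more likely if it is in fact Friday.
p = 0.5
for _ in range(10):
    p = update(p, 10.0)

print(p)  # posterior is within 1e-9 of certainty
```

The point is that a consciously stated "0.999999999" is a summary of many accumulated updates, not a quantity the brain stores directly.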

comment by AndHisHorse · 2013-08-16T19:10:48.094Z · score: 0 (0 votes) · LW · GW

Taking a (modified) page from Randaly's book, I would define absolute certainty as "so certain that one cannot conceive of any possible evidence which might convince one that the belief in question is false". Since you can conceive of the brain-in-the-vat scenario and believe that it is not impossible, I would say that you cannot be absolutely certain of anything, including the axioms and logic of the world you know (even the rejection of absolute certainty).

comment by shminux · 2013-08-21T19:41:23.074Z · score: 7 (9 votes) · LW · GW

A luxury, once sampled, becomes a necessity. Pace yourself.

Andrew Tobias, My Vast Fortune

comment by cody-bryce · 2013-08-02T22:31:02.630Z · score: 7 (13 votes) · LW · GW

There are no happy endings.
Endings are the saddest part,
So just give me a happy middle
And a very happy start.

-Shel Silverstein

comment by MixedNuts · 2013-08-04T17:31:52.821Z · score: 9 (11 votes) · LW · GW

But but peak/end rule!

comment by Document · 2013-08-03T02:22:15.961Z · score: 0 (2 votes) · LW · GW

X will never reach [arbitrary standard], so let's not try to improve X.

comment by AndHisHorse · 2013-08-03T02:28:25.545Z · score: 5 (5 votes) · LW · GW

I think the point is not that endings are generally and extrinsically sad, but rather that by definition, an ending is a thing which is sad, if we take the existence of such a thing to be good. (The ending of a bad thing, for example, is an exception, though generally because it allows for the existence of good things). The response, then, would not to be to try to improve endings, but rather to try to do away with them (and, barring that, improve the extrinsic qualities of the non-ending parts).

comment by cody-bryce · 2013-08-03T04:49:14.457Z · score: -3 (5 votes) · LW · GW

.

comment by cody-bryce · 2013-08-02T22:29:32.926Z · score: 7 (25 votes) · LW · GW

Why spend a dollar on a bookmark? ... Why not use the dollar as a bookmark?

-Steven Spielberg

comment by Qiaochu_Yuan · 2013-08-03T02:11:31.898Z · score: 16 (20 votes) · LW · GW

Dollars are floppy. It's nice to have a relatively rigid bookmark. I've used tissues and such as bookmarks in the past but they're unsatisfactory. Of course, that was back when I still read books in dead tree format.

comment by [deleted] · 2013-08-04T15:51:30.350Z · score: 12 (14 votes) · LW · GW

I'm reminded of a picture I saw on Facebook of a doorstop still in its original packaging used as a doorstop.

comment by CronoDAS · 2013-08-03T02:28:08.602Z · score: 11 (13 votes) · LW · GW

My bookmark is prettier than the dollar.

comment by RolfAndreassen · 2013-08-05T15:07:41.569Z · score: 1 (1 votes) · LW · GW

But when it's being used, you don't see it!

comment by James_K · 2013-08-03T06:50:59.077Z · score: 8 (10 votes) · LW · GW

My bookmark is made of two pieces of fridge-magnet material. It can be closed around a few pages and the magnetism holds it in place, preventing it from falling out.

Plus dollars in my country are exclusively coins, the smallest note is $5.

comment by Document · 2013-08-03T02:18:21.990Z · score: 7 (7 votes) · LW · GW

exposure to objects common to the domain of business (e.g., boardroom tables and briefcases) increased the cognitive accessibility of the construct of competition (Study 1), the likelihood that an ambiguous social interaction would be perceived as less cooperative (Study 2), and the amount of money that participants proposed to retain for themselves in the “Ultimatum Game” (Studies 3 and 4).

-Abstract, Material priming: The influence of mundane physical objects on situational construal and competitive behavioral choice (via Yvain)

comment by Bugmaster · 2013-08-05T03:52:40.918Z · score: 3 (9 votes) · LW · GW

The answer may very well be, "because I find this bookmark that I bought at a dollar store a lot more aesthetically pleasing than the raw dollar bill".

You may as well ask, "Why spend $20 on a book ? Why not just save the $20 ?"

comment by Decius · 2013-08-07T17:58:26.766Z · score: 0 (2 votes) · LW · GW

I get all kinds of entertainment out of reading a $20 bill.

comment by Document · 2013-08-06T03:18:27.105Z · score: -2 (4 votes) · LW · GW

You may as well ask, "Why spend $20 on a book ? Why not just save the $20 ?"

Arr.

comment by cody-bryce · 2013-08-03T16:47:47.436Z · score: 3 (7 votes) · LW · GW

It would seem that most of the responders are hopelessly literal....

comment by Jiro · 2013-08-03T17:01:18.988Z · score: 6 (10 votes) · LW · GW

I find it hard to come up with a deeper meaning for the original statement, so yeah.

Besides, it's not hard to come up with a deeper meaning behind what the responders are saying; in pointing out that an object specifically designed as a bookmark makes a better bookmark than a dollar bill, they're making a statement about more than just dollar bills and bookmarks, but about specialization in general.

comment by Document · 2013-08-03T17:47:47.433Z · score: 4 (6 votes) · LW · GW

I find it hard to come up with a deeper meaning for the original statement

"We don't automatically reflect on most things we do, even when spending money. Even lifelong practices can be shown as absurd with a moment's consideration from the right angle. In fact, we're so irrational that we'll pay a dollar for a bookmark!"

comment by Said Achmiz (SaidAchmiz) · 2013-08-04T01:24:15.241Z · score: 8 (8 votes) · LW · GW

A decision with an aesthetic benefit is not irrational. You are misusing "irrational".

(Or was this sarcasm?)

comment by Document · 2013-08-05T01:18:38.053Z · score: -1 (1 votes) · LW · GW

Reworded so people don't get caught up in that particular phrasing. (Also, please read the comment tree and note that I'm just trying to answer Jiro's implied question.)

comment by gothgirl420666 · 2013-08-04T22:43:34.912Z · score: 3 (5 votes) · LW · GW

I don't see why everyone is disagreeing with you. I definitely notice that people have a tendency to buy things labeled for some sort of purpose, where if they thought for a few minutes they could find a way to fulfill that same purpose without spending money. Unfortunately, I can't think of any examples off the top of my head.

comment by earthian · 2013-08-05T14:01:38.034Z · score: 1 (1 votes) · LW · GW

While I agree that people often make decisions without thinking them out, I think you are underestimating aesthetics. Aesthetics have psychological effects, and people often find better design structure aesthetically pleasing.


comment by MugaSofer · 2013-08-04T14:47:34.659Z · score: 1 (1 votes) · LW · GW

That's clearly the intent - except maybe for that last bit - but it's kinda a poor example, I have to admit.

comment by wedrifid · 2013-08-04T08:57:29.704Z · score: 2 (10 votes) · LW · GW

It would seem that most of the responders are hopelessly literal....

Your quote is both literally and connotatively poor. If Spielberg had asked "Why spend two dollars on a bookmark? ... Why not use a dollar as a bookmark?" then there would at least have been some moral along the lines of efficient practicality. Even then it would be borderline.

comment by Desrtopa · 2013-08-05T03:35:57.680Z · score: 5 (7 votes) · LW · GW

Your quote is both literally and connotatively poor. If Spielberg had asked "Why spend two dollars on a bookmark? ... Why not use a dollar as a bookmark?" then there would at least have been some moral along the lines of efficient practicality.

A dollar is much more fungible than a bookmark. After you're done reading your book, you can not only use the dollar to hold your place in other books, you can spend it on other things.

comment by wedrifid · 2013-08-05T04:18:01.831Z · score: 0 (0 votes) · LW · GW

A dollar is much more fungible than a bookmark. After you're done reading your book, you can not only use the dollar to hold your place in other books, you can spend it on other things.

It is indeed a considerably more fungible one dollar.

comment by [deleted] · 2013-08-04T15:47:33.869Z · score: 1 (1 votes) · LW · GW

It takes time and effort (admittedly not much of it, but usually even little of it makes a difference psychologically) to spend $1 on a bookmark. (I would have phrased it as “Why bother spending ...”.)

comment by wedrifid · 2013-08-03T02:55:37.157Z · score: 3 (13 votes) · LW · GW

Why spend a dollar on a bookmark? ... Why not use the dollar as a bookmark?

It will fall out. Apart from that, money isn't particularly clean and (especially if considering US currency) not particularly pretty either. I expect people to find a bookmark far more aesthetically pleasing than a note.

How is this a rationality quote? It is rationality-neutral at best.

comment by cody-bryce · 2013-08-04T03:13:51.708Z · score: 8 (10 votes) · LW · GW

"Because the dollar is dirty" is one of those pained, stretched explanations people come up with to explain why they do what they do, not the actual reason (even in some small part) the bookmark was invented and became popular.

comment by wedrifid · 2013-08-04T04:30:43.506Z · score: 0 (4 votes) · LW · GW

"Because the dollar is dirty" is one of those pained, stretched explanations people come up with to explain why they do what they do, not the actual reason (even in some small part) the bookmark was invented and became popular.

The question wasn't "Why was the bookmark invented?". If it was, I might have, for example, tried to determine the first time someone used a bookmark (or when it became popular). Then I could have told you precisely how many dollars in present value that dollar would have been worth. That is, moving the goalposts in this way has made your quote worse, not better.

not the actual reason (even in some small part)

Not even in some small part? That's absurd. Can you not empathise in even a small part with the aesthetic aversion many people have to contaminating things with used currency?

comment by snafoo · 2013-08-09T05:11:04.710Z · score: -2 (4 votes) · LW · GW

Can you not empathise in even a small part with the aesthetic aversion many people have to contaminating things with used currency?

Are you sure you didn't just go ahead and basically make up these people who don't want money to touch their book because it's dirty?

comment by wedrifid · 2013-08-09T06:42:21.569Z · score: 0 (4 votes) · LW · GW

Are you sure you didn't just go ahead and basically make up these people who don't want money to touch their book because it's dirty?

No. I've seen such people. When I look in the mirror, for example. Notice that the standard was explicitly set to:

not the actual reason (even in some small part)

The observation that this kind of absurd claim is positively received and even supported by similarly ridiculous petty sniping is disheartening.

comment by [deleted] · 2013-08-09T12:56:19.712Z · score: -1 (3 votes) · LW · GW

I've known at least a couple people who found it yucky to handle cash right before a meal for that same reason.

comment by Said Achmiz (SaidAchmiz) · 2013-08-09T15:15:19.295Z · score: -1 (3 votes) · LW · GW

I definitely wash my hands after handling money and before eating.

comment by [deleted] · 2013-08-04T15:50:35.599Z · score: 1 (3 votes) · LW · GW

I do neither. I use any piece of sufficiently stiff paper I happen to have around (bookmarks purchased by someone else, playing cards, used train tickets, whatever).

comment by Document · 2013-08-05T01:20:52.937Z · score: 0 (0 votes) · LW · GW

I tear out a blank page from the nearest notebook of sufficient size, and fold it as necessary.

comment by gothgirl420666 · 2013-08-04T22:32:06.839Z · score: 0 (4 votes) · LW · GW

Or just fold the corner of the page over.

comment by AndHisHorse · 2013-08-04T23:23:13.671Z · score: 8 (12 votes) · LW · GW

While I respect your right to do so, I find such a concept aesthetically horrifying.

comment by gothgirl420666 · 2013-08-05T04:23:43.090Z · score: 5 (5 votes) · LW · GW

I never understood that... I remember when I was in elementary school there was a sign in the library that said something like "Don't dog-ear your books... you wouldn't like it if someone folded your ear over, so don't do it to your book." What?

comment by MixedNuts · 2013-08-05T14:00:30.906Z · score: 1 (3 votes) · LW · GW

you wouldn't like it if someone folded your ear over

That's not particularly uncomfortable.

comment by khafra · 2013-08-05T19:26:45.612Z · score: 18 (20 votes) · LW · GW

You're suffering from the typical ear fallacy. Some people have much stiffer cartilage, or something; I don't find it uncomfortable, but I've met people for whom it causes actual pain.

comment by Wes_W · 2013-08-05T04:46:50.417Z · score: 1 (3 votes) · LW · GW

With library books, I think the concern is more about wear-and-tear on shared property. Some of us leakily generalize this to "folding page corners is bad", even for non-shared books. When it's your own book, you can do whatever you want.

Personally I find folded page corners less effective than bookmarks for quickly finding my place, especially if I've folded many other page corners, which makes the currently-folded one less visually obvious. But perhaps I'd learn to be better at that if I used it regularly.

comment by cwillu · 2013-08-05T19:37:05.812Z · score: 0 (2 votes) · LW · GW

It's a permanent mark that easily leads to tearing.

comment by Bayeslisk · 2013-08-06T16:50:22.173Z · score: 2 (4 votes) · LW · GW

I made one when I was bored, long ago when my grandmother still ran her store and my uncle still ran his immigration law firm on the third floor, and when I was obsessed with knot theory, out of computer paper, tape, and a lot of hard pencil. I still use it, and it cost me next to nothing.

EDIT: If requested (however unlikely) I will happily deliver a picture, and either a push or a bouillon cube (your choice). EDIT THE SECOND: it was requested! http://imgur.com/a/kxanI

comment by [deleted] · 2013-08-06T21:39:30.095Z · score: -1 (1 votes) · LW · GW

If requested (however unlikely) I will happily deliver a picture,

Yes please! :-)

comment by Bayeslisk · 2013-08-06T22:22:14.713Z · score: 0 (0 votes) · LW · GW

Done! Do you want a bouillon cube or a push? Think wisely.

comment by [deleted] · 2013-08-06T23:09:28.288Z · score: 0 (0 votes) · LW · GW

What kind of push?

comment by Bayeslisk · 2013-08-07T00:28:54.642Z · score: 0 (0 votes) · LW · GW

This kind!

comment by Document · 2013-08-07T17:54:32.482Z · score: -1 (1 votes) · LW · GW

I feel like I want the last few minutes of my life back.

comment by [deleted] · 2013-08-05T12:30:52.463Z · score: 0 (2 votes) · LW · GW

That leaves a permanent crease, which I dislike. (Likewise, I prefer to use pencils -- preferably soft pencils -- rather than pens to take notes.)

comment by MugaSofer · 2013-08-04T15:48:14.240Z · score: 0 (2 votes) · LW · GW

Why use a bookmark that's worth a whole dollar? I use scrap paper, or a sticky note if falling out is a risk (it almost always isn't.)

comment by Eugine_Nier · 2013-08-02T06:34:28.132Z · score: 7 (29 votes) · LW · GW

Wicked people exist. Nothing avails except to set them apart from innocent people. And many people, neither wicked nor innocent, but watchful, dissembling, and calculating of their chances, ponder our reaction to wickedness as a clue to what they might profitably do.

James Wilson

comment by Said Achmiz (SaidAchmiz) · 2013-08-02T17:32:04.642Z · score: 4 (6 votes) · LW · GW

Counter-quote.

comment by wedrifid · 2013-08-02T17:41:20.553Z · score: 3 (3 votes) · LW · GW

Counter-quote.

Only loosely. The insightful part of the grandparent quote is the third sentence, which complements the moral-greyness issue quite well.

comment by Said Achmiz (SaidAchmiz) · 2013-08-02T18:26:45.301Z · score: 3 (5 votes) · LW · GW

I think it is only slightly insightful, at best. It's a gross simplification of how most people experience, and actually (under-the-hood) perform, moral calculations, and it simplifies away most of the interesting stuff.

comment by Arkanj3l · 2013-08-10T01:32:24.671Z · score: 6 (12 votes) · LW · GW

The world is a lot simpler than the human mind can comprehend. The mind endlessly manufactures meanings and reflects with other minds, ignoring reality. Or maybe it enhances it. Not very clear on that part, I'm human as well.

comment by Rukifellth · 2013-08-10T03:35:45.791Z · score: 1 (1 votes) · LW · GW

I found this to be slightly unsettling when I realized it, though we may be talking about different things.

comment by hairyfigment · 2013-08-06T18:11:11.845Z · score: 6 (8 votes) · LW · GW

How do you know that it will bring out his genius, Graff? It's never given you what you needed before. You've only had near-misses and flameouts. Is this how Mazer Rackham was trained? Actually, why isn't Mazer Rackham in charge of this training? What qualifications do you have that make you so sure your technique is the perfect recipe to make the ultimate military genius?

-- Will Wildman, analysis of Ender's Game

comment by snafoo · 2013-08-04T17:48:35.578Z · score: 6 (8 votes) · LW · GW

We can easily forgive a child who is afraid of the dark; the real tragedy of life is when men are afraid of the light.

misattributed often to Plato

comment by KnaveOfAllTrades · 2013-08-30T17:05:33.716Z · score: 5 (5 votes) · LW · GW

Start by doing what’s necessary; then do what’s possible; and suddenly you are doing the impossible.

St. Francis of Assisi (allegedly)

comment by gwern · 2013-08-12T00:26:53.201Z · score: 5 (7 votes) · LW · GW
...Each minute bursts in the burning room,
The great globe reels in the solar fire,
Spinning the trivial and unique away.
(How all things flash! How all things flare!)
What am I now that I was then?
May memory restore again and again
The smallest color of the smallest day:
Time is the school in which we learn,
Time is the fire in which we burn.

--Delmore Schwartz, "Calmly We Walk Through This April's Day"; quoted by Mike Darwin on the GRG ML

comment by Eugine_Nier · 2013-08-10T07:17:38.723Z · score: 5 (23 votes) · LW · GW

It is fashionable in the US to talk about people who are on welfare and don’t work. That is not precisely true. Yes, there are people on welfare who neither have a regular job nor look for one. But what might not be understood is that these people are working: they are navigating the labyrinthine bureaucracy and making sure they meet all the guidelines to keep the money flowing. That is work. It is just not productive work. It is a work that is the result of perverse incentives.

Sarah Hoyt

comment by [deleted] · 2013-08-22T20:21:33.936Z · score: 4 (10 votes) · LW · GW

comment by taelor · 2013-08-11T06:30:55.622Z · score: 4 (6 votes) · LW · GW

Except when physically constrained, a person is least free or dignified when under the threat of punishment. We should expect that the literatures of freedom and dignity would oppose punitive techniques, but in fact they have acted to preserve them. A person who has been punished is not thereby simply less inclined to behave in a given way; at best, he learns how to avoid punishment. Some ways of doing so are maladaptive or neurotic, as in the so-called 'Freudian dynamisms'. Other ways include avoiding situations in which punished behaviour is likely to occur and doing things which are incompatible with punished behaviour. Other people may take similar steps to reduce the likelihood that a person will be punished, but the literatures of freedom and dignity object to this as leading only to automatic goodness. Under punitive contingencies a person appears to be free to behave well and to deserve credit when he does so. Non-punitive contingencies generate the same behaviour, but a person cannot then be said to be free, and the contingencies deserve the credit when he behaves well. Little or nothing remains for autonomous man to do and receive credit for doing. He does not engage in moral struggle and therefore has no chance to be a moral hero or credited with inner virtues. But our task is not to encourage moral struggle or to build or demonstrate inner virtues. It is to make life less punishing and in doing so to release for more reinforcing activities the time and energy consumed in the avoidance of punishment. Up to a point the literatures of freedom and dignity have played a part in the slow and erratic alleviation of aversive features of the human environment, including the aversive features used in intentional control. But they have formulated the task in such a way that they cannot now accept the fact that all control is exerted by the environment and proceed to the design of better environments rather than of better men.

-- B. F. Skinner, Beyond Freedom and Dignity

comment by wedrifid · 2013-08-12T10:29:30.338Z · score: 5 (5 votes) · LW · GW

Except when physically constrained, a person is least free or dignified when under the threat of punishment.

Very close. I'd perhaps suggest that a person is less dignified when desperately seeking a reward that certainly isn't going to come.

comment by Glen · 2013-08-02T22:52:18.151Z · score: 4 (8 votes) · LW · GW

Everything can be reduced to an abstraction, a puzzle, and then solved

-Ledaal Kes (Exalted Aspect Book: Air)

comment by Document · 2013-08-03T02:23:57.828Z · score: -2 (6 votes) · LW · GW

Are they a villain who "solves" people by removing them from their way?

(Alternative response: Does "everything" include the puzzle of identifying something that can't be reduced to a puzzle?)

comment by linkhyrule5 · 2013-08-03T02:59:52.835Z · score: 2 (2 votes) · LW · GW

... You can remove people as problems without "remove" being a euphemism for killing them.

If you befriend them, for example.

And, well, yes. That does count as a puzzle.

comment by Document · 2013-08-03T03:56:45.320Z · score: 0 (0 votes) · LW · GW

The statement just seems weird without any context, I guess. It certainly isn't narrow.

Would you trust an AI that was being friendly to you as an attempted "solution" to the "puzzle" you presented?

comment by AndHisHorse · 2013-08-03T15:01:01.988Z · score: 1 (1 votes) · LW · GW

That depends, what sort of solution is it trying to find? If it's trying to maximize my happiness, that's all fine and dandy; if it's trying to minimize my capacity as an impediment to its acquisition of superior paperclip-maximizing hardware, I would object. Either way, I base my trust on the AI's goal, rather than its algorithms (assuming that the algorithms are effective at accomplishing that goal).

comment by linkhyrule5 · 2013-08-03T07:36:43.716Z · score: 0 (0 votes) · LW · GW

Well, no, but I would never trust an AI if I couldn't prove (or nobody I trusted could prove) it was Friendly with respect to me, period.

... not that it would much matter, but..

Also, relevance? I'm not really understanding your point in general. Certainly, problems need to be solved, but I would hope that your morality is included as a constraint...

comment by Document · 2013-08-03T07:45:02.691Z · score: 0 (0 votes) · LW · GW

But not necessarily if you're a fictional character, hence my initial question. I think my point is that I'm not convinced the quote actually means anything, either in its original context or in its use here; it's sounding like "everything" just means "things for which the statement is true".

comment by linkhyrule5 · 2013-08-03T21:25:50.611Z · score: 0 (0 votes) · LW · GW

Still don't understand. By definition, if something is hampering you, it presents a problem: sometimes the solution is "leave it alone, all possible 'solutions' are actually worse," but it's still something that bears thinking about.

It is somewhat tautological, I'll grant, but us poor imperfect humans occasionally find tautologies helpful.

comment by Glen · 2013-08-05T17:26:58.546Z · score: 1 (1 votes) · LW · GW

This is similar to how I've interpreted it. The character comes from a pre-enlightenment society, and is considered one of the greatest intelligence agents largely due to his ability to get results where nobody else can. He privately attributes this success to a rational mind and extensive [chess] skill that trains him to approach things as though they can be solved. While "stop and think about problems like they were games to be won instead of chores to be blamed on someone else" may seem obvious to people used to thinking like that, it's a major shift for most people.

comment by Martin-2 · 2013-08-01T22:19:17.653Z · score: 4 (4 votes) · LW · GW

It is not July. It is August.

comment by MalcolmOcean (malcolmocean) · 2013-08-01T23:11:59.211Z · score: 21 (21 votes) · LW · GW

Saw this under "latest rationality quotes" and was like "man, I'm really missing the context as to how this is a rationality quote."

comment by Said Achmiz (SaidAchmiz) · 2013-08-02T00:38:25.753Z · score: 33 (33 votes) · LW · GW

"If it is July, I desire to believe it is July. If it is August, I desire to believe it is August..."

comment by linkhyrule5 · 2013-08-02T08:17:42.482Z · score: 9 (9 votes) · LW · GW

If the Romans had been more willing to rename months they were unwilling to keep in their original places, we might have a much saner calendar.

comment by DanArmak · 2013-08-02T11:03:43.532Z · score: 23 (23 votes) · LW · GW

If people in the 1500 years since the Romans had been more willing to rename months...

comment by FiftyTwo · 2013-08-04T15:46:23.165Z · score: 0 (0 votes) · LW · GW

Now you've got me thinking about the minimum level of rationality/processing power necessary to determine the month accurately...

comment by Vaniver · 2013-08-01T22:48:47.743Z · score: 3 (3 votes) · LW · GW

Fixed! The perils of copy/paste.

comment by lukeprog · 2013-08-25T17:15:29.601Z · score: 3 (5 votes) · LW · GW

It ain’t ignorance [that] causes so much trouble; it’s folks knowing so much that ain’t so.

Josh Billings

(h/t Robin Hanson)

comment by [deleted] · 2013-08-25T18:03:01.587Z · score: -2 (6 votes) · LW · GW

Famously subverted by Ronald Reagan as:

The trouble with our liberal friends is not that they are ignorant, but that they know so much that isn't so.

comment by RichardKennaway · 2013-08-26T11:38:25.410Z · score: 3 (3 votes) · LW · GW

How is that a subversion? It is exactly in accord with the original.

comment by [deleted] · 2013-08-26T11:50:56.405Z · score: 5 (7 votes) · LW · GW

The key phrase is "our liberal friends." Everyone suffers from illusion of transparency, Dunning-Kruger, and etc., but Reagan is applying the bias selectively.

comment by Salemicus · 2013-08-25T17:08:58.677Z · score: 3 (3 votes) · LW · GW

All experience is an arch wherethrough gleams that untravelled world, whose margin fades for ever and for ever when I move...

To follow knowledge like a sinking star, beyond the utmost bound of human thought.

Alfred, Lord Tennyson, Ulysses

comment by metastable · 2013-08-21T19:42:19.886Z · score: 3 (7 votes) · LW · GW

The complexity of software is an essential property, not an accidental one. Hence, descriptions of a software entity that abstract away its complexity often abstract away its essence.

Fred P. Brooks, No Silver Bullet

comment by shminux · 2013-08-21T20:08:44.681Z · score: 7 (9 votes) · LW · GW

I've always had misgivings about this quote. In my experience about 90% of the code on a large project is an artifact of a poor requirement analysis/architecture/design/implementation. (Sendmail comes to mind.) I have seen 10,000-line packages melting away when a feature is redesigned with more functionality and improved reliability and maintainability.

comment by wedrifid · 2013-08-22T02:07:10.742Z · score: 5 (7 votes) · LW · GW

The complexity of software is an essential property, not an accidental one. Hence, descriptions of a software entity that abstract away its complexity often abstract away its essence.

This is true, but the connotations need to be applied cautiously. Complexity is necessary, but it is still something to be minimised wherever practical. Things should be as simple as possible but not simpler.

comment by DanArmak · 2013-08-22T22:12:39.036Z · score: 1 (3 votes) · LW · GW

More concretely, sometimes software can be simplified and improved at the same time.

comment by AndHisHorse · 2013-08-21T20:31:21.495Z · score: 1 (1 votes) · LW · GW

This isn't necessarily true if the complexity is very intuitive. If it takes ten thousand lines of code to accurately describe the action "jump three feet in the air", then those ten thousand lines of code are describing what a jump is, what to do while in mid-air, what it means to land, and other things that humans may grasp intuitively (assuming that the actor is constructed in a manner similar to a human).

Additionally, there are some complex features which are not specific to the software. We don't need to describe how a particular program receives feedback from the motor and sensors, how it translates the input of its devices, if these features are common to most similar programs - the description of those processes is part of the default, part of the background that we assume along with everything else we don't need to derive from fundamental physics.

In other words, the complexity of software may correspond to a feature which humans may be able to understand as simple - because we have the prior knowledge necessary, courtesy of common nature and nurture. A full description of complexity is necessary if and only if it is surprising to our intuition.

comment by linkhyrule5 · 2013-08-21T21:32:44.798Z · score: 4 (4 votes) · LW · GW

That is, in some sense, his point - a phrase like "jump three feet in the air" abstracts away most of the computational essence, making it seem like a trivial problem when it really, really isn't.

comment by anonym · 2013-08-21T02:25:19.908Z · score: 3 (9 votes) · LW · GW

When a concept is inherently approximate, it is a waste of time to try to give it a precise definition.

-- John McCarthy

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-21T19:17:45.953Z · score: 5 (13 votes) · LW · GW

Thus, whenever you look in a computer science textbook for an algorithm which only gives approximate results, you will find that the algorithm itself is very vaguely specified, since the result is just an approximation anyway.

(I would have said: "When a concept is inherently fuzzy, it is a waste of time to give it a definition with a sharp membership boundary.")
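(The sarcasm lands because real approximation algorithms are specified exactly even though their results are not. As a minimal sketch - my example, not the quote's - the textbook greedy 2-approximation for minimum vertex cover is a completely precise algorithm with a sharp, provable bound on an "approximate" answer:)

```python
def vertex_cover_2approx(edges):
    """Greedy 2-approximation for minimum vertex cover.

    Repeatedly pick any edge not yet covered and add both of its
    endpoints to the cover. The result is guaranteed to be at most
    twice the size of an optimal cover.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover

# A 4-cycle: an optimal cover has 2 vertices; the greedy cover has at most 4.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
cover = vertex_cover_2approx(edges)
assert all(u in cover or v in cover for (u, v) in edges)
```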

comment by Kindly · 2013-08-21T21:29:58.021Z · score: 3 (3 votes) · LW · GW

(I would have said: "When a concept is inherently fuzzy, it is a waste of time to give it a definition with a sharp membership boundary.")

Thus we merely require citizens to "be responsible adults" before they can vote rather than give a sharp boundary such as 18 years old, college applications tell you "don't write a long, rambling essay" rather than enforce a 500-word limit, and food packaging specifies "sometime in September" for the expiration date.

Sharp membership boundaries are useful to make it easy to test for the concept. Even if the concept is fuzzy and the test is imperfect, this doesn't need to be a waste of time.

comment by AndHisHorse · 2013-08-21T21:56:03.618Z · score: 8 (8 votes) · LW · GW

Sharp membership boundaries, however, often result in people forgetting the fuzziness of the concept - there are some people who vote without being responsible adults, because they can; an essay can be boring and rambling at 450 words or impressive and concise at 600; and food can be good a bit past its expiration date (it doesn't usually go in the other direction in my experience, presumably because the risk of eating spoiled food vastly outweighs the risk of mistakenly tossing out good food, so expiration dates are the very early estimates).

comment by TheOtherDave · 2013-08-22T02:58:04.475Z · score: 2 (2 votes) · LW · GW

Though sometimes it's even more useful to acknowledge that the sharp-boundaried concept we're testing for is different from, though perhaps expected to be correlated with in some way, the fuzzy concept we were initially interested in.

That helps us avoid the trap of believing that 17-year-olds aren't responsible adults but 18-year-olds are, or that 550-word essays are long and rambling but 450-word essays aren't, or that food is safe to eat on September 25 but not on September 29. None of that is true, but that's OK; we aren't actually testing for whether voters are responsible adults, essays are long and rambling, or food is expired.

comment by linkhyrule5 · 2013-08-21T22:38:37.497Z · score: 1 (3 votes) · LW · GW

Just because humans do it doesn't mean it's a good idea.

comment by Kindly · 2013-08-21T23:31:34.413Z · score: 0 (0 votes) · LW · GW

To clarify, I also think all of these are good ideas; not necessarily the best possible, but definitely useful.

comment by Salemicus · 2013-08-21T22:44:04.151Z · score: 0 (0 votes) · LW · GW

It doesn't prove it's a good idea, but it's evidence in its favour.

comment by linkhyrule5 · 2013-08-21T22:51:24.953Z · score: 1 (1 votes) · LW · GW

Well, sure. But that doesn't mean it's very strong evidence: I'd expect to see an average human (or nation) do something stupid almost as often as they do something intelligent.

comment by Salemicus · 2013-08-22T23:01:31.073Z · score: -2 (2 votes) · LW · GW

We are obviously starting from very different premises. To me, the fact that lots of people do something is very strong evidence that the behaviour is, at least, not maladaptive, and the burden of proof is very much on the person suggesting that it is. And the more widespread the behaviour, the stronger the burden.

Alternatively, you could just look at the evidence. When legal systems have replaced bright-line rules with 15-factor balancing tests, has that led to better outcomes for society as a whole? Consider in particular the criteria for the Rule of Law. In the mid-20th century, co-incident with high modernism and utilitarianism, these multi-part, multi-factor balancing tests were all the rage. Why are they now held in such disdain?

comment by linkhyrule5 · 2013-08-22T23:19:23.598Z · score: 1 (1 votes) · LW · GW

Unfortunately, the fact that lots of people do something may merely be an indication of a very successful meme: consider major religions.

I will certainly grant that having a sharp restriction is better than a 15-factor balancing test, but I'm not arguing for 15-factor balancing tests.

I'd go further, but I've just noticed that I don't really have much evidence for this belief, and I should probably go see how accomplished Chinese universities (which judge purely off the gaokao) are versus American universities first.

comment by lukeprog · 2013-08-16T03:15:48.378Z · score: 3 (5 votes) · LW · GW

An educated mind is, as it were, composed of all the minds of preceding ages.

Le Bovier de Fontenelle

comment by wedrifid · 2013-08-16T06:51:01.131Z · score: 8 (8 votes) · LW · GW

An educated mind is, as it were, composed of all the minds of preceding ages.

This explains all those urges I get to burn witches, my talent at farming, all my knowledge of hunting and tracking, and my outstanding knack for feudal political intrigue.

(Composition is not the relationship to previous minds that education entails. Can someone think of a better one?)

comment by Kawoomba · 2013-08-16T06:55:40.994Z · score: 8 (8 votes) · LW · GW

Derivation.

comment by wedrifid · 2013-08-16T07:52:24.662Z · score: 0 (2 votes) · LW · GW

Much better.

comment by DanArmak · 2013-08-16T19:35:48.043Z · score: 7 (7 votes) · LW · GW

We rest upon the frontal lobes of giants.

comment by Document · 2013-08-16T03:30:04.964Z · score: 0 (0 votes) · LW · GW

Is that a praise of educated minds, or a caution against too readily classifying a mind as educated?

(Possibly related: http://lesswrong.com/lw/1ul/for_progress_to_be_by_accumulation_and_not_by/)

comment by lukeprog · 2013-08-16T04:32:38.191Z · score: 2 (2 votes) · LW · GW

I read it as expressing the same view as The Neglected Virtue of Scholarship.

comment by RichardKennaway · 2013-08-16T10:28:06.243Z · score: 1 (1 votes) · LW · GW

From the description of him on Wikipedia, I am certain it is the former, although the bone wedrifid picks with "composed" is symptomatic of where he falls short of his contemporary, Voltaire. He was a most refined, civilised, intelligent, and educated writer, very popular among the intellectual class, and achieved memberships of distinguished academic societies, but his strength, a great one indeed, was in writing well on what was already known, and he created little that was new. Voltaire's name lives to this day, but Fontenelle's, while important in his time, does not.

Scholarship is indeed a virtue, but Fontenelle's was not in service of a higher goal.

comment by AlexanderD · 2013-08-08T05:24:50.873Z · score: 3 (11 votes) · LW · GW

What the Great Learning teaches is: to illustrate illustrious virtue; to renovate the people; and to rest in the highest excellence.
The point where to rest being known, the object of pursuit is then determined; and, that being determined, a calm unperturbedness may be attained to.
To that calmness there will succeed a tranquil repose. In that repose there may be careful deliberation, and that deliberation will be followed by the attainment of the desired end.
The ancients who wished to illustrate illustrious virtue throughout the world, first ordered well their own States.
Wishing to order well their States, they first regulated their families.
Wishing to regulate their families, they first cultivated their persons.
Wishing to cultivate their persons, they first rectified their hearts.
Wishing to rectify their hearts, they first sought to be sincere in their thoughts.
Wishing to be sincere in their thoughts, they first extended to the utmost of their knowledge.
Such extension of knowledge lay in the investigation of things.
Things being investigated, knowledge became complete.
Their knowledge being complete, their thoughts were sincere.
Their thoughts being sincere, their hearts were then rectified.
Their hearts being rectified, their persons were cultivated.
Their persons being cultivated, their families were regulated.
Their families being regulated, their States were rightly governed.
Their States being rightly governed, the entire world was at peace.
From the Son of Heaven down to the mass of the people, all must consider the cultivation of the person the root of everything besides.
It cannot be, when the root is neglected, that what should spring from it will be well ordered.
It never has been the case that what was of great importance has been slightly cared for, and, at the same time, that what was of slight importance has been greatly cared for.

-The Great Learning, one of the Four Books and Five Classics of Confucian thought.

comment by wedrifid · 2013-08-07T02:29:45.513Z · score: 3 (5 votes) · LW · GW

Also: "Fuck every cause supported by compulsory taxation, and compulsory use of fiat currency." says the same exact thing, more precisely.

Huh? No it doesn't. It says an entirely different thing.

comment by Eugine_Nier · 2013-08-02T06:27:31.794Z · score: 3 (39 votes) · LW · GW

Historically, most hackers have been not only men, but men of a sort of Mannie O’Kelly-Davis “git ‘er done” variety, and that’s beginning to change now, so new norms of behavior must be adopted in order to create a welcoming and inclusive community.

  • Jeff Read

I have a better idea. Let’s drive away people unwilling to adopt that “git’r'done” attitude with withering scorn, rather than waste our time pacifying tender-minded ninnies and grievance collectors. That way we might continue to actually, you know, get stuff done.

Eric Raymond

comment by MixedNuts · 2013-08-02T09:36:12.981Z · score: 13 (29 votes) · LW · GW

Empirically, heaping scorn on everyone and seeing who sticks around leads to lots of time wasted on flame wars.

comment by wedrifid · 2013-08-02T13:33:46.607Z · score: 14 (18 votes) · LW · GW

Empirically, heaping scorn on everyone and seeing who sticks around leads to lots of time wasted on flame wars.

Straw man. The grandparent explicitly made the scorn conditional, not 'on everyone'.

comment by Tyrrell_McAllister · 2013-08-07T03:13:29.023Z · score: -1 (7 votes) · LW · GW

Straw man. The grandparent explicitly made the scorn conditional, not 'on everyone'.

Failure to steel man. Replacing "everyone" with "people" leaves the basic point unchanged.

ETA: ... or, I should say, leaves a point that (1) deserves reply and (2) was probably what the original hyperbolic version was getting at anyway.

comment by wedrifid · 2013-08-07T03:55:46.749Z · score: 3 (5 votes) · LW · GW

Failure to steel man.

Abuse of the 'steel man' concept and attempt to introduce a toxic social norm. I am strongly opposed to this influence.

MixedNuts attempts to refute a quote using a non-sequitur. Supporting a false refutation is not being generous, it is being biased. It is being unfair to the initial speaker.

Replacing "everyone" with "people" leaves the basic point unchanged.

So much so that it leaves the basic point a straw man.

comment by Tyrrell_McAllister · 2013-08-07T04:14:19.867Z · score: -1 (5 votes) · LW · GW

Supporting a false refutation is not being generous, it is being biased. It is being unfair to the initial speaker.

Steel-manning a refutation does not equal supporting that refutation. In fact, steel-manning entails criticizing the original refutation, at least implicitly.

However, when a claim is plausibly intended to be a hyperbolic version of a reasonable claim, pointing out that the hyperbolic version is a straw man, without addressing the reasonable version, is mostly just poisoning the discourse.

(This charge doesn't apply to you if you sincerely believed that MixedNuts was non-hyperbolically claiming that literally everyone has scorn heaped on them in the community under discussion, or that MixedNuts would be read that way by many readers.)

comment by wedrifid · 2013-08-07T04:23:09.652Z · score: 3 (3 votes) · LW · GW

I oppose your influence in this context for the aforementioned reasons.

However, when a claim is plausibly intended to be a hyperbolic version of a reasonable claim,

The point that you think is reasonable is still a straw man.

comment by Tyrrell_McAllister · 2013-08-07T05:12:55.719Z · score: 0 (0 votes) · LW · GW

The point that you think is reasonable is still a straw man.

It would help me to understand why my version is a straw man if you would steel-man it. Then I could compare your steel man to my straw man and better feel the force of your criticism. (I certainly wouldn't take you to be supporting my straw man, which seemed to be your earlier concern.)

As it stands, I am puzzled by your accusation because Eric Raymond said, "Let’s drive away people unwilling to adopt that 'git’r'done' attitude with withering scorn ...". Why is it a straw man to characterize this as "heaping scorn on people and seeing who sticks around"?

Is it because you read it as "heaping scorn on people randomly...", rather than as "heaping scorn on people who are unwilling to adopt that 'git’r'done' attitude ..."? Or is it something else?

comment by wedrifid · 2013-08-07T06:23:30.913Z · score: 2 (2 votes) · LW · GW

It would help me to understand why my version is a straw man if you would steel-man it.

There isn't a convenient steel man available. Not all wrong (or, to be agnostic with respect to the correctness of our positions, disagreed with) positions have another position nearby in concept space that is agreed with (or, sometimes, disagreed with only with significant respect and more complicated reasoning).

As it stands, I am puzzled by your accusation because Eric Raymond said, "Let’s drive away people unwilling to adopt that 'git’r'done' attitude with withering scorn ...". Why is it a straw man to characterize this as "heaping scorn on people and seeing who sticks around"?

Because that is a different described procedure. They are similar in as much as scorn is applied in both cases but the selection process for when scorn is applied is removed and the intended outcome is changed.

To illustrate, consider taking the required equivocation back in the other direction. We end up with:

Empirically, <driving away people unwilling to adopt that “git’r'done” attitude with withering scorn, rather than waste our time pacifying tender-minded ninnies and grievance collectors> leads to lots of time wasted on flame wars.

This seems to be a different empirical claim. It is also a more controversial claim and one that is less obviously correct. I certainly wouldn't expect scorn to be the optimal response in such circumstances but the claim that it wastes more time than the described alternative is still an empirical claim that would actually require empiricism to be done and cited. It isn't something that I have seen anywhere.

comment by Tyrrell_McAllister · 2013-08-08T00:55:39.625Z · score: -1 (1 votes) · LW · GW

This was a helpful comment.

There isn't a convenient steel man available. Not all wrong (or, to be agnostic with respect to the correctness of our positions, disagreed with) positions have another position nearby in concept space that is agreed with (or, sometimes, disagreed with only with significant respect and more complicated reasoning).

I agree that, in general, wrong positions may lack steel-man versions. However, I am not convinced that this is the case here. Indeed, it seems to me that you provide just such a steel man in your comment.

Because that is a different described procedure. They are similar in as much as scorn is applied in both cases but the selection process for when scorn is applied is removed and the intended outcome is changed.

You are reading "seeing who sticks around" as the reason why the scorn is being applied. This is a possible reading. It might be the intended meaning, but it might not. The intended meaning might just be that "seeing who sticks around" is an outcome, and not the intended outcome.

If the meaning was what you said, the sentence could have been written as "heaping scorn on people to see who sticks around". That would have been equally concise and less ambiguous. Since that wasn't what was written, your reading is less certain.

This seems to be a different empirical claim. It is also a more controversial claim and one that is less obviously correct.

Refutations of straw men are usually obviously correct. That is why straw men are offered. The steel man version of the straw-man-based refutation will rarely be so obviously correct, but it will be obviously better. The steel man will be more relevant, raise more important issues, be more likely to move the conversation forward in a productive way, and so on.

You seemed to me to be offering just such a steel man when you wrote,

Empirically, <driving away people unwilling to adopt that “git’r'done” attitude with withering scorn, rather than waste our time pacifying tender-minded ninnies and grievance collectors> leads to lots of time wasted on flame wars.

Yes, your version is a different empirical claim, but steel men are generally different claims from the original "unsteeled" version. Your version raises controversial issues, but that need not obviate productive discussion.

Most importantly, and as you point out, your steel man version raises empirical issues, which would help keep the conversation connected to reality. Moreover, addressing those empirical questions would probably require getting into the specific dynamics of the community under discussion. (What have the documented conversations in this specific community actually been like? What are the actual social dynamics and the actual history of how they've changed over time? What has this community accomplished, and under just what conditions, as a function of how much scorn was being applied? Etc.)

This would make the conversation far more likely to stay relevant to the actual matter at hand. The conversation would be more likely to stay at the object level, instead of floating in the meta level, where accusations of fallacies live.

To summarize, I think that what you offered is a good steel man of MixedNuts's original claim for the following reasons:

  1. It is recognizably related to what MixedNuts said, although it is different. Moreover, it is plausible that he could be convinced that this is what he should have said.

  2. The antecedent ("driving away people unwilling to adopt that 'git’r'done' attitude with withering scorn, rather than waste our time pacifying tender-minded ninnies and grievance collectors") is not a straw man.

  3. It raises promising and empirically grounded points of disagreement, as I argue above.

comment by AndHisHorse · 2013-08-07T03:19:24.197Z · score: 1 (3 votes) · LW · GW

I don't believe that it does, and here's why.

Heaping scorn on everyone and seeing who sticks around is a selection process; the condition for surviving is being able to accept scorn, whether or not such scorn is warranted by the value system of the society. This is somewhat similar to hazing.

Heaping scorn on a specific group of people for their unwillingness to adopt the values of the society (or, rather, some powerful subset of the society which has enough clout to control how things are run) is a selection process based on something of value to the society, and is more like punishment or selective admissions: people with the valued trait are encouraged, those without are allowed to leave.

It would appear that there are very different implications, as the former selects for those who can take unjustified scorn (a quality of dubious value), and the latter selects for any demonstrable quality desired by the society (in this case, a specific attitude towards problem-solving).

comment by Tyrrell_McAllister · 2013-08-07T05:21:13.062Z · score: 2 (2 votes) · LW · GW

This is a good argument for the claim that MixedNuts's hyperbolic version, read literally, misses something important. (Your argument convinces me, anyway.)

It is not clear to me that your argument addresses the "steel man" version in which "everyone" is replaced by "people who are unwilling to adopt that 'git’r'done' attitude".

comment by RichardKennaway · 2013-08-02T12:57:15.332Z · score: 8 (14 votes) · LW · GW

Empirically, heaping scorn on everyone and seeing who sticks around

Eric Raymond isn't suggesting that. Why are you?

comment by Lumifer · 2013-08-02T17:07:53.487Z · score: 0 (8 votes) · LW · GW

A relevant example:

http://arstechnica.com/information-technology/2013/07/linus-torvalds-defends-his-right-to-shame-linux-kernel-developers/

Linux kernel seems to me a quite well-managed operation (of herding cats, too!) that doesn't waste lots of time on flame wars.

comment by novalis · 2013-08-04T01:47:11.460Z · score: 8 (8 votes) · LW · GW

Linux kernel seems to me a quite well-managed operation (of herding cats, too!) that doesn't waste lots of time on flame wars.

I don't follow kernel development much. Recently, a colleague pointed me to the rdrand instruction. I was curious about Linux kernel support for it, and I found this thread: http://thread.gmane.org/gmane.linux.kernel/1173350

Notice that Linus spends a bunch of time (a) flaming people and (b) being wrong about how crypto works (even though the issue was not relevant to the patch).

Is this typical of the linux-kernel mailing list? I decided to look at the latest hundred messages. I saw some minor rudeness, but nothing at that level. Of course, none of these messages were from Linus. But I didn't have to go back more than a few days to find Linus saying things like, "some ass-wipe inside the android team." Imagine if you were that Android developer reading that email. Would that make you want to work on Linux? Or would it make you want to go find a project where the leader doesn't shit on people?

Here's a revealing quote from one recent message from Linus: "Otherwise I'll have to start shouting at people again." Notice that Linus perceives shouting as a punishment. He's right to do so, as that's how people take it. Sure, "don't get offended", "git 'er done", etc -- but realistically, developers are human and don't necessarily have time to do a bunch of CBT so that they can brush off insults.

Some people, I guess, can continue to be productive after their project leader insults them. The rest either have periodic drops in productivity, or choose to work on projects which are run by people willing to act professionally.

tl;dr: Would you put up with a boss who frequently called you an idiot in public?

comment by Lumifer · 2013-08-06T16:35:23.087Z · score: 3 (5 votes) · LW · GW

Would you put up with a boss who frequently called you an idiot in public?

Actually, that depends.

Mostly that depends on what the intent (and context) of calling me an idiot in public is. If the intent is, basically, power play -- the goal is to belittle me and elevate himself, reassert his alpha-ness, shift blame, provide an outlet for his desire to inflict pain on somebody -- then no, I'm not going to put up with it.

On the other hand, if this is all a part of a culturally normal back-and-forth, if all the boss wants is for me to sit up and take notice, if I can without repercussions reply to him in public pointing out that it's his fat head that gets in the way of his understanding basic things like X, Y, and Z and that he's wrong -- I'm fine with that.

The microcultures of joking-around-with-insults exist for good reasons. Nobody forces you to like them, but you want to shut them down and that seems rather excessive to me.

comment by novalis · 2013-08-06T17:07:03.013Z · score: 2 (2 votes) · LW · GW

I think it's pretty clear that Linus is more on the power-play end of the spectrum. Notice his comment above about the Android developer; that's not someone who is part of his microculture (the person in question was a developer on the Android email client, not a kernel hacker). And again, the shouting-as-punishment thing shows that Linus understands the effect that he has, but doesn't care.

Also, Linus, as the person in the position of power, isn't in a position to judge whether his culture is fun. Of course it's fun for him, because he's at the top. "I was just joking around" is always what bullies say when they get called out. The real question is whether it's fun for others. The recent discussion (that presumably sparked the quotes in this thread) was started by someone who didn't find it fun. So even if there are some "good reasons" (none of which you have named), they don't necessarily outweigh the reasons not to have such a culture.

comment by Lumifer · 2013-08-06T17:43:40.021Z · score: 1 (5 votes) · LW · GW

I think it's pretty clear that Linus is more on the power-play end of the spectrum.

That's not clear to me at all.

Note that management of any kind involves creating incentives for your employees/subordinates/those-who-listen-to-you. The incentives include both carrots and sticks and sticks are punishments and are meant to be so. If you want to talk about carrots-only management styles, well, that's a different discussion.

The real question is whether it's fun for others.

I disagree. You treat fun and enjoyment of working at some place as the ultimate, terminal value. It is not. The goal of working is to produce, to create, to make. Whether it's "fun" is subordinate to that. Sure, there are feedback loops, but organizations which exist for the benefit of their employees (to make their life comfortable and "fun") are not a good thing.

comment by novalis · 2013-08-06T19:34:48.545Z · score: 4 (4 votes) · LW · GW

The incentives include both carrots and sticks and sticks are punishments and are meant to be so. If you want to talk about carrots-only management styles, well, that's a different discussion.

For what it's worth, I've never worked at a place that successfully used aversive stimulus. And, since the job market for programmers is so hot, I can't imagine that anyone would willingly do so (outside the games industry, which is a weird case). This is especially true of kernel hackers, who are all highly qualified developers who could find work easily.

I disagree. You treat fun and enjoyment of working at some place as the ultimate, terminal value. It is not. The goal of working is to produce, to create, to make. Whether it's "fun" is subordinate to that. Sure, there are feedback loops, but organizations which exist for the benefit of their employees (to make their life comfortable and "fun") are not a good thing.

I would point out that Linus Torvalds's autobiography is called "Just for Fun". Also, Linus doesn't have employees. Yes, he does manage Linux, but he doesn't employ anyone. I also pointed out a number of ways in which Linus's style was harmful to productivity.

comment by Lumifer · 2013-08-06T19:50:03.703Z · score: 1 (5 votes) · LW · GW

For what it's worth, I've never worked at a place that successfully used aversive stimulus.

Ahem. I think you mean to say that you never touched the electric fence. Doesn't mean the fence is not there.

Imagine that someone at your workplace decided not to come to work for a week or so, 'cause he didn't feel like it. What would be the consequences? Are there any, err... "aversive stimuli" in play here?

I can't imagine that anyone would willingly do so ... This is especially true of kernel hackers

No need for imagination. The empirical reality is that a lot of kernel hackers successfully work with Linus and have been doing this for years and years.

Also, Linus doesn't have employees.

Which means that anyone who doesn't like his style is free to leave at any time without any consequences in the sense of salary, health insurance, etc. The fact that kernel development goes on and goes on pretty successfully is evidence that your concerns are overblown.

comment by Grant · 2013-08-06T20:27:41.990Z · score: 3 (3 votes) · LW · GW

Which means that anyone who doesn't like his style is free to leave at any time without any consequences in the sense of salary, health insurance, etc. The fact that kernel development goes on and goes on pretty successfully is evidence that your concerns are overblown.

As of 2012-04-16, 75% of kernel development is paid. I would assume those developers would find their jobs in jeopardy if Linus removed them from development.

comment by Lumifer · 2013-08-06T20:48:42.273Z · score: 1 (5 votes) · LW · GW

if Linus removed them from development

Um, Linux kernel doesn't work like that. Linus doesn't "add" anyone to development or "remove" anyone. And I don't know if companies who pay the developers would be likely to fire them if the developers' patches start to get rejected on a regular basis.

Oh, and you misquoted your source. It's not 75% of developers, it's 75% of the share of kernel development and, of course, some developers are much more prolific than others.

comment by Grant · 2013-08-06T20:54:32.180Z · score: 3 (3 votes) · LW · GW

Certainly he and his team are less likely to accept patches from people who they've had trouble with in the past? And people who have trouble getting patches accepted (for whatever reason) are probably not going to be paid to continue doing kernel development?

It would surprise me if he's never outright banned anyone.

Thanks for the correction, edited my comment above.

comment by wedrifid · 2013-08-07T02:22:24.874Z · score: 2 (2 votes) · LW · GW

Um, Linux kernel doesn't work like that. Linus doesn't "add" anyone to development or "remove" anyone.

You are describing a (dubious) difference in word use, not a difference in how the world works.

comment by Lumifer · 2013-08-07T02:41:24.984Z · score: 3 (5 votes) · LW · GW

I don't think so -- it is a difference in how the world works. Anyone in the world can submit kernel patches. The filtering does not occur at the people level, it occurs at the piece-of-code level.

Linus does not say "I pronounce you a kernel developer" or "You're no longer a kernel developer" -- he says "I accept this patch" or "I do not accept this patch".

comment by novalis · 2013-08-06T23:17:29.763Z · score: 1 (3 votes) · LW · GW

Ahem. I think you mean to say that you never touched the electric fence. Doesn't mean the fence is not there.

No, I mean that touching the electric fence did not make me a more productive worker.

The fact that kernel development goes on and goes on pretty successfully is evidence that your concerns are overblown.

I'm not saying that Linus's style will inevitably lead to instant doom. That would be silly. I'm saying that it's not optimal. Linux hasn't exactly taken over the world yet, so there's definitely room for improvement.

comment by 4hodmt · 2013-08-07T01:05:37.075Z · score: 1 (1 votes) · LW · GW

It's important to distinguish between Linux the operating system kernel, and the complete system of GNU+Linux+various graphical interfaces sometimes called "Linux".

The Linux kernel can also be used with other userspaces, e.g. Busybox or Android, and it's very popular in these combinations on embedded systems and phones/tablets respectively. GNU+Linux is popular on servers. The only area where Linux is unsuccessful is desktops, so it's unfortunate that desktop use is so salient when people talk about "Linux".

Linus only works on the kernel itself, and that's making great progress towards taking over the world.

comment by novalis · 2013-08-07T01:30:01.965Z · score: 3 (3 votes) · LW · GW

Yes, I used to work for RMS; I am well aware of the difference. I should also note that most of the systems you mention use proprietary kernel modules; it would be better if they didn't, and perhaps if Linus's attitude were different, there would be more interest in fixing the problem.

Also, desktops are where I spend most of my time, so I think they still matter a lot.

comment by 4hodmt · 2013-08-07T11:08:11.740Z · score: 0 (0 votes) · LW · GW

I use GNU+Linux on the desktop myself, and I share RMS's goals, although I'm willing to make bigger compromises for the sake of practicality than he is. Linus does not share RMS's goals, so my point is that from Linus's point of view his management techniques are highly effective.

comment by NancyLebovitz · 2013-08-08T01:59:48.507Z · score: 0 (0 votes) · LW · GW

The only area where Linux is unsuccessful is desktops, so it's unfortunate that desktop use is so salient when people talk about "Linux".

Pure hypothesis: Linux being unsuccessful on desktops is not a coincidence, because Linux is written in a low-empathy environment, but writing UI for the general public means that you don't get to blame users when they don't like your software.

Possible test: Firefox is fairly good open source software for the general public. What's the culture at Mozilla/Firefox like for the programmers?

comment by Lumifer · 2013-08-08T02:12:51.211Z · score: 4 (4 votes) · LW · GW

Pure hypothesis: Linux being unsuccessful on desktops is not a coincidence, because Linux is written in a low-empathy environment

Um. The claim by novalis is that the Linux kernel is written in a "low-empathy" environment. The kernel has nothing to do with UI, which, along with most applications, is quite separate. Linus has no influence over UI design or user-friendliness in general.

There are two main GUI environments on Linux -- Gnome and KDE. I don't know what the atmosphere is for developers inside these organizations. I think there is a fair amount of infighting and office politics, but I have no clue if they are polite and tactful about it.

comment by [deleted] · 2013-08-08T22:45:31.095Z · score: 1 (1 votes) · LW · GW

You know what Ubuntu is named after, BTW?

comment by Lumifer · 2013-08-09T00:09:29.835Z · score: 0 (0 votes) · LW · GW

Yes, I do, though I don't see the relevance.

comment by [deleted] · 2013-08-09T10:49:02.166Z · score: 0 (0 votes) · LW · GW

(Evidence about whether the Ubuntu people are ‘friendly’.)

comment by Lumifer · 2013-08-09T15:18:16.360Z · score: 0 (0 votes) · LW · GW

It's evidence in the same sense that the name of product like Repairwear Laser Focus Wrinkle & UV Damage Corrector is evidence that this face cream laser focuses your wrinkles and corrects your UV damage 8-/

"Ubuntu", by the way, means a lot more than friendliness.

comment by Lumifer · 2013-08-06T23:59:00.232Z · score: -1 (3 votes) · LW · GW

touching the electric fence did not make me a more productive worker.

How do you know?

I'm saying that it's not optimal.

How do you know? (other than in a trivial sense that anything in real life is not going to be optimal)

You're making naked assertions without providing evidence.

comment by novalis · 2013-08-07T01:31:55.395Z · score: 3 (3 votes) · LW · GW

touching the electric fence did not make me a more productive worker.

How do you know?

Well, I can tell you that afterwards, I felt like shit and didn't get much done for a while. Or I started looking for a new job (whether or not I ended up taking one, this takes time and mental energy away from my current job). And getting yelled at has never seemed to me to correlate with me actually being wrong, so I'm not clear on how it would have changed my behavior.

I'm saying that it's not optimal.

How do you know? (other than in a trivial sense that anything in real life is not going to be optimal)

You're making naked assertions without providing evidence.

Upthread, you linked to an article which quotes someone saying, "Thanks for standing up for politeness/respect. If it works, I'll start doing Linux kernel dev. It's been too scary for years." I also pointed out, in my discussion of the rdrand thread, that Linus wastes a bunch of time by being cantankerous. And speaking of the rdrand thread (which I swear I didn't choose as my example for this reason; I really did just stumble across it a few weeks ago), your linked article also quoted Matt Mackall, whom Linus yelled at in that thread: he's no longer a kernel hacker. Is Linus's attitude why? Well, he's complained about Linus's attitude before, and shortly after that thread, he ceased posting on LKML. And he's probably pretty smart -- he wrote Mercurial -- so it's a shame for the kernel to lose him.

I can tell you that I, personally, would be uninterested in working under Linus, although kernel development isn't really my area of expertise, so maybe I don't count.

comment by Lumifer · 2013-08-07T02:00:02.619Z · score: 0 (2 votes) · LW · GW

getting yelled at has never seemed to me to correlate with me actually being wrong

I hope you didn't take my position to be that yelling at people is always the right thing to do. There certainly is lots of yelling which is stupid, unjustified, and not useful in any sense.

The issue is whether yelling can ever be useful. You are saying that no, it can never be. I disagree.

The secondary issue is whether Linus runs kernel development in a good/proper/desirable/productive way. The major question here is the metric -- how do we decide what is a "good/... way". From your point of view, if you define a good way as "fun" for developers, then sure, it probably is possible to run the kernel in a more fun way.

From my point of view, the proof of the pudding is in the eating. Is the kernel a good piece of software? I would argue that it is, and that it is a remarkably successful piece of software. More, I would argue that Linus deserves a lot of credit for making it so. Given this, I'm suspicious of claims that Linus' way is "non-optimal", especially if there is the strong underlying current of "I, personally, don't like it".

comment by novalis · 2013-08-07T02:10:51.510Z · score: 2 (2 votes) · LW · GW

I hope you didn't take my position to be that yelling at people is always the right thing to do. There certainly is lots of yelling which is stupid, unjustified, and not useful in any sense.

The issue is whether yelling can ever be useful. You are saying that no, it can never be. I disagree.

No, the issue is whether Linus's yelling is useful, or, whether yelling is generally useful enough in free/open source projects that it outweighs the costs. Specifically, whether "Let’s drive away people unwilling to adopt that “git’r'done” attitude with withering scorn, rather than waste our time pacifying tender-minded ninnies and grievance collectors. That way we might continue to actually, you know, get stuff done." is good or bad advice.

Given this, I'm suspicious of claims that Linus' way is "non-optimal", especially if there is the strong underlying current of "I, personally, don't like it".

You should be even more suspicious, then, of Linus saying that it's necessary and proper, given that he's said that he, personally, does like it.

comment by Lumifer · 2013-08-07T02:16:17.493Z · score: 0 (2 votes) · LW · GW

...is good or bad advice

Do you think we have a basic difference in values or there's some evidence which might push one of us towards the other one's position?

You should be even more suspicious, then, of Linus

He has the huge advantage in that he actually delivered and continues to deliver. His method is known to work. Beware the nirvana fallacy.

comment by novalis · 2013-08-07T05:53:31.822Z · score: 1 (1 votes) · LW · GW

Do you think we have a basic difference in values or there's some evidence which might push one of us towards the other one's position?

That's a pretty good question.

Hypothesis: I think some of it might be a case of the "Typical Mind Fallacy". Maybe if Linus yelled at you, you wouldn't be bothered at all. But I know that my day would be ruined, and I would be less productive all week. So I assume that many people are like me, and you assume that many people are like you.

I would be curious about a controlled experiment, where free/open source project leaders were told to act more/less like Linus for a month to see what would happen. But I guess that's pretty unlikely to happen. And one confounder is that a lot of people might have already left (or never joined) the free/open source community because of attitudes like Linus's. We could measure project popularity (say, by number of stars on github) against some rating of a project's friendliness.

We might also survey programmers in general about what forces do/don't encourage them to work on specific free/open source projects.

I'm sure there are studies available of what sorts of management are effective generally. I'll ask my MBA friend. I did a two-minute Google search for studies about what causes people to leave their jobs generally, but found such a variety of conflicting data that I decided it would need more time than I have.

These things could definitely influence me to change my mind.

I also think there might be a value difference, in that I do value fun pretty highly. That's especially true in the free/open source world, where nobody's getting rich, and where a lot of people are volunteers (this last is less true on Linux than on some other projects, but perhaps part of that is that all of the volunteers have been driven away). But in general, I would like to enjoy the thing I spend eight (or twelve) hours a day on. And even if this did make me somewhat less productive than I would be if I was less happy, I don't really mind that much.

comment by Lumifer · 2013-08-07T16:48:30.219Z · score: 0 (2 votes) · LW · GW

I think some of it might be a case of the "Typical Mind Fallacy"

Yes, I think the Typical Mind Fallacy plays some role in this. But then let's explicitly go around it. Let's postulate that the population of, say, qualified programmers, is diverse. Some are shy wallflowers, wilting from any glance they perceive as disapproving, while others thrive in rough-and-tumble environments where you prove your solution is better by smashing your opponent into bits. Most are somewhere in between.

This diverse population would self-sort by preferences -- the wallflowers would gravitate towards polite, supportive, never-a-harsh-word environments (in our case, OSS projects), while the roar-and-smash types will gravitate towards the get-it-done-NOW-you-maggot environments. Since OSS projects are easy to create and it's easy for developers to move from project to project, the entire system should evolve towards an equilibrium where most people find the environment they're comfortable with and stick with it.

Now, that seems to me a fine way for the world to work. But would you object to such a state of the world, after all, there are some projects there which are "mean" and where you (and likely some other people) would be uncomfortable and unproductive?

I'm sure there are studies available of what sorts of management are effective generally.

Oh, there are piles and piles of those. The only problem is, they all come to different conclusions (with a strong dependency on the decade in which the study was done).

I also think there might be a value difference, in that I do value fun pretty highly.

Put yourself into manager's shoes and consider the difference between instrumental and terminal values.

You, an employee/contributor, value fun highly. That is a terminal value for you. Being productive is a secondary goal and may also be an instrumental value (some but not all people are not having fun if they see themselves as being unproductive).

Now, for a manager, the fun of his employees/contributors/developers is NOT a terminal value. It's only an instrumental value, the true terminal value is to Get Shit Done.

Do you see how that leads to different perspectives?

comment by novalis · 2013-08-07T18:25:19.574Z · score: 1 (1 votes) · LW · GW

Since OSS projects are easy to create and it's easy for developers to move from project to project

Creating projects is easy; forking is hard. And nobody wants to create a new kernel from scratch. Kernel hackers don't really have a lot of options. So I don't think your theoretical world has anything to do with the real world. Also, it seems to me that culture doesn't end up contained within a single project; Linux depends on GCC, for instance, so the Linux people have to interact with the GCC people. Which means that culture will bleed over. I was recently at a technical conference and a guy there said, "yeah, security is perhaps the only community that's less friendly than Linux kernel development." So now it's not just one project that's off-limits, but a whole field.

I also don't think there are necessarily any actual roar-and-smash types. That is, I think a fair number of people think it's fun to lay a beatdown on some uppity schmuck. I've experienced that myself, certainly. Why else would anyone bother wasting time arguing with creationists? But I'm not sure there are a lot of people who find it fun to be on the losing end of this. This is an extension of Arguments as Soldiers. When you're having a knock-down, drag-out fight with someone, it's harder to back down.

Notice that the original example of a person in that category was Mannie O'Kelly -- a fictional character.

Put yourself into manager's shoes

[Linus]:

And I do it partly (mostly) because it's who I am, and partly because I honestly despise being subtle or "nice".

(later in that email, he does give a nod to effectiveness, but that doesn't seem to be his primary motivator).

I think it remains an open question whether Linus's style is in fact better than the alternative from the "get shit done" perspective. And the original quote implied, without evidence, that in fact it is. Not really sure why this is a "rationality" quote.

comment by solipsist · 2013-08-07T19:04:35.130Z · score: 1 (1 votes) · LW · GW

I think it remains an open question whether Linus's style is in fact better than the alternative from the "get shit done" perspective. And the original quote implied, without evidence, that in fact it is.

I agree. My further comments shouldn't detract from this fact.

Creating a project is easy -- forking is hard. And nobody wants to create a new kernel from scratch. Kernel hackers don't really have a lot of options.

I don't agree. Every CS student and their mother wants to write their own OS. There are a lot of projects out there.

As to the effectiveness of the community, there's an important datapoint. BSD came before Linux, but Linux took over the world. I think this is generally attributed to a more vibrant community of developers.

comment by Lumifer · 2013-08-07T19:40:21.689Z · score: 0 (2 votes) · LW · GW

Creating projects is easy; forking is hard.

Forking is pretty easy -- it's getting people to follow your fork that's hard.

I also don't think there are necessarily any actual roar-and-smash types.

Well, there are certainly enough programmers who prefer to discuss code in terms of "only a brain-dead moron could write a library that does foo" or "why is this retarded object making three fucking calls to the database for each invocation", etc.

And while people generally don't find it fun to be on the losing side, this does not stop them from seeking and entering competitions and competitive spheres. Consider sports, e.g. boxing or martial arts.

[Linus]: And I do it partly (mostly) because it's who I am, and partly because I honestly despise being subtle or "nice".

Steelman this. I am pretty sure that in the North European culture being "subtle or nice" is dangerously close to being dishonest. You do not do anyone a favour by pretending he's doing OK while in reality he's clearly not doing OK. There is a difference between being direct and blunt - and being mean and nasty.

I think it remains an open question whether Linus's style is in fact better than the alternative from the "get shit done" perspective.

As I said, Linus' style is proven to work. We know it works well. An alternative style might work better or it might not -- we don't know.

I suspect you have a strong prior but no evidence.

comment by novalis · 2013-08-07T21:46:33.990Z · score: 5 (5 votes) · LW · GW

[Linus]: And I do it partly (mostly) because it's who I am, and partly because I honestly despise being subtle or "nice".

Steelman this. I am pretty sure that in the North European culture being "subtle or nice" is dangerously close to being dishonest. You do not do anyone a favour by pretending he's doing OK while in reality he's clearly not doing OK. There is a difference between being direct and blunt - and being mean and nasty.

I don't understand what you're saying here. Are you saying that anyone is proposing that Linus act in a way that he would see as dishonest? Because I don't think that's the proposal. Consider the difference between these three statements:

  • Only a fucking idiot would think it's OK to frobnicate a beezlebib in the kernel.
  • It is not OK to frobnicate a beezlebib in the kernel.
  • I would prefer that you not frobnicate a beezlebib in the kernel.

The first one is rude, the second one is blunt, the third one is subtle/tactful/whatever. Linus appears to think that people are asking for subtle, when instead they're merely asking for not-rude. Blunt could even be:

  • When you frobnicate a beezlebib, it fucks the primary hairball inverters, so never do that.

So he doesn't even have to stop cursing.

As I said, Linus' style is proven to work. We know it works well. An alternative style might work better or it might not -- we don't know.

There are many FOSS projects that don't use Linus's style and do work well. What's so special about Linux?

I suspect you have a strong prior but no evidence.

I've run a free/open source project; I tried to run it in a friendly way, and it worked out well (and continues to do so even after all of the original developers have left).

I can also point to Karl Fogel's book "Producing Open Source Software", where he says that rudeness shouldn't be tolerated. He's worked on a number of free/open source projects, so he's had the chance to experience a bunch of different styles.

comment by Lumifer · 2013-08-08T01:04:33.753Z · score: 1 (3 votes) · LW · GW

The first one is rude, the second one is blunt, the third one is subtle/tactful/whatever.

We keep hitting the Typical Mind Fallacy over and over again :-)

Let me offer you my interpretation: the first one is blunt and might or might not be rude, depending on what the social norms and context are (and on whether thinking about frobnicating the beezlebib does provide incontrovertible evidence of severe brain trauma). The second one is not blunt at all, it's entirely neutral. The third one is a slightly more polite version of neutral. Your fourth example is still neutral, by the way -- there's nothing particularly blunt about explaining why something should not be done (or about using four-letter words, for that matter).

To contrast I'll offer my examples:

  • (rude) You are a moron and can't code your way out of a wet paper bag! Stuff your code where the sun don't shine and never show it to me again!
  • (blunt) This is not working and will never work. You need to scrap this entirely and start from scratch.
  • (subtle) While this is a valuable contribution, we would really appreciate it if you went and twiddled the bogon emitter for us while we try to deal with the beezlebib frobnication on our own.

What's so special about Linux?

It's only the most successful open-source software ever. Otherwise, not much :-P

comment by novalis · 2013-10-11T23:20:51.045Z · score: 0 (0 votes) · LW · GW

I recently came across this, which seems to have some evidence in my favor (and some irrelevant stuff): http://www.bakadesuyo.com/2013/10/extraordinary-leader/

comment by Grant · 2013-08-07T22:08:22.066Z · score: 0 (0 votes) · LW · GW

A more direct approach might be: "no patches which frobnicate a beezlebib will be accepted".

There are many FOSS projects that don't use Linus's style and do work well. What's so special about Linux?

I would say the size (in terms of SLOC count), scope (everything from TVs to supercomputers), lack of an equivalent substitute (MySQL or Postgres? Apache or Nginx? Linux or... BSD?), importance of correctness (it's the kernel, stupid), and commercial involvement (Google, Oracle, etc.) make it very different from most FOSS projects. Mostly I'd say the size, complexity and very low tolerance for bugs.

I have no idea if Linus's attitude is helpful or not. I tend to think he could do better with more direct, polite approaches like the above, but I don't hold that belief very strongly.

comment by Document · 2013-08-07T20:16:38.843Z · score: 0 (0 votes) · LW · GW

Steelman this.

Posts like this encourage me to remark that I want to have a website where I feel free to respond to others' actual words, not by how I'd rationalize those words if I were personally committed to them.

comment by Eugine_Nier · 2013-08-09T06:48:14.577Z · score: -2 (4 votes) · LW · GW

I think some of it might be a case of the "Typical Mind Fallacy". Maybe if Linus yelled at you, you wouldn't be bothered at all. But I know that my day would be ruined, and I would be less productive all week.

The right comparison is to compare that to how much you'd be bothered if you had to clean up the mess left by an incompetent coworker. Or having to deal with an incompetent bogon in middle management.

comment by novalis · 2013-08-09T17:29:26.980Z · score: 1 (1 votes) · LW · GW

The right comparison is to compare that to how much you'd be bothered if you had to clean up the mess left by an incompetent coworker. Or having to deal with an incompetent bogon in middle management.

Unsurprisingly, I've had to deal with both of these things. It has never seemed to me that yelling at someone could make them more competent. Educating them, or firing them and replacing them seems like a better plan.

comment by Eugine_Nier · 2013-08-07T01:36:51.402Z · score: -2 (6 votes) · LW · GW

"Thanks for standing up for politeness/respect. If it works, I'll start doing Linux kernel dev. It's been too scary for years."

The issue is whether the person in question would have been a productive contributor.

comment by Eugine_Nier · 2013-08-07T01:38:38.131Z · score: -2 (2 votes) · LW · GW

Linux hasn't exactly taken over the world yet, so there's definitely room for improvement.

Well, Bill Gates and Steve Jobs have similar reputations.

comment by novalis · 2013-08-07T02:05:09.053Z · score: 1 (1 votes) · LW · GW

Bill Gates failed to create an organization that would thrive in his absence. We'll see how Steve Jobs did in a few more years (it seems likely that he did better, but he also had the famous "reality distortion field", which Linus doesn't). Steve Jobs also got kicked out of his own company for a bunch of years.

comment by Eugine_Nier · 2013-08-07T03:36:08.570Z · score: -2 (4 votes) · LW · GW

Steve Jobs also got kicked out of his own company for a bunch of years.

During which time the company tanked.

In any case, your argument was that Linus might have better succeeded in "taking over the world" if he had used a less confrontational style. My point is that the people who did "take over the world" used the same style.

comment by Estarlio · 2013-08-07T12:43:29.946Z · score: 2 (2 votes) · LW · GW

Note that management of any kind involves creating incentives for your employees/subordinates/those-who-listen-to-you. The incentives include both carrots and sticks and sticks are punishments and are meant to be so.

Punishments seem to have rapidly decreasing returns, especially given the availability of alternatives that are less abusive. Otherwise we'd threaten people when we wanted to make them more productive, rather than rewarding them - which most of the time we don't above a low level of performance.

comment by NancyLebovitz · 2013-08-08T02:02:44.989Z · score: 1 (1 votes) · LW · GW

This is a shift of topic-- heaping scorn is one particular sort of punishment. Firing someone who isn't working after having given them several warnings is a punishment, but it isn't the same as a high-flame environment.

comment by Lumifer · 2013-08-07T16:22:02.360Z · score: 1 (1 votes) · LW · GW

Punishments seem to have rapidly decreasing returns, especially given the availability of alternatives that are less abusive.

I don't understand the point that you are arguing.

Basically all human groups -- workplaces, societies, countries, knitting circles -- have punishments for members who do unacceptable things. The punishments range from a stern talking to, ostracism, or ejection from the group to imprisonment, torture, and killing.

In which real-life work setting will you not be punished for arbitrarily not coming to work, for consistently turning in shoddy/unacceptable results, or for maliciously disrupting the workplace?

comment by Estarlio · 2013-08-07T17:35:56.669Z · score: 2 (2 votes) · LW · GW

Of course all societies have punishments, but that doesn't address the point you were responding to, which was that Linus was more on the power-play end of the spectrum. The ratio of reward to punishment, your leverage as determined by the availability of viable alternatives, matters in determining which end of that spectrum you're on.

And that has implications for the quality of work you can get from people - while you may be punished for blatantly shoddy work, you're not going to be punished for not doing your best if people don't know what that is. The threat of being fired can only make people work so hard.

comment by Lumifer · 2013-08-07T17:41:49.203Z · score: 0 (2 votes) · LW · GW

Linus was more on the power-play end of the spectrum. The ratio of reward to punishment, your leverage as determined by the availability of viable alternatives, matters in determining which end of that spectrum you're on.

Um. How do you determine the ratio of reward to punishment for Linux kernel developers?

Also, whether you engage in power play is determined by your intent, not by ratio or leverage. Those determine the consequences (accept/revolt/escape) but not whether the original critique was legitimate or purely status-gaining.

comment by AndHisHorse · 2013-08-07T17:48:46.366Z · score: 0 (0 votes) · LW · GW

You bring up some good points. I would go so far as to say that given a) the amount of subjective interpretation from the observers, b) the limited number of first-hand witnesses, and c) the difficulty of comparing the small number of sample societies for which we have observers, in the absence of evidence roughly the strength of a formal study this thread may not be able to reach an agreeable conclusion for lack of data.

comment by Vaniver · 2013-08-06T17:32:37.923Z · score: 0 (2 votes) · LW · GW

The real question is whether it's fun for others.

The claim, as I understand it, is that the culture trades off fun for productivity. A common example given is Apple, where Steve Jobs was a hawk who excoriated his underlings, and thus induced them to create beautiful, world-conquering products.

comment by Eugine_Nier · 2013-08-07T01:09:41.677Z · score: -1 (3 votes) · LW · GW

Also that the culture selects for the people who find being productive fun.

comment by Risto_Saarelma · 2013-08-02T11:26:22.375Z · score: -1 (5 votes) · LW · GW

While the more socially enlightened attitudes lead to very effective and high signal-to-noise conflict handling, as can be observed on Tumblr and MetaFilter?

comment by [deleted] · 2013-08-02T21:47:53.414Z · score: 9 (19 votes) · LW · GW

Here's my thought process upon reading this. (Initially, I assumed “git 'er done” meant something like ‘women are unimportant except as sex objects’, and I misread “unwilling” as “willing”.)

  • ‘How come that guy, who when talking about sex on his blog gets mind-killed to the point of forgetting how to do high-school maths, makes so much sense everywhere else? Maybe he was saner when younger, then got worse with age, or something.’ I follow the link, expecting it to go to somewhere other than Armed and Dangerous, e.g. somewhere on catb.org.
  • I notice the link does go to his blog, and to a recent post at that. ‘So he is still capable of talking sense about such topics after all?’ I notice I am confused.
  • I realize he said “unwilling” not “willing”. ‘Er... Nope. He's crazy as usual.’
  • Appalled at the idea that anyone, even ESR, would say anything like that in public with an almost straight face, I decide to look “git 'er done” up. ‘Oh, that makes perfect sense, and I agree with him. But that's not about sex (except insofar as the cut-through-the-bullshit communication style is less rare among men than among women), so that doesn't actually show he's not mind-killed beyond all repair.’

(Anyway, if an adult woman complains because you called her a girl, the course of action that leaves you the most time to get stuff done is apologizing, not doing that again, and getting back to work, not endlessly whining about how ridiculous the PC crowd are.)

comment by Eugine_Nier · 2013-08-03T05:11:35.923Z · score: -1 (13 votes) · LW · GW

(Anyway, if an adult woman complains because you called her a girl, the course of action that leaves you the most time to get stuff done is apologizing, not doing that again, and getting back to work, not endlessly whining about how ridiculous the PC crowd are.)

Not necessarily; it might just encourage further frivolous complaints.

comment by [deleted] · 2013-08-04T15:15:38.914Z · score: 4 (8 votes) · LW · GW

As opposed to feeding trolls, which is widely known to be extremely effective in making them shut up?

comment by wedrifid · 2013-08-04T18:00:12.852Z · score: 5 (9 votes) · LW · GW

As opposed to feeding trolls, which is widely known to be extremely effective in making them shut up?

In the context the group you position here as 'trolls' are described as frivolous complainers. You advocate apologising and complying. Eugine is correct in pointing out that this can represent a perverse incentive (both in theory and in often observed practice).

comment by [deleted] · 2013-09-21T18:54:05.194Z · score: 0 (2 votes) · LW · GW

I dunno... if someone's goal is to fuel a flamewar to discredit you, it would seem to me that ranting about that is more likely to make their day than just reacting as though they had pointed out you misspelled their name and then going back to your business.

comment by NancyLebovitz · 2013-08-03T00:41:17.080Z · score: 4 (4 votes) · LW · GW

The courtesy rules at LW are pretty strict. I don't know whether things are different at CFAR and MIRI, but does insufficient scorn interfere with things getting done?

comment by Eugine_Nier · 2013-08-03T04:35:43.460Z · score: -1 (5 votes) · LW · GW

We use the karma system for that.

comment by NancyLebovitz · 2013-08-04T15:37:08.317Z · score: 3 (3 votes) · LW · GW

LW uses a karma system. I assume that CFAR and MIRI include a lot of in person and private conversation which isn't subject to a karma system.

How do you think the effectiveness of cultures which have karma + courtesy compares to cultures which permit flaming?

comment by NancyLebovitz · 2013-08-07T17:28:11.132Z · score: 3 (3 votes) · LW · GW

In the thread, there were at least a couple of examples of high-verbal-abuse programming cultures (Apple and Linux) which get significant amounts of useful work done, and I think there were more.

I don't believe that scorn just gets dumped on people who don't have a git'r'done attitude-- there have certainly been flame wars about the best programming language and operating systems, and no doubt about other legitimate differences of opinion.

Still, I'm wondering about successful programming environments which enforce courtesy rules. The only one I can think of is Dreamwidth, from its self-description. Running a LiveJournal clone isn't nothing, but it also isn't as much as inventing new products. Any others?

comment by NancyLebovitz · 2013-08-08T02:06:58.939Z · score: 0 (0 votes) · LW · GW

So I asked a friend about courteous programming environments, and he mentioned a couple that he's worked at:

Webmethods, now at Software AG
Managed Objects, renamed as Novell Business Service Management

Anyone know where Google fits on the courtesy to flame spectrum? How about Steam?

comment by Lumifer · 2013-08-08T02:17:00.587Z · score: 0 (0 votes) · LW · GW

There is a bit of a difference between commercial, for-profit companies (especially public ones) and FOSS projects.

comment by Ben Pace (Benito) · 2013-08-28T10:36:17.759Z · score: 2 (6 votes) · LW · GW

This... theory of female promiscuity has been championed by the anthropologist Sarah Blaffer Hrdy. Hrdy has described herself as a feminist sociobiologist, and she may take a more than scientific interest in arguing that female primates tend to be "highly competitive... sexually assertive individuals." Then again, male Darwinians may get a certain thrill from saying males are built for lifelong sex-a-thons. Scientific theories spring from many sources. The only question in the end is whether they work.

Robert Wright, The Moral Animal

comment by Decius · 2013-08-07T17:54:11.184Z · score: 2 (4 votes) · LW · GW

I see small examples everywhere I look; they're just too specific to point the way to a general solution.

James Portnow/Daniel Floyd

comment by linkhyrule5 · 2013-08-24T06:55:11.234Z · score: 1 (1 votes) · LW · GW

More of an anti-death quote, but:

“Must I accept the barren Gift?
-learn death, and lose my Mastery?
Then let them know whose blood and breath
will take the Gift and set them free:
whose is the voice and whose the mind
to set at naught the well-sung Game-
when finned Finality arrives
and calls me by my secret Name.

Not old enough to love as yet,
but old enough to die, indeed-
-the death-fear bites my throat and heart,
fanged cousin to the Pale One's breed.
But past the fear lies life for all-
perhaps for me: and, past my dread,
past loss of Mastery and life,
the Sea shall yet give up Her dead!

.....

So rage, proud Power! Fail again,
and see my blood teach Death to die!”
-- The Silent Lord, Deep Wizardry, Diane Duane

comment by Panic_Lobster · 2013-08-23T06:41:20.968Z · score: 1 (1 votes) · LW · GW

Faced with the task of extracting useful future out of our personal pasts, we organisms try to get something for free (or at least at bargain price): to find the laws of the world -- and if there aren't any, to find approximate laws of the world -- anything at all that will give us an edge. From some perspectives it appears utterly remarkable that we organisms get any purchase on nature at all. Is there any deep reason why nature should tip its hand, or reveal its regularities to casual inspection? Any useful future-producer is apt to be something of a trick -- a makeshift system that happens to work, more often than not, a lucky hit on a regularity in the world that can be tracked. Any such lucky anticipators Mother Nature stumbles over are bound to be prized, of course, if they improve an organism's edge.

--Daniel Dennett, Consciousness Explained

comment by ChristianKl · 2013-08-11T12:12:35.194Z · score: 1 (7 votes) · LW · GW

Science becomes an extra-neural extension of the human nervous system. We might expect the structure of the nervous system to throw some light on the structure of science; and, vice versa, the structure of science might elucidate the working of the human nervous system.

--Alfred Korzybski, Science and Sanity, p. 376 (1933)

comment by Jayson_Virissimo · 2013-08-11T17:54:18.750Z · score: 0 (2 votes) · LW · GW

Interesting, if indeed it is true. I'm not sure how this is supposed to be a rationality quote though.

comment by ChristianKl · 2013-08-11T23:47:04.192Z · score: 0 (0 votes) · LW · GW

It's a quote about thinking about how to think. It's not the standard way of thinking around here, but thinking interesting thoughts about thinking encourages rationality.

comment by JackLight · 2013-08-11T20:20:05.289Z · score: 0 (0 votes) · LW · GW

In the sense that you should be searching for the truth in both directions.

comment by CronoDAS · 2013-08-06T00:04:20.287Z · score: 1 (1 votes) · LW · GW

Fixed, thanks.

comment by CronoDAS · 2013-08-05T06:33:38.007Z · score: 1 (21 votes) · LW · GW

Fuck every cause that ends in murder and children crying.

-- Iain M. Banks

comment by wedrifid · 2013-08-05T12:21:51.889Z · score: 7 (9 votes) · LW · GW

Fuck every cause that ends in murder and children crying.

I suppose I somewhat appreciate the sentiment. I note that labelling the killing 'murder' has already amounted to significant discretion. Killings that are approved of get to be labelled something nicer sounding.

comment by KnaveOfAllTrades · 2013-08-05T12:00:56.444Z · score: 5 (5 votes) · LW · GW

Does this pay rent in policy changes? It seems probable that existing policy positions will already determine the contexts in which we might choose to apply this quote, so that the quote will only be generating the appearance of additional evidential weight, but will in fact result in double-counting if we use its applicability as evidence for or against a proposal, because we already chose to use the quote because we disagreed with the proposal. For example: 'This imperialist intervention is wrong—Fuck every cause that ends in murder and children crying.' Is the latter clause doing any work?

(First version of this comment:

Does this pay rent in suggested policies? It feels like under all plausible interpretations, it's at best 'I'm so righteous!' and possibly other things.)

comment by wedrifid · 2013-08-05T12:20:07.269Z · score: 7 (9 votes) · LW · GW

Does this pay rent in suggested policies?

Yes. It rules out all sorts of policies, including good ones. It likely rules out murdering Hitler to prevent a war, especially if that requires killing guards in order to get to him.

comment by KnaveOfAllTrades · 2013-08-05T12:31:16.575Z · score: 3 (3 votes) · LW · GW

Upvoted; wording was bad. Edited.

comment by wedrifid · 2013-08-05T13:12:51.069Z · score: 3 (3 votes) · LW · GW

Upvoted; wording was bad. Edited.

I agree entirely with your new wording. This quote seems to be the sort of claim to bring out conditionally against causes we oppose but conveniently ignore when we support the cause.

comment by Bayeslisk · 2013-08-06T16:58:01.975Z · score: 4 (6 votes) · LW · GW

As much as I love Banks, this sounds like a massive set of applause lights, complete with sparkling Catherine wheels. Sometimes, you have to do shitty things to improve the world, and sometimes the shitty things are really shitty, because we're not smart enough to find a better option fast enough to avoid the awful things resulting from not improving at all. "The perfect must not be the enemy of the good" and so on.

comment by Lumifer · 2013-08-06T18:24:42.075Z · score: 5 (5 votes) · LW · GW

Sometimes, you have to do shitty things to improve the world

And sometimes you do shitty things because you think they will improve the world, but hey, even though the road to hell is very well-paved already, there's always a place for another cobblestone...

The heuristic of this quote is a firewall against a runaway utility function. If you convince yourself that something will generate gazillions of utilons, you'd be willing to pay a very high price to reach it, even though your estimates might be in error. This heuristic puts a cap on the price.

comment by Bayeslisk · 2013-08-06T19:32:06.250Z · score: 5 (5 votes) · LW · GW

It's good as an exhortation to build a Schelling fence, but without that sentiment, it's pretty hollow. Reading the context, though, I agree with you: it's a reminder that feeling really sure about something and being willing to sacrifice a lot of you and other (possibly unwilling) people to create a putative utopia probably means you're wrong.

"Sorrow be damned, and all your plans. Fuck the faithful, fuck the committed, the dedicated, the true believers; fuck all the sure and certain people prepared to maim and kill whoever got in their way; fuck every cause that ended in murder and a child screaming. She turned and ran..."

(As an aside, I now have the perfect line for if I ever become an evil mastermind and someone quotes that at me: "But you see, murder and children screaming is only the beginning!")

comment by Eugine_Nier · 2013-08-07T01:27:28.585Z · score: -1 (3 votes) · LW · GW

The problem is that there are better heuristics out there. Look up "just war theory" for starters.

comment by katydee · 2013-08-05T12:58:12.268Z · score: 4 (10 votes) · LW · GW

This seems better-suited for MoreEmotional than LessWrong.

comment by David_Gerard · 2013-08-05T20:55:22.189Z · score: 4 (4 votes) · LW · GW

I think this is a useful heuristic because humans are just not good at calculating this stuff. Ethical Injunctions suggests that you do in fact check with your emotions when the numbers say something novel. (This is why I'm sceptical about deciding on numbers pulled out of your arse rather than pulling the decision directly out of your arse.)

comment by katydee · 2013-08-06T02:43:53.069Z · score: 1 (3 votes) · LW · GW

I don't think Banks even believed that, though. Several of his books certainly seem to be evidence to the contrary.

comment by Lumifer · 2013-08-06T18:19:43.034Z · score: 3 (9 votes) · LW · GW

I wonder if people here realize how anti-utilitarianism this quote is :-)

comment by Document · 2013-08-06T18:21:24.966Z · score: 0 (2 votes) · LW · GW

"Murder and children crying" aren't allowed to have negative weight in a utility function?

comment by Lumifer · 2013-08-06T18:26:51.098Z · score: 0 (4 votes) · LW · GW

It's not about weight, it's about an absolute, discontinuous, hard limit -- regardless of how many utilons you can pile up on the other end of the scale.

comment by Bayeslisk · 2013-08-06T20:22:28.954Z · score: 2 (4 votes) · LW · GW

Well, no. It's against the promise of how many utilons you can pile up on the other arm of the scale, which may well not pay off at all. I'm reminded of a post here at some point whose gist was "if your model tells you that your chances of being wrong are 3^^^3:1 against, it is more likely that your model is wrong than that you are right."
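The arithmetic behind that gist can be sketched with made-up numbers of my own (not from the post being paraphrased): however extreme the odds a model claims, the mundane chance that the model itself is broken dominates.

```python
# Toy illustration: a model claims astronomically small odds of being wrong,
# but the probability that the *model* is mistaken swamps that claim.

stated_p_wrong = 1e-30   # stands in for the model's claimed ~1/3^^^3 error odds
p_model_broken = 1e-6    # a generous one-in-a-million chance of a modeling mistake

# If the model is broken, assume (conservatively) a coin-flip chance the claim fails.
p_actually_wrong = p_model_broken * 0.5 + (1 - p_model_broken) * stated_p_wrong

print(p_actually_wrong)  # about 5e-7, vastly larger than the stated 1e-30
```

So the effective odds of error are set by your confidence in the model, not by the number the model prints out.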

comment by AndHisHorse · 2013-08-06T20:34:21.178Z · score: 0 (2 votes) · LW · GW

Yes, but the quote in no way concerns itself with the probability that such a plan will go wrong; rather, it explicitly includes even those with a wide margin of error, including "every" plan which ends in murder and children crying.

comment by Decius · 2013-08-07T17:55:42.707Z · score: 1 (1 votes) · LW · GW

If your plan ends in murder and children crying, what happens if your plan goes wrong?

comment by Document · 2013-08-10T02:55:05.020Z · score: 1 (1 votes) · LW · GW

The murder and children crying fail to occur in the intended quantity?

comment by linkhyrule5 · 2013-08-10T01:53:42.771Z · score: 0 (2 votes) · LW · GW

If your plan requires you to get into a car with your family, what happens if you crash?

comment by Said Achmiz (SaidAchmiz) · 2013-08-10T02:24:07.880Z · score: 1 (1 votes) · LW · GW

Well, getting into a car with your family is not inherently bad, so it's not a very good parallel... but if your overall point is that "expected value calculations do not retroactively lose mathematical validity because the world turned out a certain way", then that's definitely true.

I think that the "what if it all goes wrong" sort of comment is meant to trigger the response of "oh god... it was all for nothing! Nothing!!!". Which is silly, of course. We murdered all those people and made those children cry for the expected value of the plan. Complaining that the expected value of an action is not equal to the actual value of the outcome is a pretty elementary mistake.

comment by Decius · 2013-08-10T02:19:05.368Z · score: 0 (0 votes) · LW · GW

The features of my plan which mitigate the result of the plan going wrong kick in, and the damage is mitigated. I don't go on vacation, despite the nonrefundable expenses incurred. The plan didn't end in death and sadness, even if a particular implementation did.

When the plan ends in murder and children crying, every failure of the plan results in a worse outcome.

comment by wedrifid · 2013-08-10T02:27:54.643Z · score: 1 (1 votes) · LW · GW

When the plan ends in murder and children crying, every failure of the plan results in a worse outcome.

This does not seem to follow. Failure of the plan could easily involve failure to cause the murder or crying to happen for a start. Then there is the consideration that an unspecified failure has completely undefined behaviour. Anything could happen, from extinction or species-wide endless torture to the outright creation of a utopia.

comment by glomerulus · 2013-08-10T02:41:16.183Z · score: 2 (2 votes) · LW · GW

For most people, murder and children crying are a bad outcome for a plan, but if they're what the planner has selected as the intended outcome, the other probable outcomes are presumably worse. Theoretically, the plan could "fail" and end in an outcome with more utilons than murder and children crying, but those failures are obviously improbable: because if they weren't, then the planner would presumably have selected them as the desired plan outcome.

comment by Decius · 2013-08-10T03:13:46.530Z · score: 0 (0 votes) · LW · GW

Or at least have the foresight to see that they have become likely and alter the plan such that it now results in utopia instead of murder.

comment by Decius · 2013-08-10T03:12:43.200Z · score: 0 (0 votes) · LW · GW

I think we need to examine what we mean by 'fail'.

A plan does not fail simply because the actual outcome is different from the outcome judged most likely; a plan fails when a contingency not prepared for occurs which prevents the intended outcome from being realized, or when an explicit failure state of the plan is reached.

If I plan to go on a vacation and prepare for a major illness by deciding that I will cancel the vacation, then experiencing a major illness might cause the plan to fail- because I have identified that as a failure state. The more important the object of the plan, the harder I will work in the planning stage to minimize the likelihood of ending up in a failure state. (When sending a probe to Mars, for example, I want to be prepared such that everything I can think of that might go wrong along the way still yields a success condition.)

comment by Bayeslisk · 2013-08-06T20:36:22.102Z · score: 0 (2 votes) · LW · GW

It's not a matter of "the plan might go wrong", it's a matter of "the plan might be wrong", and the universal part comes from "no, really, yours too, because you aren't remotely special."

comment by linkhyrule5 · 2013-08-10T01:54:35.744Z · score: 0 (0 votes) · LW · GW

Seems like one of those rules that apply to humans but not to a perfect rationalist, then.

comment by Bayeslisk · 2013-08-10T05:47:42.390Z · score: 1 (1 votes) · LW · GW

Sounds about right to me.

comment by wedrifid · 2013-08-07T06:44:39.362Z · score: -1 (7 votes) · LW · GW

I wonder if people here realize how anti-utilitarianism this quote is :-)

You seem to be implying that people here should care about things being anti-utilitarianism. They shouldn't. Utilitarianism refers to a group of largely abhorrent and arbitrary value systems.

It is also contrary to virtually all consequentialist value systems of the kind actually held by people here or extrapolatable from humans. All consequentialist systems that match the quote's criteria for not being 'Fucked' are abhorrent.

comment by Lumifer · 2013-08-07T16:13:25.970Z · score: 1 (7 votes) · LW · GW

It is also anti-consequentialism.

It is not. "Murder and children crying" here are not means to an end, they are consequences as well. Maybe not intended consequences, maybe side effects ("collateral damage"), but still consequences.

I see no self-contradiction in a consequentialist approach which just declares certain consequences (e.g. "murder and children crying") to be unacceptable.

comment by AndHisHorse · 2013-08-07T16:47:22.451Z · score: 2 (4 votes) · LW · GW

There is nothing about consequentialism which distinguishes means from ends. Anything that happens is an "end" of the series of actions which produced it, even if it is not a terminal step, even if it is not intended.

When wedrifid says that the quote is "anti-consequentialism", they are saying that it refuses to weigh all of the consequences - including the good ones. The negativity of children made to cry does not obliterate the positivity of children prevented from crying, but rather must be weighed against it, to produce a sum which can be negative or positive.

To declare a consequence "unacceptable" is to say that you refuse to be consequentialist where that particular outcome is involved; you are saying that such a consequence crashes your computation of value, as if it were infinitely negative and demanded some other method of valuation, which did not use such finicky things as numbers.

But even if there is a value which is negative, and 3^^^3 times greater in magnitude than any other value, positive or negative, its negation will always be of equal and opposite value, allowing things to be weighed against each other once again. In this example, a murder might be worth -3^^^3 utilons - but preventing two murders by committing one results in a net sum of +3^^^3 utilons.

The only possible world in which one could reject every possible cause which ends in murder or children crying is one in which it is conveniently impossible for such a cause to lead to positive consequences which outweigh the negative ones. And frankly, the world we live in is not so convenient as to divide itself perfectly into positive and negative acts in such a way.
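The two decision rules being contrasted here can be sketched in a few lines (all numbers invented for illustration): a plain sum over weighed consequences, versus Lumifer's hard cap, where a vetoed outcome makes the plan unacceptable no matter what is on the other arm of the scale.

```python
# Toy sketch of the two evaluation rules under discussion.
# Outcome names and utilon values are made up for illustration.

def sum_utilons(outcomes):
    """Plain consequentialist rule: every consequence is weighed and summed."""
    return sum(outcomes.values())

def veto_utilons(outcomes, vetoed):
    """Rule with an absolute cap: any vetoed consequence makes the plan -inf."""
    if any(name in vetoed for name in outcomes):
        return float('-inf')
    return sum(outcomes.values())

plan = {"murder": -1000, "murders_prevented": +2000}

print(sum_utilons(plan))               # 1000: net-positive under summation
print(veto_utilons(plan, {"murder"}))  # -inf: ruled out under the veto rule
```

The disagreement in this subthread is exactly whether the second function still counts as "consequentialism": it looks only at outcomes, but refuses to let any pile of positive terms outweigh a vetoed one.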

comment by Lumifer · 2013-08-07T17:16:35.475Z · score: 1 (3 votes) · LW · GW

There is nothing about consequentialism which distinguishes means from ends.

Wikipedia: Consequentialism is the class of normative ethical theories holding that the consequences of one's conduct are the ultimate basis for any judgment about the rightness of that conduct. ... Consequentialism is usually distinguished from deontological ethics (or deontology), in that deontology derives the rightness or wrongness of one's conduct from the character of the behaviour itself rather than the outcomes of the conduct.

The "character of the behaviour" is means.

To declare a consequence "unacceptable" is to say that you refuse to be consequentialist where that particular outcome is involved; you are saying that such a consequence crashes your computation of value

Consequentialism does not demand "computation of value". It only says that what matters is outcomes; it does not require that the outcomes be comparable or summable. I don't see that saying that certain outcomes are unacceptable, full stop (= have negative infinity value), contradicts consequentialism.

comment by AndHisHorse · 2013-08-07T17:41:07.292Z · score: 0 (2 votes) · LW · GW

You have a point, there are means and ends. I was using the term "means" as synonymous with "methods used to achieve instrumental ends", which I realize was vague and misleading. I suppose it would be better to say that consequentialism does not concern itself with means at all, and rather considers every outcome, including those which are the result of means, to be an end.

As for your other point, I'm afraid that I find it rather odd. Consequentialism does not need to be implemented as having implicitly summable values, much as rational assessment does not require the computation of exact probabilities, but any moral system must be able to implement comparisons of some kind. Even the simplest deontologies must be able to distinguish "good" from "bad" moral actions, even if all "good" actions are equal, and all "bad" actions likewise.

Without the ability to compare outcomes, there is no way to compare the goodness of choices and select a good plan of action, regardless of how one defines "good". And if a given outcome has infinitely negative value, then its negation must have infinitely positive value - which means that the negation is just as desirable as the original outcome is undesirable.

comment by Lukas_Gloor · 2013-08-08T19:18:46.001Z · score: 1 (1 votes) · LW · GW

Your point is perfectly valid, I think. Every action-guiding set of principles is ultimately all about consequences. Deontologies can be "consequentialized", i.e. expressed only through a maximization (or minimization) rule of some goal-function, by a mere semantic transformation. The reason why this is rarely done is, I suspect, because people get confused by words, and perhaps also because consequentializing some deontologies makes it more obvious that the goals are arbitrary or silly.

The traditional distinction between consequentialism and non-consequentialism does not come down to the former only counting consequences -- both do! The difference is rather about what sort of consequences count. Deontology also counts how consequences are brought about, that becomes part of the "consequences" that matter, part of whatever you're trying to minimize. "Me murdering someone" gets a different weight than "someone else murdering someone", which in turn gets a different weight from "letting someone else die through 'natural causes' when it could be easily prevented".

And sometimes it gets even weirder, the doctrine of double effect for instance draws a morally significant line between a harmful consequence being necessary for the execution of your (well-intended) aim, or a "mere" foreseen -- but still necessary(!) -- side-effect of it. So sometimes certain intentions, when acted upon, are flagged with negative value as well.

And as you note below, deontologies sometimes attribute infinite negative value to certain consequences.

comment by wedrifid · 2013-08-07T16:54:06.738Z · score: 1 (1 votes) · LW · GW

I see no self-contradiction in a consequentialist approach which just declares certain consequences (e.g. "murder and children crying") to be unacceptable.

Pardon me. I left off the technical qualifier for the sake of terseness. I have previously observed that all deontologial value systems can be emulated by (suitably contrived) consequentialist value systems and vice-versa so I certainly don't intend to imply that it is impossible to construct a consequentialist morality implementing this particular injunction. Edited to fix.

It is also contrary to virtually all consequentialist value systems of the kind actually held by people here or extrapolatable from humans. All consequentialist systems that match this criterion are abhorrent.

comment by Randy_M · 2013-08-06T21:32:06.296Z · score: 2 (2 votes) · LW · GW

Is that both, or either-or? Because if it is either-or, it may include such atrocities as going to bed on time and eating vegetables. If it is both, it seems to imply killing those not as beloved by children may be acceptable.

comment by [deleted] · 2013-08-06T10:40:54.102Z · score: 1 (1 votes) · LW · GW

That's kind-of a good point, but I seriously doubt that that quote would be that effective in making people get it who don't already.

comment by RobertLumley · 2013-08-07T14:46:48.828Z · score: 0 (2 votes) · LW · GW

This seems like a poor strategy by simply considering temper tantrums, let alone all of the other holes in this. (The first half of the comment though, I can at least appreciate.)

comment by Document · 2013-08-07T02:57:44.146Z · score: 0 (2 votes) · LW · GW

I, too, support the cause of opposing every such cause.

comment by bouilhet · 2013-08-04T17:32:24.310Z · score: 1 (9 votes) · LW · GW

Occam's razor is, of course, not an arbitrary rule nor one justified by its practical success. It simply says that unnecessary elements in a symbolism mean nothing.

Signs which serve one purpose are logically equivalent, signs which serve no purpose are logically meaningless.

  • Ludwig Wittgenstein, Tractatus Logico-Philosophicus 5.47321
comment by BT_Uytya · 2013-08-03T13:16:46.404Z · score: 1 (7 votes) · LW · GW

Sages and scientists heard those words, and fear seized them. However, they disbelieved the horrible prophecy, deeming the possibility of perdition too improbable. They lifted the starship from its bed, shattered it into pieces with platinum hammers, plunged the pieces into hard radiation, and thus the ship was turned into myriads of volatile atoms, which are always silent, for atoms have no history; they are identical, whatever origin they have, whether it be bright suns, dead planets or intelligent creatures, — virtuous or vile — for raw matter is the same in the Cosmos, and it is other things you should be afraid of.

Still, even atoms were gathered, frozen into one clod and sent into distant sky. Only then were Enterites able to say "We are saved. Nothing threatens us now".

-- Stanislaw Lem, White Death

(as far as I know, this sweet short story has never been translated into English; I translated this passage myself from my Russian copy, so I will be glad if someone corrects my mistakes)

comment by RolfAndreassen · 2013-08-03T20:54:44.036Z · score: 4 (4 votes) · LW · GW

Not quite seeing the applicability as a rationality quote; but in "it's bed" you should drop the apostrophe.

comment by jasonsaied · 2013-08-08T05:07:40.890Z · score: 1 (1 votes) · LW · GW

I'd say it's highlighting the human tendency to ignore and escape from bad news. Instead of facing the prophecy, they just destroyed the ship that delivered it to them and told themselves they were safe.

comment by BT_Uytya · 2013-08-09T23:26:41.440Z · score: 0 (0 votes) · LW · GW

Actually, the prophecy was about the ship; the spaceship crashed into Aragena, their planet, and then the curious inhabitants looked inside (and found nothing dangerous). After that the messenger of their King came and told them that they were all doomed.

And they indeed were.

comment by linkhyrule5 · 2013-08-05T03:28:35.170Z · score: 1 (1 votes) · LW · GW

I imagine there's an implied "and then the Reapers came" or something.

comment by BT_Uytya · 2013-08-09T23:09:13.597Z · score: 0 (0 votes) · LW · GW

I'm probably incredibly late with this, but:

a) thank you, embarrassing mistake fixed

b) I was fascinated with the "volatile atoms" bit. It feels like a line taken from a poem on reductionism. I'm not sure I managed to convey it, because I'm not well versed in English fiction and poetry.

Also, I liked their safety measures; it's a pity they didn't work in the end.

comment by [deleted] · 2013-08-03T10:17:11.929Z · score: 1 (5 votes) · LW · GW

"[W]hen you have eliminated the impossible, whatever remains, however improbable, must be the truth." -- Sherlock Holmes

comment by wedrifid · 2013-08-03T16:16:23.386Z · score: 9 (9 votes) · LW · GW

"[W]hen you have eliminated the impossible, whatever remains, however improbable, must be the truth." -- Sherlock Holmes

Technically true. Some notable 'improbable' things that remain are the chance that you screwed up your thinking or measuring somewhere or that you are hallucinating. (I agree denotatively but am wary about the connotations.)

comment by Ben Pace (Benito) · 2013-08-03T12:11:16.673Z · score: 5 (13 votes) · LW · GW

"When you have updated on the evidence, whatever is the most probable, however socially unacceptable, must be believed."

comment by fubarobfusco · 2013-08-06T16:21:57.745Z · score: 14 (14 votes) · LW · GW

"When you have updated on the evidence, whatever is the most probable, must be believed, even if it is uncontroversial, mundane, and doesn't make startling conversation at parties."

comment by KnaveOfAllTrades · 2013-08-05T12:09:54.561Z · score: 4 (4 votes) · LW · GW

Agree with sibling qualifications, though note that I find this extremely useful as a Finding Lost Stuff heuristic, and by using it as a motto, have significantly decreased my instantiation of the literal streetlight effect.

comment by Document · 2013-08-03T17:42:00.698Z · score: 2 (2 votes) · LW · GW

Duplicate (although correctly attributed this time).

comment by Estarlio · 2013-08-03T18:39:09.908Z · score: 0 (0 votes) · LW · GW

I remember a response to this which goes something like - when you have eliminated the impossible, what remains may be more improbable than having made a mistake in one of your earlier impossibility proofs.

comment by Halfwitz · 2013-09-17T15:36:50.529Z · score: 0 (0 votes) · LW · GW

The madman is not the man who has lost his reason. The madman is the man who has lost everything except his reason. -- G.K. Chesterton

Though the work of an apologist, I thought this was a good litany against turning oneself into a paper-clip maximiser.

comment by Darklight · 2013-09-02T19:30:11.869Z · score: 0 (0 votes) · LW · GW

The most important human endeavor is the striving for morality in our actions. Our inner balance and even our existence depend on it. Only morality in our actions can give beauty and dignity to life.

-- Albert Einstein

comment by CellBioGuy · 2013-08-20T04:19:30.831Z · score: 0 (2 votes) · LW · GW

"In theory, there is no difference between theory and practice. In practice, there is."

comment by Vaniver · 2013-08-20T06:25:26.900Z · score: 4 (4 votes) · LW · GW

Dupe.

comment by RichardKennaway · 2013-08-20T19:13:19.337Z · score: -1 (3 votes) · LW · GW

Man likes complexity. He does not want to take only one step; it is more interesting to look forward to millions of steps. The one who is seeking the truth gets into a maze, and that maze interests him. He wants to go through it a thousand times more. It is just like children. Their whole interest is in running about; they do not want to see the door and go in until they are very tired. So it is with grown-up people. They all say that they are seeking truth, but they like the maze. That is why the mystics made the greatest truths a mystery, to be given only to the few who were ready for them, letting the others play because it was the time for them to play.

Hazrat Inayat Khan

Man loves complexity so much! He makes a thing big and says, 'This is valuable'. If it is simple he says, 'It has no value'. That is why the ancient people, knowing human nature, told a person when he said he wanted spiritual attainment, 'Very well; for ten years go around the temple, walk around it a hundred times in the morning and in the evening. Go to the Ganges, take pitchers full of water during twenty or fifty years, then you will get inspiration'. That is what must be done with people who will not be satisfied with a simple explanation of the truth, who want complexity.

ibid.

comment by metastable · 2013-08-20T20:34:07.492Z · score: 5 (7 votes) · LW · GW

But Naaman was wroth, and went away...And his servants came near, and spake unto him, and said, My father, if the prophet had bid thee do some great thing, wouldest thou not have done it? how much rather then, when he saith to thee, Wash, and be clean?

2 Kings 5: 11-13

comment by RichardKennaway · 2013-08-20T22:27:50.966Z · score: 0 (2 votes) · LW · GW

Will the Lord be pleased with thousands of rams, or with ten thousands of rivers of oil? shall I give my firstborn for my transgression, the fruit of my body for the sin of my soul?

He hath shewed thee, O man, what is good; and what doth the Lord require of thee, but to do justly, and to love mercy, and to walk humbly with thy God?

Micah 6: 7-8

comment by gwern · 2013-09-01T23:03:04.210Z · score: 2 (2 votes) · LW · GW

"Not to commit evils,
But to practice all good,
And to keep the heart pure -
This is the teaching of the Buddhas."

--multiple sutras

comment by simplicio · 2013-08-09T18:16:58.164Z · score: -1 (15 votes) · LW · GW

David Chapman thinks that using LW-style Bayesianism as a theory of epistemology (as opposed to just probability) lumps together too many types of uncertainty; to wit:

Here is an off-the-top-of-my-head list of types:

  • inherent effective randomness, due to dynamical chaos

  • physical inaccessibility of phenomena

  • time-varying phenomena (so samples are drawn from different distributions)

  • sensing/measurement error

  • model/abstraction error

  • one’s own cognitive/computational limitations

I think he is correct, and LWers are overselling Bayesianism as a solution to too many problems (at the very least, without having shown it to be).

comment by itaibn0 · 2013-08-09T18:29:25.101Z · score: 13 (13 votes) · LW · GW

I believe you are posting this in the wrong thread.

comment by gwern · 2013-08-09T20:19:58.675Z · score: 10 (14 votes) · LW · GW

I do not see why any of Chapman's examples cannot be given appropriate distributions and modeled in a Bayesian analysis just like anything else:

Dynamical chaos? Very statistically modelable, in fact, you can't really deal with it at all without statistics, in areas like weather forecasting.

Inaccessibility? Very modelable; just a case of missing data & imputation. (I'm told that handling issues like censoring, truncation, rounding, or intervaling are considered one of the strengths of fully Bayesian methods and a good reason for using stuff like JAGS; in contrast, whenever I've tried to deal with one of those issues using regular maximum-likelihood approaches it has been... painful.)

Time-varying? Well, there's only a huge section of statistics devoted to the topic of time-series and forecasts...

Sensing/measurement error? Trivial, in fact, one of the best cases for statistical adjustment (see psychometrics) and arguably dealing with measurement error is the origin of modern statistics (the first instances of least-squared coming from Gauss and other astronomers dealing with errors in astronomical measurement, and of course Laplace applied Bayesian methods to astronomy as well).

Model/abstraction error? See everything under the heading of 'model checking' and things like model-averaging; local favorite Bayesian statistician Andrew Gelman is very active in this area, no doubt he would be quite surprised to learn that he is misapplying Bayesian methods in that area.

One’s own cognitive/computational limitations? Not just beautifully handled by Bayesian methods + decision theory, but the former is actually offering insight into the former, for example "Burn-in, bias, and the rationality of anchoring".
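As a concrete sketch of the measurement-error case: with normally distributed noise of known standard deviation, the Bayesian update for the underlying true value has a closed conjugate form. A minimal illustration in Python (all numbers and names here are invented for the example, not drawn from any of the cited work):

```python
# Conjugate normal-normal update: infer a true value from noisy measurements.
# Illustrative sketch only -- the prior, noise level, and data are made up.

def posterior_normal(mu0, tau0, measurements, sigma):
    """Posterior over the true value given a Normal(mu0, tau0^2) prior
    and measurements corrupted by Normal(0, sigma^2) noise."""
    n = len(measurements)
    prior_prec = 1.0 / tau0**2           # precision of the prior
    like_prec = n / sigma**2             # total precision of the data
    post_prec = prior_prec + like_prec
    post_mean = (prior_prec * mu0 + sum(measurements) / sigma**2) / post_prec
    return post_mean, post_prec**-0.5    # posterior mean and std. dev.

# Three noisy readings of a quantity believed a priori to be near 10:
mean, sd = posterior_normal(mu0=10.0, tau0=5.0,
                            measurements=[12.1, 11.8, 12.3], sigma=1.0)
```

The posterior mean lands between the prior mean and the sample mean, weighted by their precisions, and the posterior standard deviation shrinks as measurements accumulate; this is the "statistical adjustment" for measurement error in its simplest conjugate form.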

comment by RichardKennaway · 2013-08-10T12:51:10.928Z · score: 6 (6 votes) · LW · GW

Agreed about chaos, missing data, time series, and noise, but I think the next is off the mark:

Model/abstraction error? See everything under the heading of 'model checking' and things like model-averaging; local favorite Bayesian statistician Andrew Gelman is very active in this area, no doubt he would be quite surprised to learn that he is misapplying Bayesian methods in that area.

He might be surprised to be described as applying Bayesian methods at all in that area. Model checking, in his view, is an essential part of "Bayesian data analysis", but it is not itself carried out by Bayesian methods. The strictly Bayesian part -- that is, the application of Bayes' theorem -- ends with the computation of the posterior distribution of the model parameters given the priors and the data. Model-checking must (he says) be undertaken by other means because the truth may not be in the support of the prior, a situation in which the strict Bayesian is lost. From "Philosophy and the practice of Bayesian statistics", by Gelman and Shalizi (my emphasis):

In contrast, Bayesian statistics or “inverse probability”—starting with a prior distribution, getting data, and moving to the posterior distribution—is associated with an inductive approach of learning about the general from particulars. Rather than testing and attempted falsification, learning proceeds more smoothly: an accretion of evidence is summarized by a posterior distribution, and scientific process is associated with the rise and fall in the posterior probabilities of various models .... We think most of this received view of Bayesian inference is wrong.

...

To reiterate, it is hard to claim that the prior distributions used in applied work represent statisticians’ states of knowledge and belief before examining their data, if only because most statisticians do not believe their models are true, so their prior degree of belief in all of Θ is not 1 but 0.

If anyone's itching to say "what about universal priors?", Gelman and Shalizi say that in practice there is no such thing. The idealised picture of Bayesian practice, in which the prior density is non-zero everywhere, and successive models come into favour or pass out of favour by nothing more than updating from data by Bayes theorem, is, they say, unworkable.

The main point where we disagree with many Bayesians is that we do not see Bayesian methods as generally useful for giving the posterior probability that a model is true, or the probability for preferring model A over model B, or whatever.

They liken the process to Kuhnian paradigm-shifting:

In some way, Kuhn’s distinction between normal and revolutionary science is analogous to the distinction between learning within a Bayesian model, and checking the model as preparation to discard or expand it.

but find Popperian hypothetico-deductivism a closer fit:

In our hypothetico-deductive view of data analysis, we build a statistical model out of available parts and drive it as far as it can take us, and then a little farther. When the model breaks down, we dissect it and figure out what went wrong. For Bayesian models, the most useful way of figuring out how the model breaks down is through posterior predictive checks, creating simulations of the data and comparing them to the actual data. The comparison can often be done visually; see Gelman et al. (2003, ch. 6) for a range of examples. Once we have an idea about where the problem lies, we can tinker with the model, or perhaps try a radically new design. Either way, we are using deductive reasoning as a tool to get the most out of a model, and we test the model—it is falsifiable, and when it is consequentially falsified, we alter or abandon it.

For Gelman and Shalizi, model checking is an essential part of Bayesian practice, not because it is a Bayesian process but because it is a necessarily non-Bayesian supplement to the strictly Bayesian part: Bayesian data analysis cannot proceed by Bayes alone. Bayes proposes; model-checking disposes.

I'm not a statistician and do not wish to take a view on this. But I believe I have accurately stated their view. The paper contains some references to other statisticians who, they says are more in favour of universal Bayesianism, but I have not read them.

comment by gwern · 2015-03-03T23:20:52.920Z · score: 3 (3 votes) · LW · GW

Model-checking must (he says) be undertaken by other means because the truth may not be in the support of the prior, a situation in which the strict Bayesian is lost.

Loath as I am to disagree with Gelman & Shalizi, I'm not convinced that the sort of model-checking they advocate such as posterior p-values are fundamentally and in principle non-Bayesian, rather than practical problems. I mostly agree with "Posterior predictive checks can and should be Bayesian: Comment on Gelman and Shalizi,'Philosophy and the practice of Bayesian statistics'", Kruschke 2013 - I don't see why that sort of procedure cannot be subsumed with more flexible and general models in an ensemble approach, and poor fits of particular parametric models found automatically and posterior shifted to more complex but better fitting models. If we fit one model and find that it is a bad model, then the root problem was that we were only looking at one model when we knew that there were many other models but out of laziness or limited computations we discarded them all. You might say that when we do an informal posterior predictive check, what we are doing is a Bayesian model comparison of one or two explicit models with the models generated by a large multi-layer network of sigmoids (specifically <80 billion of them)... If you're running into problems because your model-space is too narrow - expand it! Models should be able to grow (this is a common feature of Bayesian nonparametrics).
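The posterior predictive check under discussion can be sketched mechanically: draw parameters from the posterior, simulate replicated datasets, and compare a test statistic of the replications against the observed value. A toy version for a normal model with known noise (the data and settings are invented for illustration):

```python
# Toy posterior predictive check for a normal model with known noise sd.
# All data and settings are invented for illustration.
import random

random.seed(0)
observed = [2.9, 3.1, 2.8, 9.5, 3.0]    # one suspicious outlier
sigma = 1.0                              # assumed known observation sd
n = len(observed)

# Conjugate posterior for the mean under a flat prior: Normal(xbar, sigma^2/n).
xbar = sum(observed) / n

def statistic(xs):
    return max(xs)                       # test statistic: the sample maximum

# Simulate replicated datasets from the posterior predictive distribution.
exceed = 0
reps = 2000
for _ in range(reps):
    mu = random.gauss(xbar, sigma / n**0.5)        # draw mean from posterior
    rep = [random.gauss(mu, sigma) for _ in range(n)]
    if statistic(rep) >= statistic(observed):
        exceed += 1

ppp = exceed / reps    # posterior predictive p-value for the maximum
```

A tiny posterior predictive p-value (the replications almost never reproduce a maximum as extreme as 9.5) flags the normal model as a poor fit, which is the "model breaks down" signal Gelman and Shalizi describe; whether that step itself counts as Bayesian is exactly what Kruschke disputes.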

This may be hard in practice, but then it's just another example of how we must compromise our ideals because of our limits, not a fundamental limitation on a theory or paradigm.

comment by IlyaShpitser · 2013-08-09T20:24:21.368Z · score: 6 (6 votes) · LW · GW

gwern, I am curious. You do a lot of practical data analysis. How often do you use non-Bayesian methods?

comment by gwern · 2013-08-09T20:41:32.398Z · score: 9 (9 votes) · LW · GW

Pretty frequently (if you'll pardon the pun). Almost all papers are written using non-Bayesian methods, people expect results in non-Bayesian terms, etc.

Besides that: I decided years ago (~2009) that as appealing as Bayesian approaches were to me, I should study 'normal' statistics & data analysis first - so I understood them and why I didn't want to use them before I began studying Bayesian statistics. I didn't want to wind up in a situation where I was some sort of Bayesian fanatic who could tell you how to do a Bayesian analysis but couldn't explain what was wrong with the regular approach or why Bayesian approaches were better!

(I think I'm going to be switching gears relatively soon, though: I'm working with a track coach on modeling triple-jumping performance, and the smallness of the data suggests it'll be a natural fit for a multilevel model using informative priors, which I'll want to read Gelman's textbook on, and that should be a good jumping off point.)

comment by linkhyrule5 · 2013-08-10T01:49:53.366Z · score: 1 (1 votes) · LW · GW

Random question - if you were to recommend a textbook or two, from frequentist and Bayesian analysis both, to a random interested undergraduate...

(As you might guess, not a hypothetical, unfortunately.)

comment by RichardKennaway · 2013-08-12T08:28:35.981Z · score: 3 (3 votes) · LW · GW

Expanding further on my previous reply, I believe that the claimed (by Gelman and Shalizi) non-Bayesian nature of model-checking is wrong: the truth is that everything that goes under the name of model-checking works, to the extent that it does, so far as it approximates the underlying Bayesian structure. It is not called Bayesian, because it is not an actual, numerical use of Bayes theorem, and the reason we are not doing that is because we do not know how: in practice we cannot work with universal priors.

So Bayesian ideas are applicable to the problem of model/abstraction error, but we cannot apply them numerically. In fact, that is pretty much what model/abstraction error means -- if we did have numbers, they would be part of the model. Model checking is what we do when we cannot calculate any further with numerical probabilities.

Cf. my analogy here with understanding thermodynamics.

I believe that would be Eliezer's response to Gelman and Shalizi. I would not expect them to be convinced though. Shalizi would probably dismiss the idea as moonshine and absurdity.

ETA: Eliezer on the subject:

So if a mind is arriving at true beliefs, and we assume that the second law of thermodynamics has not been violated, that mind must be doing something at least vaguely Bayesian - at least one process with a sort-of Bayesian structure somewhere - or it couldn't possibly work.

ETA: Why is the grandparent at -4? David Chapman and simplicio may be wrong about this, but neither is saying anything stupid, nor is the topic so thoroughly thrashed out in the past as not to merit further words.

comment by ESRogs · 2013-08-11T18:01:00.800Z · score: 0 (0 votes) · LW · GW

the former is actually offering insight into the former

Judging by the abstract I assume you meant to write, the latter is offering insight into the former?

comment by Eugine_Nier · 2013-08-10T05:34:26.612Z · score: -2 (4 votes) · LW · GW

One’s own cognitive/computational limitations? Not just beautifully handled by Bayesian methods + decision theory,

Unless there's been an enormous breakthrough in the past 2 years, I believe this is still a major unsolved problem. Also decision theory is about cooperating with other agents, not overcoming cognitive limitations.

comment by simplicio · 2013-08-09T21:40:49.432Z · score: -2 (6 votes) · LW · GW

Note that I was speaking of "Bayesianism" as practiced on LW, not of Bayesian statistics the academic field. I do not believe these are the same.

I believe Chapman is writing a more detailed critique of what he sees here; I will be sure to link you to it when it comes.

comment by gwern · 2013-08-09T21:56:37.293Z · score: 0 (8 votes) · LW · GW

Note that I was speaking of "Bayesianism" as practiced on LW, not of Bayesian statistics the academic field. I do not believe these are the same.

I think that's absurd if that's what he really means. Just because we are not daily posting new research papers employing model-averaging or non-parametric Bayesian statistics does not mean that we do not think those techniques are useful and incorporated in our epistemology or that we would consider the standard answers correct, and this argument can be applied to any area of knowledge that LWers might draw upon or consider correct. If we criticize p-values as a form of building knowledge, is that not a part of 'Bayesian epistemology' because we are drawing arguments from Jaynes or Ioannidis and did not invent them ab initio?

'Your physics can't deal with modeling subatomic interactions, and so sadly your entire epistemology is erroneous.' '??? There's a huge and extremely successful area of physics devoted to that, and I have no freaking idea what you are talking about. Are you really as ignorant and superficial as you sound like, in listing as a weakness something which is actually a major strength of the physics viewpoint?' 'Oh, but I meant physics as practiced on LessWrong! Clearly that other physics is simply not relevant. Come back when LW has built its own LHC and replicated all the standard results in the field, and then I'll admit that particle physics as practiced on LW is the same thing as particle physics the academic field, because otherwise I refuse to believe they can be the same.'

comment by [deleted] · 2013-08-09T23:58:13.342Z · score: 5 (7 votes) · LW · GW

I think you're not being charitable again. Consider the difference between physics as practiced by quantum woo mystics, and physics as practiced by physicists or even engineers. I think that simplicio is referring to a similar (though less striking) tendency for the representative LWer to quasi-religiously misapply and oversell probability theory (which may or may not be the case, but should be argued with something other than uncharitable ridicule).

comment by simplicio · 2013-08-09T22:02:23.959Z · score: -1 (1 votes) · LW · GW

I think you may be extrapolating much too far from the quote I posted. Also, my statistics level is well below both yours and Chapman's so I am not a good interlocutor for you.

comment by gwern · 2013-08-09T22:10:00.596Z · score: 1 (5 votes) · LW · GW

I think you may be extrapolating much too far from the quote I posted.

I don't think I am. It's a very simple quote: "here is a list of n items Bayesian statistics and hence epistemology cannot handle; therefore, it cannot be right." And it's dead wrong because all n items are handled just fine.

comment by [deleted] · 2013-08-09T23:50:19.555Z · score: 3 (5 votes) · LW · GW

I think you are being uncharitable. The list was of different types of uncertainty that Bayesians treat as the same, with a side of skepticism that they should be handled the same, not things you can't model with bayesian epistemology.

The question is not whether Bayes can handle those different types of uncertainty, it's whether they should be handled by a unified probability theory.

I think the position that we shouldn't (or don't yet) have a unified uncertainty model is wrong, but I don't think it's so stupid as to be worth getting heated about and being uncivil.

comment by [deleted] · 2013-08-10T14:50:21.900Z · score: 1 (3 votes) · LW · GW

I think the position that we shouldn't (or don't yet) have a unified uncertainty model is wrong

Did somebody solve the problem of logical uncertainty while I wasn't looking?

but I don't think it's so stupid as to be worth getting heated about and being uncivil.

I disagree that Gwern is being uncivil. I don't think Chapman has any ground to criticize LW-style epistemology when he's made it abundantly clear he has no idea what it is supposed to be. (Indeed, that's his principal criticism: the people he's talked to about it tell him different things.)

It'd be like if Berkeley asked a bunch of Weierstrass' first students about their "supposed" fix for infinitesimals. Because the students hadn't completely grasped it yet, they gave Berkeley a rope, a rubber hose, and a burlap sack instead of giving him the elephant. Then Berkeley goes and writes a sequel to the Analyst disparaging this "new Calculus" for being incoherent.

In that world, I think Berkeley's the one being uncivil.

comment by Eugine_Nier · 2013-08-03T07:23:56.556Z · score: -1 (27 votes) · LW · GW

Everybody hates the bust, but the real harm is done in the boom, with capital being diverted to things that don’t make sense, because the boom’s distortions make them seem to make sense.

Glenn Reynolds

comment by b1shop · 2013-08-04T01:16:42.895Z · score: 19 (25 votes) · LW · GW

I'm downvoting this quote. Read at a basic level, it supports a particular economic theory rather than a larger point of rationality.

For the record, the Austrian Business Cycle Theory is not generally accepted by mainstream economists. This isn't the place to discuss why, and it isn't the place to give ABCT the illusion of a "rational" stamp of approval.

comment by Vaniver · 2013-08-05T16:51:40.699Z · score: 5 (5 votes) · LW · GW

Read at a basic level, it supports a particular economic theory rather than a larger point of rationality.

I read it as an extension of Gendlin; the damage comes from living in the untrue world, not from the realization that the untruth is untrue, even if the second is much more visible.

comment by Grant · 2013-08-06T06:54:59.094Z · score: -1 (3 votes) · LW · GW

Ditto, and downvoting b1shop's response since the quote did not mention any particular economic theory. Busts caused by widespread bad investments aren't necessarily the problem, the widespread bad investments are the problem. Blaming the bust in these cases may be shooting the messenger.

That's not to say all busts are largely caused by widespread bad investments, or anything about why these bad investments happen. It is, however, very clear in hindsight that many boom-phase investments are crazy.

comment by Rob Bensinger (RobbBB) · 2013-08-06T07:21:27.715Z · score: 5 (7 votes) · LW · GW

I'm not downvoting Eugine, because Vaniver's interpretation is interesting. But I am upvoting b1shop, because the quotation does sound like Austrianism on a bumper sticker. So it applause-lights a false fringe theory associated with an anti-empirical intellectual community, in addition to plausibly generating specific false beliefs about economics and/or ethics if taken on its face. (Busts, or more generally human misery, are the reason 'distortions' and 'not making sense' are a bad thing in the first place; economies aren't primarily maps.) It's interesting and revealing in subtle ways, but misleading in banal and obvious ways.

comment by wedrifid · 2013-08-06T07:56:28.635Z · score: 0 (2 votes) · LW · GW

I'm not downvoting Eugine, because Vaniver's interpretation is interesting. But I am upvoting b1shop, because the quotation does sound like Austrianism on a bumper sticker.

I'm not downvoting Eugine, because Vaniver's interpretation matches mine. I am upvoting Grant and downvoting b1shop because he claims that there is no rationality message despite the rather obvious cognitive biases that it relates to.

I will refrain from actively supporting the quote because it uses "the real harm", which makes it a strong statement about relative harms of various activities when that constitutes at best a controversial claim and one that is open to rather a lot of interpretation. (I would endorse an "also" claim or even a "and the most interesting" claim.)

comment by Vaniver · 2013-08-06T17:12:32.759Z · score: 3 (3 votes) · LW · GW

Ditto, and downvoting b1shop's response since the quote did not mention any particular economic theory.

I wouldn't recommend downvoting b1shop's response (I didn't), because they are correct that the basic reading of the quote relies on particular economic assumptions. There are economic theories that put the fault in the bust- if things were intelligently managed, you could keep the bubble inflated at just the right amount to prevent it from popping or inflating further, and never have to deal with the bust.

For example, look at this graph that Krugman posted in 2010. The "projected real GDP" is from Mark Thoma, another economist, but where you choose to draw that line says a lot about your assumptions. The Austrian would basically draw it from trough to trough, claiming that all the reported GDP above that line was activity that could be recorded but didn't actually generate lasting wealth. In that view, the bubbles are clearly harmful; in Krugman's view, the busts are harmful. It's the difference between a trillion dollars that we can never get back, and a trillion dollars that was never there.

comment by Lumifer · 2013-08-06T18:12:04.209Z · score: 2 (2 votes) · LW · GW

if things were intelligently managed, you could keep the bubble inflated at just the right amount to prevent it from popping or inflating further, and never have to deal with the bust.

Two things. First, a bubble that never deflates or pops is not a bubble, it's sustainable growth.

Second, there is a LOT of empirical evidence that "intelligent management" of economy -- which has been practiced since the first half of the 20th century to various degrees in many countries -- vastly underperforms its promises.

comment by Vaniver · 2013-08-06T18:23:58.289Z · score: 1 (1 votes) · LW · GW

Agreed on both points. I'm not endorsing that theory, or related steelmanned versions.

comment by Grant · 2013-08-06T17:35:33.114Z · score: 1 (1 votes) · LW · GW

It does assume that asset bubbles are made up of bad investments which are costly to undo. While this insight may have been originally Austrian, I didn't think it was at all contentious. The dot-com bubble is a clearer example, as the housing bubble was both an asset bubble and banking failure (and many of the dot-com investments were just off-the-wall crazy).

As Vernon Smith showed, asset bubbles happen even with derivatives whose value is objective (and without central banks). It's hard for me to see the bust as the problem in those cases.

Would a Keynesian say that any economic downturn can be averted in the face of any and all bad investments?

comment by Vaniver · 2013-08-06T17:50:17.505Z · score: 1 (1 votes) · LW · GW

Would a Keynesian say that any economic downturn can be averted in the face of any and all bad investments?

Doubtful. (I should make clear that I'm not a professional economist, and I couldn't talk math with a Keynesian without doing serious reading first.) To go off the same graph, it does identify the tech bubble in ~2000 as being above the projected line.

My impression of the difference is that in the terms of a crude analogy, the Austrian prefers to rip the band-aid off, and the Keynesian prefers to slowly peel it back.

comment by Grant · 2013-08-05T03:41:42.170Z · score: 2 (4 votes) · LW · GW

All true, but there are many booms which seem to produce crazy investments; the dot-com boom is the most obvious recent example. You don't need to accept ABCT to accept this, and I'd guess most people who do notice this don't accept ABCT.

comment by linkhyrule5 · 2013-08-05T03:30:57.159Z · score: 1 (1 votes) · LW · GW

Would you mind explaining? You could PM me or toss it in the Open thread if you don't think it belongs here.

comment by b1shop · 2013-10-31T18:53:26.064Z · score: 1 (1 votes) · LW · GW

Sorry, haven't logged in in a while.

I'm only an econ undergrad, so I'm not a drop-dead expert in economics. However, I work as a business valuator by day, so I like to think I know a thing or two about evaluating the profitability of projects.

There's a lot of Rothbardian baggage about money I associate with the theory. That may or may not be a separate conversation. Don't even bother trying to argue against my points here if you believe fractional reserve banking is bad, because we don't agree on enough to have a productive conversation about this issue. We should instead focus on money and FRB first.

The ABCT story is about excessively low interest rates causing firms to be too farsighted in their planning. If rates increase, then projects that were profitable are no longer profitable, and the economy contracts.

Here's a few reasons why I don't like this story:

  • It requires a massive level of incompetence from entrepreneurs. Arguably the most popular business valuation resource for estimating costs of capital, Duff and Phelps, has a report on adjusting risk free rates for the expected future path. If businesses are unstable because they are not robust to 5% swings in interest rates, then they will likely be unstable due to other shocks as well. ABCT requires them to fall for the same trap over and over again.
  • It's drastically asymmetric. ABCT only focuses on distortions caused by too much money being printed. What about the distortions caused by too little money being printed? Modern cases show this is far more damaging. The transmission mechanism isn't based on interest rates, but it still matters a lot.
  • The case for expansionary policy causing bubbles is not as strong as many think. NGDP growth during the worst of the housing bubble was only 5%. That's below average growth over the past few decades, which were a remarkably stable time. Yes, interest rates were low, but that had more to do with an influx of foreign savers than Fed policy. (Aside: Interest rates are a bad indicator of monetary policy. High interest rates during German hyperinflation are a great example.)
  • As far as the late 90's go, yes, lots of bad investments were made. I think this was caused not by bad monetary economics but by irrational investor beliefs. I imagine people would still have invested in Pets.com regardless of Fed action or inaction. Monetary policy might explain excessive valuations everywhere, but it doesn't explain excessive, localized valuations. Additionally, interest rates are mostly irrelevant to the tech sector where financing is usually based on equity rather than debt.
  • If the ABCT story is true, we'd expect to see a bust in long-term schemes during recessions and a boom in short-term schemes. Instead, we see a bust in both.
  • ABCT seems married to the idea that expansionary monetary policy is "unsustainable" and interest rates must return to "natural" levels. This is nonsense. The Fed has been performing QE for years, and it's been tremendously helpful by most accounts. Fed "inaction" is still action.
  • There's not much empirical support for the theory.

Edit: Broken link.

comment by mwengler · 2013-08-05T16:41:44.122Z · score: 3 (3 votes) · LW · GW

The boom produces a lot of stuff which is theoretically not the optimum stuff to produce using the resources used in the boom. However, to the extent the boom brings resources out of the woodwork that may not have been used to produce anything at all in the absence of the boom, it may not actually be a net loss compared to a realistic counterfactual.

The bust, accompanied by significant unemployment, is almost certainly producing less than any of the counterfactuals in which more people are employed. Of course it IS possible to employ some people digging holes and others to fill them in, but I think this is a strawman; generally, artificially increased employment produces something of value.

The Austrians may have it wrong because the obviousness of the bust being the unproductive distortion is lost to them in the intellectual excitement of realizing you can't have a bust without a boom, and so they mistakenly think it is the boom which is less productive.

Sometimes the obvious answer IS right. I think the fact that particularly intelligent people acting in groups miss this more often than is optimum should be one of the cognitive biases on our list of biases we study and stay aware of.

Unemployed people produce less than employed people. The odd construction of a corner case does not make this generally true statement generally false.

comment by fubarobfusco · 2013-08-06T16:24:42.326Z · score: 1 (3 votes) · LW · GW

Hindsight bias. It's only after the bust that you find out which boom things made sense after all and which didn't.

comment by Martin-2 · 2013-08-02T20:58:57.361Z · score: -1 (5 votes) · LW · GW

Elayne blinked in shock. “You would have actually done it? Just… left us alone? To fight?”

"Some argued for it," Haman said.

“I myself took that position,” the woman said. “I made the argument, though I did not truly believe it was right.”

“What?” Loial asked [...] “But why did you-“

“An argument must have opposition if it is to prove itself, my son,” she said. “One who argues truly learns the depth of his commitment through adversity. Did you not learn that trees grow roots most strongly when wind blows through them?”

Covril, The Wheel of Time

comment by Document · 2013-08-03T02:26:06.514Z · score: 5 (5 votes) · LW · GW

Is that true (for trees or people)?

Edit: For one example, this person currently linked in the sidebar isn't sure.

comment by Martin-2 · 2013-08-03T08:25:13.696Z · score: 1 (1 votes) · LW · GW

If this quote were about people improving through adversity I wouldn't have posted it (I also read that article). But I think it's true for arguments. The last sentence does a better job of fitting the character than illuminating the point so I could have left it out.

comment by Document · 2013-08-03T08:33:36.686Z · score: 0 (0 votes) · LW · GW

Do arguments themselves "improve", rather than simply being right or wrong?

comment by Martin-2 · 2013-08-03T09:14:00.293Z · score: 2 (2 votes) · LW · GW

Maybe, since arguments have component parts that can be individually right or wrong; or maybe not, since chains of reasoning rely on every single link; or maybe, since my argument improves (along with my beliefs) as I toss out and replace the old one.

Come to think of it, if "trees grow roots most strongly when wind blows through them" because the trees with weak roots can't survive in those conditions then this would make a very bad metaphor for people.

comment by Nornagest · 2013-08-03T22:42:40.357Z · score: 8 (8 votes) · LW · GW

Come to think of it, if "trees grow roots most strongly when wind blows through them" because the trees with weak roots can't survive in those conditions then this would make a very bad metaphor for people.

No, it's probably accurate as stated. I don't know about trees as such, but if you try to start vegetable seedlings indoors and then transfer them outside, they'll often die in the first major wind; the solution is to get the air around them moving while they're still indoors (as with a fan), which causes them to devote resources to growing stronger root systems and stems.

comment by [deleted] · 2013-08-13T14:58:15.226Z · score: -3 (11 votes) · LW · GW

You should never bet against anything in science at odds of more than about 10-12 to 1 against.

Ernest Rutherford

comment by DanArmak · 2013-08-16T19:41:25.507Z · score: 2 (2 votes) · LW · GW

That sounds like a ridiculous thing to say and I can't really steelman it.

Do you have a reliable source for this quote? The Wikipedia talk page for the Rutherford article contains this exchange:

Now that we have dealt with the statistics quote, let's move on to the next quote, which is purportedly: You should never bet against anything in science at odds of more than about 1012 to 1. The number 1012 seems oddly precise, although the cited collection of quotes supports it, and yesterday editor 134.225.100.110 changed it to 10-12, which was reverted a few hours later by Gadfium. I suggest that what he really said was not 1012 (one thousand and twelve), and not 10-12 (ten to twelve), but rather 10^12 (ten to the twelfth), which seems a much more likely thing for a physicist to say. A brief Google search turned up evidence for all 3 hypotheses (!), all in what appear to be not very reliable quote collections. Can anyone find a more reliable source, such as a book about Rutherford, to check what he actually did say? Dirac66 (talk) 19:35, 12 October 2012 (UTC)

I reverted because the source given didn't support the change. Now that you've raised the matter, I see that all three variants do appear in Google, and I agree finding an authoritative version is desirable

The quote itself, while still on the page, references this site which is an unsourced quote collection.

comment by [deleted] · 2013-08-16T23:06:02.942Z · score: 0 (0 votes) · LW · GW

OK, maybe the quote isn't legit, but after all quite a lot of our favorite quotes are misquotations—that's not the point. It's an interesting thought even if no Nobel laureate ever said it. Is it ridiculous? It makes a lot of sense to me.

comment by gwern · 2013-08-17T00:06:26.989Z · score: 2 (2 votes) · LW · GW

It's ridiculous if taken literally as a universal prior or bound, because it's very easy to contrive situations in which refusing to give probabilities below 1/10^12 lets you be dutch-booked or otherwise screw up - for example, log2(10^12) is 40, so if I flip a fair coin 50 times, say, and ask you to bet on every possible sequence.... (Or simply consider how many operations your CPU does every minute, and consider being asked "what are the odds your CPU will screw up an operation this minute?" You would be in the strange situation of believing that your computer is doomed even as it continues to run fine.)
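The coin-flip arithmetic in that argument can be checked directly. A minimal sketch (my addition, not part of gwern's comment) of why a universal probability floor of 1/10^12 becomes incoherent for long sequences of flips:

```python
# Sketch: a universal floor of 1/10^12 on probabilities fails for
# sequences of fair coin flips. (Illustration only; not from the comment.)
from math import log2

floor = 10 ** -12

# log2(10^12) is about 39.86, so roughly 40 flips already suffice
# to produce outcomes below the floor.
print(log2(10 ** 12))

# Any specific sequence of 50 fair flips has probability 2^-50,
# far below the floor:
p_sequence = 2.0 ** -50
assert p_sequence < floor

# Yet the 2^50 sequence probabilities must sum to 1. Rounding each
# up to the floor inflates the total probability mass to over 1000:
inflated_total = (2 ** 50) * floor
print(inflated_total)
```

Anyone committed to never assigning odds longer than 10^12-to-1 would have to assign each of the 2^50 sequences at least the floor probability, and those assignments cannot sum to 1, which is the opening a dutch book exploits.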

But it's much more reasonable if you consider it as applying only to high-level theories or conclusions of long arguments which have not been highly mechanized; I discuss this in http://www.gwern.net/The%20Existential%20Risk%20of%20Mathematical%20Error and see particularly the link to "Probing the Improbable".

comment by [deleted] · 2013-08-18T03:33:30.203Z · score: 1 (1 votes) · LW · GW

But it's much more reasonable if you consider it as applying only to high-level theories

Yes, that's how I read it. Obviously it doesn't literally mean you can't be very sure about anything; the message is that science is wrong very often and you shouldn't bet too much on the latest theory. So even if it's a complete misquote, it's a nice thought.

comment by DanArmak · 2013-08-17T12:21:51.247Z · score: 1 (1 votes) · LW · GW

In addition to gwern's reply, if you read it as 10-to-1 to 12-to-1 odds, or even 1012-to-1 odds, and not 10^12-to-1 odds, then obviously there are lots of physical theories that deal with events that are less likely than 1/1012. And lots of experiments whose outcome people are more than 1012-to-1 sure about, and they are right to be so sure.

You quoted the most ridiculous figure, that of 10-to-1 or 12-to-1. I'm quite legitimately more than 12-to-1 sure about some things in physics, and I'm not even a physicist! The Wikipedia talk quote makes the point that all three possible quotes are to be found on the internet.

comment by Eugine_Nier · 2013-08-03T07:34:33.887Z · score: -5 (9 votes) · LW · GW

[H]onorable ideas and indelible truths have a life of their own. Even when a culture as a whole remains oblivious and unguarded, the facts tend to rise to the surface one way or another. (..) [T]he truth cannot be destroyed, but it can be forgotten, at least for a time.

Brandon Smith

comment by taelor · 2013-08-20T07:09:03.039Z · score: -6 (10 votes) · LW · GW

An old riddle asked by the mystics of many religions—the Zen Buddhists, the Sufis of Islam, or the rabbis of the Talmud—asks: “Is there a sound in the forest if a tree crashes down and no one is around to hear it?” We now know that the right answer to this is “no.” There are sound waves. But there is no sound unless someone perceives it. Sound is created by perception. Sound is communication. This may seem trite; after all, the mystics of old already knew this, for they, too, always answered that there is no sound unless someone can hear it. Yet the implications of this rather trite statement are great indeed. It means that it is the recipient who communicates. The so-called communicator, that is, the person who emits the communication, does not communicate. He utters. Unless there is someone who hears, there is no communication. There is only noise. The communicator speaks or writes or sings—but he does not communicate. Indeed he cannot communicate. He can only make it possible, or impossible, for a recipient—or rather percipient—to perceive.

-- Peter F. Drucker, A Functioning Society

comment by Mitchell_Porter · 2013-08-20T08:23:50.282Z · score: 5 (5 votes) · LW · GW

No, not all sound is communication. No, you aren't communicating just by listening and understanding. To communicate is to send a message and have it received.

What's the context of this paragraph?

comment by JQuinton · 2013-08-28T20:10:57.456Z · score: 0 (0 votes) · LW · GW

This seems to contradict one of the main Sequences here, namely A Human's Guide To Words. Specifically Taboo Your Words (which even uses the tree in the forest example). This is probably why it was downvoted.

comment by Eugine_Nier · 2013-08-02T06:20:10.266Z · score: -6 (18 votes) · LW · GW

If you covet what rich people have, you should emulate their behaviors. If you want money, do with money what rich people do with money. If you want smart kids like the rich folk, you should raise your kids like the rich folk raise their kids.

Sean

comment by wedrifid · 2013-08-02T17:36:12.343Z · score: 14 (20 votes) · LW · GW

If you want money, do with money what rich people do with money.

Ok, I purchased my mansion and a sportscar. What's step 2?

comment by Vaniver · 2013-08-02T20:30:26.593Z · score: 9 (11 votes) · LW · GW

Ok, I purchased my mansion and a sportscar. What's step 2?

You might be interested in borrowing a copy of The Millionaire Next Door from a library. It's a bit more accurate about rich people than television.

comment by jbay · 2013-08-02T14:22:56.746Z · score: 10 (10 votes) · LW · GW

Does this strike you as cargo cult language?

comment by Jiro · 2013-08-02T20:07:32.395Z · score: 14 (14 votes) · LW · GW

That's not cargo cult language, it's just ordinary cargo cult behavior.

comment by bentarm · 2013-08-02T11:39:22.115Z · score: 7 (7 votes) · LW · GW

If you want smart kids like the rich folk, you should raise your kids like the rich folk raise their kids.

Is there any reason to believe this is true? I would guess Judith Harris would say no, and she's spent a lot more time thinking about this than I have.

comment by Eugine_Nier · 2013-08-03T04:33:03.102Z · score: 1 (5 votes) · LW · GW

Well, it's clear that propensity to acquire wealth isn't purely genetic.

comment by [deleted] · 2013-08-02T17:51:20.462Z · score: -3 (5 votes) · LW · GW

First off, it seems like wealth is a zero-sum game. If I make money then someone somewhere else is losing money. They may be getting something of equivalent value in exchange, or they might just be getting screwed--it really doesn't matter.

You could probably argue that most people want to be richer, right? I mean, if you asked a random segment of the population, you'd probably get a conservative 80% of them to say "yes, I'd like to have more money."

If all these people followed the rule in this quote, then this would be the problem: what's the outcome when two or more people who have identical goals (and methods of obtaining those goals) play each other in a zero-sum game?

I'm not sure that it would end well for the majority of people.

comment by Lumifer · 2013-08-02T19:07:23.923Z · score: 7 (9 votes) · LW · GW

First off, it seems like wealth is a zero-sum game.

Wealth is very clearly NOT a zero-sum game.

If I make money then someone somewhere else is losing money

Wealth isn't about money, it's about value. Ask yourself: if you create value, is someone somewhere else destroying value?

Money (in this context) is just a unit of account. Any central bank can produce an unlimited amount of these.

comment by RowanE · 2013-08-02T18:45:12.267Z · score: 3 (5 votes) · LW · GW

Wealth isn't a zero-sum game unless there is no economic growth; people getting richer in non-zero-sum ways is ( as i understand it) what economic growth is.

comment by James_Miller · 2013-08-02T03:06:49.263Z · score: -11 (19 votes) · LW · GW

When you kill yourself, you forfeit the right to control your own story.

Justin Peters in Slate.com in an article about Aaron Swartz.

comment by wedrifid · 2013-08-02T05:12:27.217Z · score: 11 (13 votes) · LW · GW

When you kill yourself, you forfeit the right to control your own story.

People don't have that right, in general. Except in the technical 'might makes right' sense employed by some tyrants.

The implied claim seems to be that it is more morally acceptable to disrespect individuals who kill themselves (beyond the specific criticism of the particular decision). I have nothing but contempt for that claim and so obviously don't consider it to belong in this thread.

comment by Said Achmiz (SaidAchmiz) · 2013-08-02T03:12:02.150Z · score: 8 (8 votes) · LW · GW

Uh... what does this mean, and how is it a rationality quote?

I mean, when you kill yourself, you become dead, and thus unable to do anything, including, but not limited to, "control your own story". But that's a trivial fact.

Is the quote meant to say something less trivial?

comment by James_Miller · 2013-08-02T03:52:41.514Z · score: -5 (11 votes) · LW · GW

Part of the cost of suicide is that you will have less control over how people remember you than if you had lived longer. The word "forfeit" implies that we should feel no obligation to respect the memory of people who kill themselves.

When someone dies we might feel a moral obligation to follow their wishes. Suicide, when the person was not in great pain, should nullify any such feeling.

comment by Said Achmiz (SaidAchmiz) · 2013-08-02T04:20:17.515Z · score: 11 (15 votes) · LW · GW

Well, in that case: that's dumb.

comment by wedrifid · 2013-08-02T05:15:40.443Z · score: 5 (5 votes) · LW · GW

The word "forfeit" implies that we should feel no obligation to respect the memory of people who kill themselves.

It says more than that. It implies that there is an obligation to respect the memory of people and that said obligation no longer applies if they kill themselves.

comment by Eugine_Nier · 2013-08-06T04:13:51.802Z · score: -12 (24 votes) · LW · GW

if you get something wrong, for any reason, learn to shut up and listen to those who got it right from the start. And if you break something, then you don't get an opinion on how it should be fixed.

Vox Day

comment by Bugmaster · 2013-08-06T04:40:22.776Z · score: 9 (9 votes) · LW · GW

And if you break something, then you don't get an opinion on how it should be fixed.

Why not ? If I broke it, there's a chance that I know exactly what I did. The next version of whatever it is I broke should eliminate that failure mode.

comment by Document · 2013-08-06T17:35:20.381Z · score: 7 (7 votes) · LW · GW

He's an expert on marriage; he's been married four times.

-Unknown

comment by wedrifid · 2013-08-06T05:26:07.447Z · score: 5 (9 votes) · LW · GW

if you get something wrong, for any reason, learn to shut up and listen to those who got it right from the start. And if you break something, then you don't get an opinion on how it should be fixed.

"No", and "false", respectively.

If something is mine or otherwise under my influence then my opinion on how it should be fixed shall determine my action. If, all things considered, I believe it will achieve my ends to have someone else fix it according to their abilities or expertise then I'll go ahead and do that. I'll also choose who to listen to according to my own judgement. Initial success at some practical activity represents some evidence about the likely usefulness of their word but it is far from definitive.

Incidentally, the blog post that represents the context of this quote seems to be ridiculous, gratuitous sexism.

comment by [deleted] · 2013-08-16T04:00:48.533Z · score: 4 (4 votes) · LW · GW

All too often people report on past and present applications [of computers], which is good, but not on the topic whose purpose is to sensitize you to future possibilities you might exploit. It is hard to get people to aggressively think about how things in their own area might be done differently. I have sometimes wondered whether it might be better if I asked people to apply computers to other areas of application than their own narrow speciality; perhaps they would be less inhibited there!

Since the purpose, as stated above, is to get the reader to think more carefully on the awkward topics of machines “thinking” and their vision of their personal future, you the reader should take your own opinions and try first to express them clearly, and then examine them with counter arguments, back and forth, until you are fairly clear as to what you believe and why you believe it. It is none of the author’s business in this matter what you believe, but it is the author’s business to get you to think and articulate your position clearly. For readers of the book I suggest instead of reading the next pages you stop and discuss with yourself, or possibly friends, these nasty problems; the surer you are of one side the more you should probably argue the other side!

Richard Hamming, The Art of Doing Science and Engineering (1997, PDF)