Rationality Quotes September 2014
post by jaime2000 · 2014-09-03T21:36:30.032Z · LW · GW · Legacy · 379 comments
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
- Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
Comments sorted by top scores.
comment by dspeyer · 2014-09-03T17:06:19.949Z · LW(p) · GW(p)
A good rule of thumb might be, “If I added a zero to this number, would the sentence containing it mean something different to me?” If the answer is “no,” maybe the number has no business being in the sentence in the first place.
Randall Munroe on communicating with humans
↑ comment by Viliam_Bur · 2014-09-05T13:40:21.163Z · LW(p) · GW(p)
Related: When (Not) To Use Probabilities:
I would advise, in most cases, against using non-numerical procedures to create what appear to be numerical probabilities. Numbers should come from numbers. (...) you shouldn't go around thinking that, if you translate your gut feeling into "one in a thousand", then, on occasions when you emit these verbal words, the corresponding event will happen around one in a thousand times. Your brain is not so well-calibrated.
This specific topic came up recently in the context of the Large Hadron Collider (...) the speaker actually purported to assign a probability of at least 1 in 1000 that the theory, model, or calculations in the LHC paper were wrong; and a probability of at least 1 in 1000 that, if the theory or model or calculations were wrong, the LHC would destroy the world.
I object to the air of authority given these numbers pulled out of thin air. (...) No matter what other physics papers had been published previously, the authors would have used the same argument and made up the same numerical probabilities
↑ comment by dspeyer · 2014-09-05T16:10:03.320Z · LW(p) · GW(p)
For the opposite claim: If It’s Worth Doing, It’s Worth Doing With Made-Up Statistics:
Remember the Bayes mammogram problem? The correct answer is 7.8%; most doctors (and others) intuitively feel like the answer should be about 80%. So doctors – who are specifically trained in having good intuitive judgment about diseases – are wrong by an order of magnitude. And it “only” being one order of magnitude is not to the doctors’ credit: by changing the numbers in the problem we can make doctors’ answers as wrong as we want.
So the doctors probably would be better off explicitly doing the Bayesian calculation. But suppose some doctor’s internet is down (you have NO IDEA how much doctors secretly rely on the Internet) and she can’t remember the prevalence of breast cancer. If the doctor thinks her guess will be off by less than an order of magnitude, then making up a number and plugging it into Bayes will be more accurate than just using a gut feeling about how likely the test is to work. Even making up numbers based on basic knowledge like “Most women do not have breast cancer at any given time” might be enough to make Bayes Theorem outperform intuitive decision-making in many cases.
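The 7.8% figure quoted above can be checked with a few lines of arithmetic. As a minimal sketch (using the standard numbers from the mammogram problem: 1% prevalence, 80% sensitivity, 9.6% false-positive rate):

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(disease | positive test) via Bayes' theorem."""
    true_pos = prior * sensitivity                    # P(positive and sick)
    false_pos = (1 - prior) * false_positive_rate     # P(positive and healthy)
    return true_pos / (true_pos + false_pos)

# Standard mammogram numbers: 1% prevalence, 80% sensitivity, 9.6% false positives
p = posterior(0.01, 0.80, 0.096)
print(round(p, 3))  # 0.078 -- roughly a tenth of the ~0.8 most doctors guess
```

Note how robust the calculation is to a roughly-guessed prior, which is Yvain's point: plugging in a made-up prevalence of 2% instead of 1% still gives about 15%, far closer to the truth than the intuitive 80%.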
I tend to side with Yvain on this one, at least so long as your argument isn't going to be judged by its appearance. Specifically on the LHC thing, I think making up the 1 in 1000 makes it possible to substantively argue about the risks in a way that "there's a chance" doesn't.
↑ comment by Richard_Kennaway · 2014-09-14T06:48:40.889Z · LW(p) · GW(p)
A detailed reading provides room for these to coexist. Compare:
If I added a zero to this number
with
off by less than an order of magnitude
↑ comment by A1987dM (army1987) · 2014-09-14T15:23:15.735Z · LW(p) · GW(p)
I'd agree with Randall Munroe more wholeheartedly if he had said “added a couple of zeros” instead.
comment by James_Miller · 2014-09-05T20:36:09.389Z · LW(p) · GW(p)
A skilled professional I know had to turn down an important freelance assignment because of a recurring commitment to chauffeur her son to a resumé-building “social action” assignment required by his high school. This involved driving the boy for 45 minutes to a community center, cooling her heels while he sorted used clothing for charity, and driving him back—forgoing income which, judiciously donated, could have fed, clothed, and inoculated an African village. The dubious “lessons” of this forced labor as an overqualified ragpicker are that children are entitled to treat their mothers’ time as worth nothing, that you can make the world a better place by destroying economic value, and that the moral worth of an action should be measured by the conspicuousness of the sacrifice rather than the gain to the beneficiary.
↑ comment by [deleted] · 2014-09-10T20:32:12.250Z · LW(p) · GW(p)
The dubious “lessons” of this forced labor as an overqualified ragpicker are that children are entitled to treat their mothers’ time as worth nothing, that you can make the world a better place by destroying economic value, and that the moral worth of an action should be measured by the conspicuousness of the sacrifice rather than the gain to the beneficiary.
What about: "using the education system to collect forced labor as a 'lesson' in altruism teaches selfishness and fails at altruism"?
↑ comment by Jackercrack · 2014-10-16T16:36:28.608Z · LW(p) · GW(p)
I have to ask, do people ever really believe that these sorts of thing are actually about helping people? I seem to recall my own ragpicking was pitched mainly in terms of how it would help my CV to have done some volunteering. That said, I can't tell if I'm just falling to hindsight bias and reinterpreting past events in favour of my current understanding of altruism, which is why I'm asking.
Makes me wonder how things would look if schools had a lesson on effective altruism a few times a year. Surely not everyone would agree, but the waterline might rise a little.
comment by Alejandro1 · 2014-09-01T19:10:29.303Z · LW(p) · GW(p)
I’m always fascinated by the number of people who proudly build columns, tweets, blog posts or Facebook posts around the same core statement: “I don’t understand how anyone could (oppose legal abortion/support a carbon tax/sympathize with the Palestinians over the Israelis/want to privatize Social Security/insert your pet issue here)." It’s such an interesting statement, because it has three layers of meaning.
The first layer is the literal meaning of the words: I lack the knowledge and understanding to figure this out. But the second, intended meaning is the opposite: I am such a superior moral being that I cannot even imagine the cognitive errors or moral turpitude that could lead someone to such obviously wrong conclusions. And yet, the third, true meaning is actually more like the first: I lack the empathy, moral imagination or analytical skills to attempt even a basic understanding of the people who disagree with me.
In short, “I’m stupid.” Something that few people would ever post so starkly on their Facebook feeds.
↑ comment by dspeyer · 2014-09-02T02:29:58.484Z · LW(p) · GW(p)
While I agree with your actual point, I note with amusement that what's worse is the people who claim they do understand: "I understand that you want to own a gun because it's a penis-substitute", "I understand that you don't want me to own a gun because you live in a fantasy world where there's no crime", "I understand that you're talking about my beauty because you think you own me", "I understand that you complain about people talking about your beauty as a way of boasting about how beautiful you are."... None of these explanations are anywhere near true.
It would be a sign of wisdom if someone actually did post "I'm stupid: I can hardly ever understand the viewpoint of anyone who disagrees with me."
↑ comment by satt · 2014-09-02T03:52:09.474Z · LW(p) · GW(p)
It would be a sign of wisdom if someone actually did post "I'm stupid: I can hardly ever understand the viewpoint of anyone who disagrees with me."
Ah, but would it be, though?
↑ comment by devas · 2014-09-02T09:08:56.833Z · LW(p) · GW(p)
It would probably be some kind of weird signalling game. On the other hand, posting: "I don't understand how etc etc, please, somebody explain to me the reasoning behind it" would be a good strategy to start debating and open an avenue to "convert" others
↑ comment by arundelo · 2014-09-04T14:44:42.669Z · LW(p) · GW(p)
I like this and agree that usually or at least often the people making these "I don't understand how anyone could ..." statements aren't interested in actually understanding the people they disagree with. But I also liked Ozy's comment:
I dunno. I feel like "I don't understand how anyone could believe X" is a much, much better position to take on issues than "I know exactly why my opponents disagree with me! It is because they are stupid and evil!" The former at least opens the possibility that your opponents believe things for good reasons that you don't understand -- which is often true!
In general, I believe it is a good thing to admit ignorance when one is actually ignorant, and I am willing to put up with a certain number of dumbass signalling games if it furthers this goal.
↑ comment by arundelo · 2014-09-04T15:53:27.317Z · LW(p) · GW(p)
Hacker School has a set of "social rules [...] designed to curtail specific behavior we've found to be destructive to a supportive, productive, and fun learning environment." One of them is "no feigning surprise":
The first rule means you shouldn't act surprised when people say they don't know something. This applies to both technical things ("What?! I can't believe you don't know what the stack is!") and non-technical things ("You don't know who RMS is?!"). Feigning surprise has absolutely no social or educational benefit: When people feign surprise, it's usually to make them feel better about themselves and others feel worse. And even when that's not the intention, it's almost always the effect. As you've probably already guessed, this rule is tightly coupled to our belief in the importance of people feeling comfortable saying "I don't know" and "I don't understand."
I think this is a good rule and when I find out someone doesn't know something that I think they "should" already know, I instead try to react as in xkcd 1053 (or by chalking it up to a momentary maladaptive brain activity change on their part, or by admitting that it's probably not that important that they know this thing). But I think "feigning surprise" is a bad name, because when I'm in this situation, I'm never pretending to be surprised in order to demonstrate how smart I am, I am always genuinely surprised. (Surprise means my model of the world is about to get better. Yay!)
↑ comment by NancyLebovitz · 2014-09-08T18:34:28.444Z · LW(p) · GW(p)
I don't think that sort of surprise is necessarily feigned. However, I do think it's usually better if that surprise isn't mentioned.
↑ comment by Richard_Kennaway · 2014-09-04T16:17:53.855Z · LW(p) · GW(p)
I dunno. I feel like "I don't understand how anyone could believe X" is a much, much better position to take on issues than "I know exactly why my opponents disagree with me! It is because they are stupid and evil!" The former at least opens the possibility that your opponents believe things for good reasons that you don't understand -- which is often true!
I am imagining the following exchange:
"I don't understand how anyone could believe X!"
"Great, the first step to understanding is noticing that you don't understand. Now, let me show you why X is true..."
I suspect that most people saying the first line would not take well to hearing the second.
↑ comment by arundelo · 2014-09-04T16:47:10.730Z · LW(p) · GW(p)
I suspect the same, but still think "I can't understand why anyone would believe X" is probably better than "people who believe X or say they believe X only do so because they hate [children / freedom / poor people / rich people / black people / white people / this great country of ours / etc.]"
↑ comment by Viliam_Bur · 2014-09-05T13:30:20.468Z · LW(p) · GW(p)
We could charitably translate "I don't understand how anyone could X" as "I notice that my model of people who X is so bad, that if I tried to explain it, I would probably generate a strawman".
↑ comment by ChristianKl · 2014-09-04T13:10:55.655Z · LW(p) · GW(p)
Or add a fourth layer: I think that I will rise in status by publicly signalling to my Facebook friends: "I lack the ability or willingness to attempt even a basic understanding of the people who disagree with me."
↑ comment by fortyeridania · 2014-09-04T05:54:26.256Z · LW(p) · GW(p)
People do lots of silly things to signal commitment; the silliness is part of the point. This is a reason initiation rituals are often humiliating, and why members of minor religions often wear distinctive clothing or hairstyles. (I think I got this from this podcast interview with Larry Iannaccone.)
I think posts like the ones to which McArdle is referring, and the beliefs underlying them, are further examples of signaling attire. "I'm so committed, I'm even blind to whatever could be motivating the other side."
A related podcast is with Arnold Kling on his e-book (which I enjoyed) The Three Languages of Politics. It's about (duh) politics--specifically, American politics--but it also contains an interesting and helpful discussion on seeing things from others' point of view, and explicitly points to commitment-signaling (and its relation to beliefs) as a reason people fail to see eye to eye.
↑ comment by TheAncientGeek · 2014-09-01T19:59:03.770Z · LW(p) · GW(p)
Or, (4), "I keep asking, but they won't say"....
↑ comment by Vulture · 2014-09-01T22:13:17.576Z · LW(p) · GW(p)
Does that happen?
↑ comment by TheAncientGeek · 2014-09-01T23:14:00.567Z · LW(p) · GW(p)
It does to me. Have you tried getting sense out of an NRx or HBD.er?
↑ comment by Vulture · 2014-09-01T23:35:53.104Z · LW(p) · GW(p)
Haven't tried it myself, but it seems to work for Scott Alexander
↑ comment by TheAncientGeek · 2014-09-02T09:04:24.568Z · LW(p) · GW(p)
NRx are so bad at communicating their position in language anyone can understand that they refer to Scott's ANTI-reaction FAQ to explain it. This is the guy who steelmanned Gene "Timecube" Ray. He has superpowers.
Replies from: army1987, army1987↑ comment by A1987dM (army1987) · 2014-09-02T09:27:25.996Z · LW(p) · GW(p)
“Reactionary Philosophy In An Enormous, Planet-Sized Nutshell” is where he explains what the NR position is, and “The Anti-Reactionary FAQ” is where he explains why he disagrees with it. The former is what neoreactionaries have linked to in order to explain their position.
↑ comment by A1987dM (army1987) · 2014-09-02T09:35:20.430Z · LW(p) · GW(p)
This is the guy who steelmanned Gene "Timecube" Ray. He has superpowers.
Yes. That's why I'm somewhat surprised he seems to interpret “reptilian aliens” literally.
↑ comment by ChristianKl · 2014-09-04T13:45:39.448Z · LW(p) · GW(p)
There's no reason to use those nonstandard abbreviations. Neither of them is in Urban Dictionary.
"NRx" is probably neoreaction, but it doesn't make it into the first 10 Google results. "HBD.er" in that spelling seems to be wrong, as "HBD'er" is what you find when you Google it.
↑ comment by Azathoth123 · 2014-09-01T23:38:17.244Z · LW(p) · GW(p)
Have you tried getting sense out of an NRx or HBD.er?
Yes, what they say frequently makes a lot more sense than the mainstream position on the issue in question.
↑ comment by [deleted] · 2014-09-10T20:40:35.127Z · LW(p) · GW(p)
I completely disagree. Their grasp of politics is largely based on meta-contrarianism, and has failed to "snap back" into basing one's views on a positive program whose goodness and rationality can be argued for with evidence.
↑ comment by Azathoth123 · 2014-09-11T03:16:14.199Z · LW(p) · GW(p)
Their grasp of politics is largely based on meta-contrarianism, and has failed to "snap back" into basing one's views on a positive program whose goodness and rationality can be argued for with evidence.
Huh? HBD'ers are making observations about the world; they do not have a "positive program". As for NRx, they do have a positive program and do use evidence to argue for it; see the NRx thread and the various blogs linked there for some examples.
↑ comment by TheAncientGeek · 2014-09-02T09:15:10.155Z · LW(p) · GW(p)
Makes sense to whom? They are capable of making converts, so they are capable of making sense to some people -- people who 90% agree with them already. It's called a dog whistle: not being hearable by some people is built in.
↑ comment by simplicio · 2014-09-10T20:22:10.289Z · LW(p) · GW(p)
Bracket neoreaction for the time being. I get that you disagree with HBD positions, but do you literally have trouble comprehending their meaning?
↑ comment by TheAncientGeek · 2014-09-20T21:49:18.211Z · LW(p) · GW(p)
Yes. One time someone was moaning about immigrants from countries that don't have a long history of democracy, and I genuinely thought he meant eastern Europeans. He didn't, because they are white Christians and he doesn't object to white Christians. So to understand who he is objecting to, I have to apply a mental filter he has and I don't.
↑ comment by [deleted] · 2014-09-10T20:36:09.219Z · LW(p) · GW(p)
Hmmm... let's try filling something else in there.
"I don't understand how anyone could support ISIS/Bosnian genocide/North Darfur."
While I think a person is indeed more effective at life for being able to perform the cognitive contortions necessary to bend their way into the mindset of a murderous totalitarian (without actually believing what they're understanding), I don't consider normal people lacking for their failure to understand refined murderous evil of the particularly uncommon kind -- any more than I expect them to understand the appeal of furry fandom (which I feel a bit guilty for picking out as the canonical Ridiculously Uncommon Weird Thing).
↑ comment by Cyclismo · 2014-09-15T18:53:23.228Z · LW(p) · GW(p)
You don't have to share a taste for, or approval of "...refined murderous evil of the particularly uncommon kind..." It can be explained as a reaction to events or conditions, and history is full of examples. HOWEVER. We have this language that we share, and it signifies. I understand that a rapist has mental instability and other mental health issues that cause him to act not in accordance with common perceptions of minimum human decency. But I can't say out loud, "I understand why some men rape women." It's an example of a truth that is too dangerous to say because emotions prevent others from hearing it.
↑ comment by hairyfigment · 2014-09-15T19:51:10.746Z · LW(p) · GW(p)
You can (and did) say that, you just can't say it on Twitter with no context without causing people to yell at you. ETA: you like language? Gricean maxims.
↑ comment by Jiro · 2014-09-02T01:35:54.470Z · LW(p) · GW(p)
Now repeat the same statement, only instead of abortions and carbon taxes, substitute the words "believe in homeopathy". (Creationism also works.)
People do say that--yet it doesn't mean any of the things the quote claims it means (at least not in a nontrivial sense).
↑ comment by Azathoth123 · 2014-09-02T01:52:24.254Z · LW(p) · GW(p)
Then what does it mean in those cases? Because the only ones I can think of are the three Megan described.
If you mean "I can't imagine how anyone could be so stupid as to believe in homeopathy/creationism", which is my best guess for what you mean, that's a special case of the second meaning.
↑ comment by Jiro · 2014-09-02T03:22:33.999Z · LW(p) · GW(p)
"I don't understand how someone could believe X" typically means that the speaker doesn't understand how someone could believe in X based on good reasoning. Understanding how stupidity led someone to believe X doesn't count.
Normal conversation cannot be parsed literally. It is literally true that understanding how someone incorrectly believes X is a subclass of understanding how someone believes in X; but it's not what those words typically connote.
↑ comment by ChristianKl · 2014-09-04T13:13:53.438Z · LW(p) · GW(p)
"I don't understand how someone could believe X" typically means that the speaker doesn't understand how someone could believe in X based on good reasoning. Understanding how stupidity led someone to believe X doesn't count.
Most people who say "I don't understand how someone could believe X" would fail a reverse Turing test for that position. They often literally don't understand how someone comes to believe X.
↑ comment by Richard_Kennaway · 2014-09-02T05:45:15.825Z · LW(p) · GW(p)
Normal conversation cannot be parsed literally.
I don't think that applies here. Your addition "based on good reasoning" is not a non-literal meaning, but a filling in of omitted detail. Gricean implicature is not non-literality, and the addition does not take the example outside McArdle's analysis.
As always, confusion is a property of the confused person, not of the thing outside themselves that they are confused about. If a person says they cannot understand how anyone could etc., that is, indeed, literally true. That person cannot understand the phenomenon; that is their problem. Yet their intended implication, which McArdle is pointing out does not follow, is that all of the problem is in the other person. Even if the other person is in error, how can one engage with them from the position of "I cannot understand how etc."? The words are an act of disengagement, behind a smokescreen that McArdle blows away.
↑ comment by Jiro · 2014-09-02T06:13:49.602Z · LW(p) · GW(p)
Gricean implicature is not non-literality,
Sure it is. The qualifier changes the meaning of the statement. By definition, if the sentence lacks the qualifier but is to be interpreted as if it has one, it is to be interpreted differently than its literal words. Having to be interpreted as containing detail that is not explicitly written is a type of non-literalness.
If a person says they cannot understand how anyone could etc., that is, indeed, literally true.
No, it's not. I understand how someone can believe in creationism: they either misunderstand science (probably due to religious bias) or don't actually believe science works at all when it conflicts with religion. Saying "I don't understand how someone can believe in creationism" is literally false--I do understand how.
What it means is "I don't understand how someone can correctly believe in creationism." I understand how someone can believe in creationism, but my understanding involves the believer making mistakes. The statement communicates that I don't know of a reason other than making mistakes, not that I don't know any reason at all.
Even if the other person is in error, how can one engage with them from the position of "I cannot understand how etc."?
Because "I don't understand how" is synonymous, in ordinary conversation, with "the other person appears to be in error." It does not mean that I literally don't understand, but rather that I understand it as an error, so it is irrelevant that literally not understanding is an act of disengagement.
↑ comment by MarkusRamikin · 2014-09-02T09:27:33.338Z · LW(p) · GW(p)
Now I just thought of this, so maybe I'm wrong, but I don't think "I don't understand how someone can think X" is really meant as any sort of piece of reasonable logic, or a substitution for one. I suspect this is merely the sort of stuff people come up with when made to think about it.
Rather, "I don't understand how..." is an appeal to the built in expectation that things make obvious sense. If I want to claim that "what you're saying is nontribal and I have nothing to do with it", stating that you're not making sense to me works whether or not I can actually follow your reasoning. Since if you really were not making sense to me with minimum effort on my part, this would imply bad things about you and what you're saying. It's just a rejection that makes no sense if you think about it, but it's not meant to be thought about - it's really closer to "la la la I am not listening to you".
Am I making sense?
↑ comment by Jiro · 2014-09-02T15:11:07.126Z · LW(p) · GW(p)
This is close, but I don't think it captures everything. I used the examples of creationism and homeopathy because they are unusual examples where there isn't room for reasonable disagreement. Every person who believes in one of those does so because of bias, ignorance, or error. This disentangles the question of "what is meant by the statement" and "why would anyone want to say what is meant by the statement".
You have correctly identified why, for most topics, someone would want to say such a thing. Normally, "there's no room for reasonable disagreement; you're just wrong" is indeed used as a tribal membership indicator. But the statement doesn't mean "what you're saying is nontribal", it's just that legitimate, nontribal, reasons to say "you are just wrong" are rare.
↑ comment by Azathoth123 · 2014-09-03T02:52:46.896Z · LW(p) · GW(p)
Every person who believes in one of those does so because of bias, ignorance, or error.
Well that's true for every false belief anyone has. So what's so special about those examples?
You say "there isn't room for reasonable disagreement", which taken literally is just another way of phrasing "I don't understand how anyone could believe X". In any case, could you expand on what you mean by "not room for reasonable disagreement" since in context it appears to mean "all the tribes present agree with it".
↑ comment by Jiro · 2014-09-03T03:31:35.490Z · LW(p) · GW(p)
Well that's true for every false belief anyone has. So what's so special about those examples?
You're being literal again. Every person who believes in one of those primarily does so because of major bias, ignorance, or error. You can't just distrust a single source you should have trusted, or make a single bad calculation, and end up believing in creationism or homeopathy. Your belief-finding process has to contain fundamental flaws for that.
You say "there isn't room for reasonable disagreement", which taken literally is just another way of phrasing "I don't understand how anyone could believe X".
And "it has three sides" is just another way of phrasing "it is a triangle", but I can still explain what a triangle is by describing it as something with three sides. If it wasn't synonymous, it wouldn't be an explanation.
(Actually, it's not quite synonymous, for the same reason that the original statement wasn't correct: if you're taking it literally, "I don't understand how anyone could believe X" excludes cases where you understand that someone makes a mistake, and "there isn't room for reasonable disagreement" includes such cases.)
in context it appears to mean "all the tribes present agree with it".
You can describe anything which is believed by some people and not others in terms of tribes believing it. But not all such descriptions are equally useful; if the tribes fall into categories, it is better to specify the categories.
↑ comment by ChristianKl · 2014-09-04T13:38:10.538Z · LW(p) · GW(p)
You can't just distrust a single source you should have trusted, or make a single bad calculation, and end up believing in creationism or homeopathy.
You don't even need to do a bad calculation to believe in homeopathy. You just need to be in a social environment where everyone believes in homeopathy and not care enough about the issue to invest more effort into it.
If you simply follow the rule: If I live in a Western country it makes sense to trust the official government health ministry when it publishes information about health issues, you might come away with believing in homeopathy if you happen to live in Switzerland.
There are a lot of decent heuristics that can leave someone with that belief even if the belief is wrong.
↑ comment by Richard_Kennaway · 2014-09-02T07:22:14.649Z · LW(p) · GW(p)
Non-literality isn't a get-out-of-your-words-free card. There is a clear difference between saying "you appear to be in error" and "I can't understand how anyone could think that", and the difference is clearly expressed by the literal meanings of those words.
And to explicate "I don't understand etc." with "Of course I do understand how you could think that, it's because you're ignorant or stupid" is not an improvement.
↑ comment by Jiro · 2014-09-02T09:13:20.646Z · LW(p) · GW(p)
Non-literality isn't a get-out-of-your-words-free card. There is a clear difference between saying "you appear to be in error" and "I can't understand how anyone could think that", and the difference is clearly expressed by the literal meanings of those words.
Non-literalness is a get-out-of-your-words-free card when the words are normally used in conversation, by English speakers in general, to mean something non-literal. Yes, if you just invented the non-literal meaning yourself, there are limits to how far from the literal meaning you can be and still expect to be understood, but these limits do not apply when the non-literal meaning is already established usage.
And to explicate "I don't understand etc." with "Of course I do understand how you could think that, it's because you're ignorant or stupid" is not an improvement.
The original quote gives the intended meaning as "I am such a superior moral being that I cannot even imagine the cognitive errors or moral turpitude that could lead someone to..." In other words, the original rationality quote explicitly excludes the possibility of "I understand you believe it because you're ignorant or stupid". It misinterprets the statement as literally claiming that you don't understand in any way whatsoever.
The point is that the quote is a bad rationality quote because it makes a misinterpretation. Whether the statement that it misinterprets is itself a good thing to say is irrelevant to the question of whether it is being misinterpreted.
↑ comment by Richard_Kennaway · 2014-09-03T07:27:38.396Z · LW(p) · GW(p)
Yes, if you just invented the non-literal meaning yourself, there are limits to how far from the literal meaning you can be and still expect to be understood, but these limits do not apply when the non-literal meaning is already established usage.
Established by whom? You are the one claiming that
"I don't understand how" is synonymous, in ordinary conversation, with "the other person appears to be in error."
These two expressions mean very different things. Notice that I am claiming that you are in error, but not saying, figuratively or literally, that I cannot understand how you could possibly think that.
Non-literalness is a get-out-of-your-words-free card when the words are normally used in conversation, by English speakers in general, to mean something non-literal.
That is not how figurative language works. I could expand on that at length, but I don't think it's worth it at this point.
Replies from: Jiro↑ comment by Jiro · 2014-09-03T13:05:34.181Z · LW(p) · GW(p)
Notice that I am claiming that you are in error, but not saying, figuratively or literally, that I cannot understand how you could possibly think that.
"A is synonymous with B" doesn't mean "every time someone said B, they also said A". "You've made more mistakes than a zebra has stripes" is also synonymous with "you're in error" and you clearly didn't say that, either.
(Of course, "is synonymous with" means "makes the same assertion about the main topic", not "is identical in all ways".)
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-09-04T07:27:44.388Z · LW(p) · GW(p)
Of course, "is synonymous with" means "makes the same assertion about the main topic"
Indeed. "You've made more mistakes than a zebra has stripes" is therefore not synonymous with "you're in error". The former implies the latter, but the latter does not imply even the figurative sense of the former.
If what someone is actually thinking when they say "you've made more mistakes than a zebra has stripes" is no more than "you're in error", then they have used the wrong words to express their thought.
↑ comment by Cyclismo · 2014-09-15T16:11:18.578Z · LW(p) · GW(p)
The art of condescension is subtle and nuanced. "I'm always fascinated by..." can be sincere or not--when it is not, it is a variation on, "It never ceases to amaze me how..." If you were across the table from me, Alejandro, I could tell by your eyes. Most FB posts, tweets, blog posts and comments on magazine and newspaper articles are as bad or worse than what is described here. Rants masquerading as comments. That's why I like this venue here at LessWrong. Commenters actually trying to get more clarity, trying to make sure they understand, trying to make it clear with sincerely constructive criticism that they believe a better argument could be stated. If only it could be spread around the web-o-sphere. Virally.
↑ comment by [deleted] · 2014-09-02T03:20:01.400Z · LW(p) · GW(p)
In modernity at least, moral values are incommensurable, insofar as they are generally internally coherent, and no standard prior to and independent of their first principles can adjudicate between competitor theories. Hence the shrill, circular debates in politics, which make no progress and simply cannot. I think moral utterances are appropriately conceptualised through an emotivist conception of use: as being used to express preference and attitude, and to make the person under conversation register that preference and attitude, while the sense/reference of the words is nevertheless inherited from fragments of (more than anything else, in the Western world) Christian Aristotelianism. The question then turns to one of where the preferences and attitudes with ostensible moral form come from, and how we should evaluate this second-order phenomenon.
Replies from: Nonecomment by B_For_Bandana · 2014-09-02T01:25:28.252Z · LW(p) · GW(p)
Always go to other people's funerals; otherwise they won't go to yours.
Yogi Berra, on Timeless Decision Theory.
Replies from: timujin↑ comment by timujin · 2014-09-02T11:27:41.497Z · LW(p) · GW(p)
If only I cared about who goes to my funeral.
Replies from: ishi↑ comment by ishi · 2014-09-03T10:41:01.253Z · LW(p) · GW(p)
At my funeral, there will be a guest list and not everyone will be on it (sorry). And, i'm going to be working the door, to make sure only the correct people get in. (After everyone is in, since i've studied a lot with Rip Van Winkle about Easter Sunday, with Eurydice from the story of Orpheus in Greece, and with F. Tipler (The Physics of Immortality), i'll go to my casket for the viewing.)
However, because i'm not a Zen to Done or 'getting things done' type, I haven't set a date for my funeral yet. The only thing I really get done is my list of new year's resolutions; i make one every year and follow it religiously. It has one item---'next year, i'll stop procrastinating' (since i need to learn some new skills for employment purposes). This follows from the more fundamental principle or commandment, 'If you can do it tomorrow, why do it today?'. (I may also sell advance tickets to my funeral at a discount, so get them early!!! After all, you can't take it with you, and i think the role of helping people get things done by rendering unto mammon what is his or hers is one of the most inspiring--though i guess the song 'Frankenstein' by the NY Dolls has other possible inspiring figures.)
Replies from: timujincomment by Zubon · 2014-09-03T22:47:34.187Z · LW(p) · GW(p)
Your younger nerd takes offense quickly when someone near him begins to utter declarative sentences, because he reads into it an assertion that he, the nerd, does not already know the information being imparted. But your older nerd has more self-confidence, and besides, understands that frequently people need to think out loud. And highly advanced nerds will furthermore understand that uttering declarative sentences whose contents are already known to all present is part of the social process of making conversation and therefore should not be construed as aggression under any circumstances.
-- Cryptonomicon by Neal Stephenson
Replies from: Nornagest↑ comment by Nornagest · 2014-09-03T23:16:32.861Z · LW(p) · GW(p)
Neal Stephenson is good as a sci-fi writer, but I think he's almost as good as an ethnographer of nerds. Pretty much everything he writes has something like this in it, and most of it is spot-on.
On the other hand, he does occasionally succumb to a sort of mild geek-supremacist streak (best observed in Anathem, unless you're one of the six people besides me who were obsessed enough to read In The Beginning... Was The Command Line).
Replies from: VAuroch, Richard_Kennaway, None, MarkusRamikin↑ comment by Richard_Kennaway · 2014-09-11T17:43:17.870Z · LW(p) · GW(p)
unless you're one of the six people besides me who were obsessed enough to read In The Beginning... Was The Command Line).
It's a well-known essay. It even has a Wikipedia article.
I just re-read, well, re-skimmed it. Ah, the nostalgia. It's very dated now. 15 years on, its prediction that proprietary operating systems would lose out to free software has completely failed to come true. Linux still ticks over, great for running servers and signalling hacker cred, but if it's so great, why isn't everyone using it? At most it's one of three major platforms: Windows, OSX, and Linux. Or two out of five if you add iOS and Android (which is based on Linux). OS domination by Linux is no closer, and although there's a billion people using Android devices, command lines are not part of their experience.
Stephenson wrote his essay (and I read it) before Apple switched to Unix in the form of OSX, but you can't really say that OSX is Unix plus a GUI, rather OSX is an operating system that includes a Unix interface. In other words, exactly what Stephenson asked for:
The ideal OS for me would be one that had a well-designed GUI that was easy to set up and use, but that included terminal windows where I could revert to the command line interface, and run GNU software, when it made sense. A few years ago, Be Inc. invented exactly that OS. It is called the BeOS.
BeOS failed, and OSX appeared three years after Stephenson's essay. I wonder what he thinks of them now—both OSX and In the Beginning.
Replies from: Lumifer, Nornagest↑ comment by Lumifer · 2014-09-11T18:35:32.359Z · LW(p) · GW(p)
you can't really say that OSX is Unix plus a GUI, rather OSX is an operating system that includes a Unix interface
That is a debatable point :-)
UNIX can be defined in many ways -- historically (what did the codebase evolve from), philosophically, technically (monolithic kernel, etc.), practically (availability and free access to the usual toolchains), etc.
I don't like OSX and Apple in general because I really don't like walled gardens and Apple operates on the "my way or the highway" principle. I generally run Windows for Office, Photoshop, games, etc. and Linux, nowadays usually Ubuntu, for heavy lifting. I am also a big fan of VMs which make a lot of things very convenient and, in particular, free you from having to make the big choice of the OS.
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2014-09-12T17:40:17.210Z · LW(p) · GW(p)
Apple operates on the "my way or the highway" principle
FYI: The 'you can't run this untrusted code' dialog is easy to get around.
Replies from: Lumifer, Nornagest↑ comment by Nornagest · 2014-09-12T18:13:00.377Z · LW(p) · GW(p)
Can't speak for Lumifer, but I was more annoyed by the fact that (the version I got of) OSX doesn't ship with a working developer toolchain, and that getting one requires either jumping through Apple's hoops and signing up for a paid developer account, or doing a lot of sketchy stuff to the guts of the OS. This on a POSIX-compliant system! Cygwin is less of a pain, and it's purely a bolt-on framework.
(ETA: This is probably an exaggeration or an unusual problem; see below.)
It was particularly frustrating in my case because of versioning issues, but those wouldn't have applied to most people. Or to me if I'd been prompt, which I hadn't.
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2014-09-12T21:29:03.241Z · LW(p) · GW(p)
You do not need to pay to get the developer tools. I have never paid for a compiler*, and I develop frequently.
*(other than LabView, which I didn't personally pay for but my labs did, and is definitely not part of XCode)
Replies from: Nornagest↑ comment by Nornagest · 2014-09-12T22:01:30.106Z · LW(p) · GW(p)
After some Googling, it seems that version problems may have been more central than I'd recalled. Xcode is free and includes command-line tools, but looking at it brings up vague memories of incompatibility with my OS at the time. The Apple developer website allows direct download of those tools but also requires a paid signup. And apparently trying to invoke gcc or the like from the command line should have brought up an update option, but that definitely didn't happen. Perhaps it wasn't an option in an OS build as old as mine, although it wouldn't have been older than 2009 or 2010. (I eventually just threw up my hands and installed an Ubuntu virt through Parallels.)
So, probably less severe than I'd thought, but the basic problem remains: violating Apple's assumptions is a bit like being a gazelle wending your way back to a familiar watering hole only to get splattered by a Hummer howling down the six-lane highway that's since been built in front of it.
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2014-09-13T10:53:25.488Z · LW(p) · GW(p)
You can get it through the app store, which means you need an account with Apple, but you do not need to pay to get this account. It really is free.
I would note that violating any operating system's assumptions makes bad things happen.
↑ comment by Nornagest · 2014-09-11T18:17:06.058Z · LW(p) · GW(p)
It's a well-known essay. It even has a Wikipedia article.
Yeah, I bought a hard copy in a non-technical bookstore. "Six people" was a joke based on its, er, specialized audience compared to the likes of Snow Crash; in terms of absolute numbers it's probably less obscure than, say, Zodiac.
If memory serves, Stephenson came out in favor of OSX a couple years after its release, comparing it to BeOS in the context of his essay. I can't find the cite now, though. Speaking for myself, I find OSX's ability to transition more-or-less seamlessly between GUI and command-line modes appealing, but its walled developer garden unspeakably annoying.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-09-11T19:54:57.243Z · LW(p) · GW(p)
If memory serves, Stephenson came out in favor of OSX a couple years after its release
With some googling, I found this, a version of ITBWTCL annotated (by someone else) five years later, including a quote from Stephenson, saying that the essay "is now badly obsolete and probably needs a thorough revision". The quote is quoted in many places, but the only link I turned up for it on his own website was dead (not on the Wayback Machine either).
↑ comment by MarkusRamikin · 2014-09-04T05:44:37.478Z · LW(p) · GW(p)
On the other hand, he does occasionally succumb to a sort of mild geek-supremacist streak
You say that like it's a bad thing.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-09-06T18:45:34.855Z · LW(p) · GW(p)
You say that like it's a bad thing.
comment by michaelkeenan · 2014-09-01T22:23:50.911Z · LW(p) · GW(p)
A raise is only a raise for thirty days; after that, it’s just your salary.
-- David Russo
Replies from: None↑ comment by [deleted] · 2014-09-04T10:23:54.492Z · LW(p) · GW(p)
I don't understand what he wanted to say by this. Could somebody explain?
Replies from: Viliam_Bur, Torello, Lumifer, ChristianKl↑ comment by Viliam_Bur · 2014-09-05T13:42:58.431Z · LW(p) · GW(p)
Instead of giving your employees a $100 monthly raise, give them a $1200 bonus once a year. It's the same money, but it will make them happier, because they will keep noticing it for years.
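The arithmetic behind this can be sketched in a few lines (hypothetical figures, assuming the raise is applied per month to a monthly-paid salary):

```python
# A $100/month raise and a single $1200 annual bonus pay out the same
# total, but the raise is absorbed into the salary anchor after the
# first month, while the bonus arrives as one noticeable event.
monthly_raise = 100
annual_bonus = 1200

total_from_raise = monthly_raise * 12
assert total_from_raise == annual_bonus  # identical cost to the employer
print(total_from_raise)  # 1200 either way
```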
Replies from: Vaniver↑ comment by Vaniver · 2014-09-05T18:13:35.903Z · LW(p) · GW(p)
It'll also be easier to reduce a bonus (because of poor performance on the part of the employee or company) than it will be to reduce a salary.
Replies from: Cyclismo↑ comment by Cyclismo · 2014-09-15T15:45:40.806Z · LW(p) · GW(p)
I say give them smaller raises more frequently. After the first annual bonus, it becomes expected.
Replies from: Azathoth123↑ comment by Azathoth123 · 2014-09-16T02:08:30.676Z · LW(p) · GW(p)
Intermittent reward for the win.
↑ comment by Torello · 2014-09-05T01:49:45.505Z · LW(p) · GW(p)
http://en.wikipedia.org/wiki/Hedonic_treadmill
Basically what Lumifer said.
↑ comment by Lumifer · 2014-09-04T14:58:43.663Z · LW(p) · GW(p)
It speaks to anchoring and to evaluating incentives relative to an expected level.
Basically, receiving a raise is seen as a good thing because you are getting more money than a month ago (anchor). But after a while you will be getting the same amount of money as a month ago (the anchor has moved) so there is no cause for joy.
↑ comment by ChristianKl · 2014-09-04T13:29:57.997Z · LW(p) · GW(p)
While you are getting a raise you might be more motivated to work. However, after a while your new salary just becomes your salary, and you would need a new raise to get additional motivation.
comment by Salemicus · 2014-09-04T16:45:08.382Z · LW(p) · GW(p)
How to compose a successful critical commentary:
You should attempt to re-express your target’s position so clearly, vividly, and fairly that your target says, “Thanks, I wish I’d thought of putting it that way.
You should list any points of agreement (especially if they are not matters of general or widespread agreement).
You should mention anything you have learned from your target.
Only then are you permitted to say so much as a word of rebuttal or criticism.
D.C. Dennett, Intuition Pumps and Other Tools for Thinking. Dennett himself is summarising Anatol Rapoport.
Replies from: AlanCrowe↑ comment by AlanCrowe · 2014-09-05T23:07:11.384Z · LW(p) · GW(p)
I don't see what to do about gaps in arguments. Gaps aren't random. There are little gaps where the original authors have chosen to use their limited word count on other, more delicate, parts of their argument, confident that charitable readers will be happy to fill the small gaps themselves in the obvious ways. There are big gaps where the authors have gone the other way, tip toeing around the weakest points in their argument. Perhaps they hope no-one else will notice. Perhaps they are in denial. Perhaps there are issues with the clarity of the logical structure that make it easy to whiz by the gap without noticing it.
The third perhaps is especially tricky. If you "re-express your target’s position ... clearly" you remove the obfuscation that concealed the gap. Now what? Leaving the gap in clear view creates a strawman. Attempting to fill it draws a certain amount of attention to it; you certainly fail the ideological Turing test because you are making arguments that you opponents don't make. Worse, big gaps are seldom accidental. They are there because they are hard to fill. Indeed it might be the difficulty of filling the gap that made you join the other side of the debate in the first place. What if your best effort to fill the gap is thin and unconvincing?
Example: Some people oppose the repeal of the prohibition of cannabis because "consumption will increase". When you try to make this argument clear you end up distinguishing between good-use and bad-use. There is the relax-on-a-Friday-night-after-work kind of use which is widely accepted in the case of alcohol and can be termed good-use. There is the behaviour that gets called "pissing your talent away" when it is beer-based. That is bad-use.
When you try to bring clarity to the argument you have to replace "consumption will increase" by "bad-use will increase a lot and good-use will increase a little, leading to a net reduction in aggregate welfare." But the original "consumption will increase" was obviously true, while the clearer "bad+++, good+, net--" is less compelling.
The original argument had a gap (just why is an increase in consumption bad?). Writing more clearly exposes the gap. Your target will not say "Thanks for exposing the gap, I wish I'd put it that way.". But it is not an easy gap to fill convincingly. Your target is unlikely to appreciate your efforts on behalf of his case.
Replies from: CCC, AnneOminous↑ comment by CCC · 2014-09-06T15:33:56.590Z · LW(p) · GW(p)
With regards to your example, you try to fix the gap between "consumption will increase" and "that will be a bad thing as a whole" by claiming little good use and much bad use. But I don't think that's the strongest way to bridge that gap.
Rather, I'd suggest that the good use has negligible positive utility - just another way to relax on a Friday night, when there are already plenty of ways to relax on a Friday night, so how much utility does adding another one really give you? - while bad use has significant negative utility (here I may take the chance to sketch the verbal image of a bright young doctor dropping out of university due to bad use). Then I can claim that even if good-use increases by a few orders of magnitude more than bad-use, the net result is nonetheless negative, because bad use is just that terrible; that the negative effects of a single bad-user outweigh the positive effects of a thousand good-users.
As to your main point - what to do when your best effort to fill the gap is thin and unconvincing - the simplest solution would appear to be to go back to the person proposing the position that you are critically commenting about (or someone else who shares his views on the subject), and simply asking. Or to go and look through his writings, and see whether or not he addresses precisely that point. Or to go to a friend (preferably also an intelligent debater) and ask for his best effort to fill the gap, in the hope that it will be a better effort.
Replies from: Luke_A_Somers, khafra↑ comment by Luke_A_Somers · 2014-09-13T11:06:23.553Z · LW(p) · GW(p)
Entirely within the example, not pertaining to rationality per se, and I'm not sure you even hold the position you were arguing about:
1) good use is not restricted to relaxing on a Friday. It also includes effective pain relief with minimal and sometimes helpful side-effects. Medical marijuana use may be used as a cover for recreational use but it is also very real in itself.
2) a young doctor dropping out of university is comparable and perhaps lesser disutility to getting sent to prison. You'd have to get a lot of doctors dropping out to make legalization worse than the way things stand now.
Replies from: CCC↑ comment by CCC · 2014-09-13T17:19:20.285Z · LW(p) · GW(p)
My actual position on the medical marijuana issue is best summarised as "I don't know enough to have developed a firm opinion either way". This also means that I don't really know enough to properly debate on the issue, unfortunately.
Though, looking it up, I see there's a bill currently going through parliament in my part of the world that - if it passes - would legalise it for medicinal use.
Replies from: army1987↑ comment by A1987dM (army1987) · 2014-09-13T17:32:43.721Z · LW(p) · GW(p)
Have you read “Marijuana: Much More Than You Wanted To Know” on Slate Star Codex?
Replies from: CCC↑ comment by khafra · 2014-09-08T15:20:57.508Z · LW(p) · GW(p)
what to do when your best effort to fill the gap is thin and unconvincing - the simplest solution would appear to be to go back to the person proposing the position that you are critically commenting about (or someone else who shares his views on the subject), and simply asking.
So, you go back to the person you're going to argue against, before you start the argument, and ask them about the big gap in their original position? That seems like it could carry the risk of kicking off the argument a little early.
Replies from: CCC, Luke_A_Somers↑ comment by CCC · 2014-09-08T19:38:42.954Z · LW(p) · GW(p)
"Pardon me, sir, but I don't quite understand how you went from Step A to Step C. Do you think you could possibly explain it in a little more detail?"
Accompanied, of course, by a very polite "Thank you" if they make the attempt to do so. Unless someone is going to vehemently lash out at any attempt to politely discuss his position, he's likely to either at least make an attempt (whether by providing a new explanation or directing you to the location of a pre-written one), or to plead lack of time (in which case you're no worse off than before).
Most of the time, he'll have some sort of explanation, that he considered inappropriate to include in the original statement (either because it is "obvious", or because the explanation is rather long and distracting and is beyond the scope of the original essay). Mind you, his explanation might be even more thin and unconvincing than the best you could come up with...
↑ comment by Luke_A_Somers · 2014-09-13T11:01:10.660Z · LW(p) · GW(p)
I think the idea was, 'when you've gotten to this point, that's when your pre-discussion period is over, and it is time to begin asking questions'.
And yes, it is often a good idea to ask questions before taking a position!
↑ comment by AnneOminous · 2014-09-17T04:27:57.076Z · LW(p) · GW(p)
Quote: "The third perhaps is especially tricky. If you "re-express your target’s position ... clearly" you remove the obfuscation that concealed the gap. Now what? Leaving the gap in clear view creates a strawman. Attempting to fill it draws a certain amount of attention to it; you certainly fail the ideological Turing test because you are making arguments that you opponents don't make."
Just no. An argument is an argument. It is complete or not. If there is a gap in the argument, in most cases there are two eventualities: (a) the leap is a true one assuming what others would find obvious, or (b) either an honest error in the argument or an attempt to cover up a flaw in the argument.
If there is a way to "fill in" the argument that is the only way it could be filled in, you are justified in doing so, while pointing out that you are doing so. If either of the (b) cases hold, however, you must still point them out, in order to maintain your own credibility. Especially if you are refuting an argument, the gap should be addressed and not glossed over.
You might treat the (b) situations differently, perhaps politely pointing out that the original author made an error there, or perhaps not-so-politely pointing out that something is amiss. But you still address the issue. If you do not, the onus is now on you, because you have then "adopted" that incomplete or erroneous argument.
For example: your own example argument has a rather huge and glaring hole in it: "bad-use will increase a lot and good-use will increase a little". However, history and modern examples both show this to be false: in the real world, decriminalization has increased bad-use only slightly if at all, and good-use more. (See the paper "The Portugal Experiment" for one good example.)
Was there any problem there with my treatment of this rather gaping "gap" in your argument?
comment by KPier · 2014-09-27T02:51:51.544Z · LW(p) · GW(p)
A shipowner was about to send to sea an emigrant-ship. He knew that she was old, and not over-well built at the first; that she had seen many seas and climes, and often had needed repairs. Doubts had been suggested to him that possibly she was not seaworthy. These doubts preyed upon his mind and made him unhappy; he thought that perhaps he ought to have her thoroughly overhauled and refitted, even though this should put him to great expense. Before the ship sailed, however, he succeeded in overcoming these melancholy reflections. He said to himself that she had gone safely through so many voyages and weathered so many storms, that it was idle to suppose she would not come safely home from this trip also. He would put his trust in Providence, which could hardly fail to protect all these unhappy families that were leaving their fatherland to seek for better times elsewhere. He would dismiss from his mind all ungenerous suspicions about the honesty of builders and contractors. In such a way he acquired a sincere and comfortable conviction that his vessel was thoroughly safe and seaworthy; he watched her departure with a light heart, and benevolent wishes for the success of the exiles in their strange new home that was to be; and he got his insurance-money when she went down in mid-ocean and told no tales.
What shall we say of him? Surely this, that he was verily guilty of the death of those men. It is admitted that he did sincerely believe in the soundness of his ship, but the sincerity of his conviction can in nowise help him, because he had no right to believe on such evidence as was before him. He had acquired his belief not by honestly earning it in patient investigation, but by stifling his doubts.
- W.K. Clifford, The Ethics of Belief
↑ comment by Lumifer · 2014-09-30T19:23:19.023Z · LW(p) · GW(p)
An interesting quote. It essentially puts forward the "reasonable person" legal theory. But that's not what's interesting about it.
The shipowner is pronounced "verily guilty" solely on the basis of his thought processes. He had doubts, he extinguished them, and that's what makes him guilty. We don't know whether the ship was actually seaworthy -- only that the shipowner had doubts. If he were an optimistic fellow and never even had these doubts in the first place, would he still be guilty? We don't know what happened to the ship -- only that it disappeared. If the ship met a hurricane that no vessel of that era could survive, would the shipowner still be guilty? And, flipping the scenario, if solely by improbable luck the wreck of the ship did arrive unscathed to its destination, would the shipowner still be guilty?
Replies from: Anders_H, Richard_Kennaway, Cyan, TheOtherDave↑ comment by Anders_H · 2014-09-30T21:16:59.243Z · LW(p) · GW(p)
I realize your questions may be rhetorical, but I'm going to attempt an answer anyways, because it illustrates a point:
The morality of the shipowner's actions does not depend on the realized outcomes: it can only depend on his prior beliefs about the probability of the outcomes, and on the utility function that he uses to evaluate them. If we insisted on making morality conditional on the future, causality would be broken: it would be impossible for any ethical agent to make use of such ethics as a decision theory.
The problem here is that the shipowner's "sincerely held beliefs" are not identical to his genuine extrapolated prior. It is not stated in the text, but I think he is able to convince himself of "the soundness of the ship" only by ignoring degrees of belief: if he were a proper Bayesian, he would have realized that having "doubts" and not updating your beliefs is not logically consistent.
In any decision theory that is usable by agents making decisions in real time, the morality of his action is determined either at the time he allowed the ship to sail, or at the time he allowed his prior to get corrupted. I personally believe the latter. This quotation illustrates why I see rationality as a moral obligation, even when it feels like a memetic plague.
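The inconsistency described here can be made concrete with a small Bayes'-rule sketch (a hypothetical illustration; the numbers are invented, not taken from Clifford):

```python
# A shipowner who updates properly cannot simultaneously register
# "doubts" (evidence of unseaworthiness) and keep his belief in the
# ship unchanged: the evidence must lower P(seaworthy).

def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(hypothesis | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

prior_seaworthy = 0.9          # initial confidence in the ship
# "Doubts had been suggested": an old hull, many past repairs.
# Such reports are far more likely if the ship is in fact unsound.
posterior = update(prior_seaworthy,
                   p_evidence_if_true=0.2,   # P(reports | seaworthy)
                   p_evidence_if_false=0.9)  # P(reports | unseaworthy)
print(round(posterior, 3))  # 0.667 -- confidence must drop, not stay at 0.9
```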
Replies from: Lumifer↑ comment by Lumifer · 2014-10-01T16:49:32.935Z · LW(p) · GW(p)
The morality of the shipowner's actions do not depend on the realized outcomes: It can only depend on his prior beliefs about the probability of the outcomes, and on the utility function that he uses to evaluate them.
I am not sure -- I see your point, but completely ignoring the actual outcome seems iffy to me. There are, of course, many different ways of judging morality and, empirically, a lot of them do care about realized outcomes.
The problem here is that the Shipowner's "sincerely held beliefs" are not identical to his genuine extrapolated prior.
I don't know what a "genuine extrapolated prior" is.
I see rationality as a moral obligation
Well, behaving according to the "reasonable person" standard is a legal obligation :-)
Replies from: simplicio↑ comment by simplicio · 2014-10-01T18:55:55.845Z · LW(p) · GW(p)
completely ignoring the actual outcome seems iffy to me
That's because we live in a world where people's inner states are not apparent, perhaps not even to themselves. So we revert to (a) what would a reasonable person believe, (b) what actually happened. The latter is unfortunate in that it condemns many who are merely morally unlucky and acquits many who are merely morally lucky, but that's life. The actual bad outcomes serve as "blameable moments". What can I say - it's not great, but better than speculating on other people's psychological states.
In a world where mental states could be subpoenaed, Clifford would have both a correct and an actionable theory of the ethics of belief; as it is I think it correct but not entirely actionable.
I don't know what a "genuine extrapolated prior" is.
That which would be arrived at by a reasonable person (not necessarily a Bayesian calculator, but somebody not actually self-deceptive) updating on the same evidence.
A related issue is sincerity; Clifford says the shipowner is sincere in his beliefs, but I tend to think in such cases there is usually a belief/alief mismatch.
I love this passage from Clifford and I can't believe it wasn't posted here before. By the way, William James mounted a critique of Clifford's views in an address you can read here; I encourage you to do so as James presents some cases that are interesting to think about if you (like me) largely agree with Clifford.
Replies from: Lumifer, Cyan↑ comment by Lumifer · 2014-10-01T19:53:35.273Z · LW(p) · GW(p)
In a world where mental states could be subpoenaed, Clifford would have both a correct and an actionable theory
That's not self-evident to me. First, in this particular case as you yourself note, "Clifford says the shipowner is sincere in his belief". Second, in general, what are you going to do about, basically, stupid people who quite sincerely do not anticipate the consequences of their actions?
That which would be arrived at by a reasonable person ... updating on the same evidence.
That would be a posterior, not a prior.
Replies from: simplicio↑ comment by simplicio · 2014-10-01T21:31:36.143Z · LW(p) · GW(p)
I think Clifford was wrong to say the shipowner was sincere in his belief. In the situation he describes, the belief is insincere - indeed such situations define what I think "insincere belief" ought to mean.
what are you going to do about, basically, stupid people who quite sincerely do not anticipate the consequences of their actions?
Good question. Ought implies can, so in extreme cases I'd consider that to diminish their culpability. For less extreme cases - heh, I had never thought about it before, but I think the "reasonable man" standard is implicitly IQ-normalized. :)
That would be a posterior, not a prior.
Sure.
Replies from: Lumifer↑ comment by Lumifer · 2014-10-02T14:51:06.735Z · LW(p) · GW(p)
I think Clifford was wrong to say the shipowner was sincere in his belief
This is called fighting the hypothetical.
I think the "reasonable man" standard is implicitly IQ-normalized. :)
While that may be so, the Clifford approach of subpoenaing mental states relies on mental states and not on any external standard (including the one called "reasonable person").
↑ comment by Cyan · 2014-10-01T19:41:26.972Z · LW(p) · GW(p)
That's because we live in a world where... it's not great, but better than speculating on other people's psychological states.
I wanted to put something like this idea into my own response to Lumifer, but I couldn't find the words. Thanks for expressing the idea so clearly and concisely.
↑ comment by Richard_Kennaway · 2014-09-30T23:07:06.773Z · LW(p) · GW(p)
The shipowner is pronounced "verily guilty" solely on the basis of his thought processes.
Part of the scenario is that the ship is in fact not seaworthy, and went down on account of it. Part is that the shipowner knew it was not safe and suppressed his doubts. These are the actus reus and the mens rea that are generally required for there to be a crime. These are legal concepts, but I think they can reasonably be applied to ethics as well. Intentions and consequences both matter.
if solely by improbable luck the wreck of the ship did arrive unscathed to its destination, would the shipowner still be guilty?
If the emigrants do not die, he is not guilty of their deaths. He is still morally at fault for sending to sea a ship he knew was unseaworthy. His inaction in reckless disregard for their lives can quite reasonably be judged a crime.
Replies from: Lumifer↑ comment by Lumifer · 2014-10-01T16:38:46.727Z · LW(p) · GW(p)
Part of the scenario is that the ship is in fact not seaworthy, and went down on account of it.
That is just not true. The author of the quote certainly knew how to say "the ship was not seaworthy" and "the ship sank because it was not seaworthy". The author said no such things.
Part is that the shipowner knew it was not safe and suppressed his doubts. These are the actus reus and the mens rea that are generally required for there to be a crime.
You are mistaken. Suppressing your own doubts is not actus reus -- you need an action in physical reality. And, legally, there is a LOT of difference between an act and an omission, failing to act.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-10-01T19:49:04.758Z · LW(p) · GW(p)
The author of the quote certainly knew how to say "the ship was not seaworthy" and "the ship sank because it was not seaworthy". The author said no such things.
The author said:
He knew that she was old, and not over-well built at the first; that she had seen many seas and climes, and often had needed repairs...
and more, which you have already read. This is clear enough to me.
Suppressing your own doubts is not actus reus -- you need an action in physical reality.
In this case, an inaction.
And, legally, there is a LOT of difference between an act and an omission, failing to act.
In general there is, but not when the person has a duty to perform an action, knows it is required, knows the consequences of not doing it, and does not. That is the situation presented.
↑ comment by Cyan · 2014-09-30T21:09:43.503Z · LW(p) · GW(p)
He had doubts, he extinguished them, and that's what makes him guilty.
This is not the whole story. In the quote
He had acquired his belief not by honestly earning it in patient investigation, but by stifling his doubts.
you're paying too much heed to the final clause and not enough to the clause that precedes it. The shipowner had doubts that, we are to understand, were reasonable on the available information. The key to the shipowner's... I prefer not to use the word "guilt", with its connotations of legal or celestial judgment -- let us say, blameworthiness, is that he allowed the way he desired the world to be to influence his assessment of the actual state of the world.
In your "optimistic fellow" scenario, the shipowner would be as blameworthy, but in that case, the blame would attach to his failure to give serious consideration to the doubts that had been expressed to him.
And going beyond what is in the passage, in my view, he would be equally blameworthy if the ship had survived the voyage! Shitty decision-making is shitty decision-making, regardless of outcome. (This is part of why I avoided the word "guilt" -- too outcome-dependent.)
Replies from: KPier, Lumifer↑ comment by KPier · 2014-09-30T21:50:19.358Z · LW(p) · GW(p)
The next passage confirms that this is the author's interpretation as well:
Let us alter the case a little, and suppose that the ship was not unsound after all; that she made her voyage safely, and many others after it. Will that diminish the guilt of her owner? Not one jot. When an action is once done, it is right or wrong for ever; no accidental failure of its good or evil fruits can possibly alter that. The man would not have been innocent, he would only have been not found out.
And clearly what he is guilty of (or if you prefer, blameworthy) is rationalizing away doubts that he was obligated to act on. Given the evidence available to him, he should have believed the ship might sink, and he should have acted on that belief (either to collect more information which might change it, or to fix the ship). Even if he'd gotten lucky, he would have acted in a way that, had he been updating on evidence reasonably, he would have believed would lead to the deaths of innocents.
The Ethics of Belief is an argument that it is a moral obligation to seek accuracy in beliefs, to be uncertain when the evidence does not justify certainty, to avoid rationalization, and to help other people in the same endeavor. One of his key points is that 'real' beliefs are necessarily entangled with reality. I am actually surprised he isn't quoted here more.
↑ comment by Lumifer · 2014-10-01T16:51:24.414Z · LW(p) · GW(p)
The key to the shipowner's... blameworthiness, is that he allowed the way he desired the world to be to influence his assessment of the actual state of the world.
Pretty much everyone does that almost all the time. So, is everyone blameworthy?
Of course, if everyone is blameworthy then no one is.
Replies from: Cyan↑ comment by Cyan · 2014-10-01T19:08:17.043Z · LW(p) · GW(p)
I would say that I don't do that, but then I'd pretty obviously be allowing the way I desire the world to be to influence my assessment of that actual state of the world. I'll make a weaker claim -- when I'm engaging conscious effort in trying to figure out how the world is and I notice myself doing it, I try to stop. Less Wrong, not Absolute Perfection.
Pretty much everyone does that almost all the time. So, is everyone blameworthy? Of course, if everyone is blameworthy then no one is.
That's a pretty good example of the Fallacy of Gray right there.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-10-02T15:41:02.018Z · LW(p) · GW(p)
I would say that I don't do that, but then I'd pretty obviously be allowing the way I desire the world to be to influence my assessment of that actual state of the world.
How do you know?
Especially since falsely holding that belief would be an example.
Replies from: Cyan↑ comment by Cyan · 2014-10-02T15:52:48.406Z · LW(p) · GW(p)
Lumifer wrote, "Pretty much everyone does that almost all the time." I just figured that given what we know of heuristics and biases, there exists a charitable interpretation of the assertion that makes it true. Since the meat of the matter was about deliberate subversion of a clear-eyed assessment of the evidence, I didn't want to get into the weeds of exactly what Lumifer meant.
↑ comment by TheOtherDave · 2014-09-30T21:06:51.117Z · LW(p) · GW(p)
It's not quite clear to me that the judgments being made here are solely about the owner's thought processes, though I agree that facts about behavior and thought processes are intermingled in this narrative in such a way as to make it unclear what conclusions are based on which facts.
Still... the owner had doubts suggested about the ship's seaworthiness, we're told, and this presumably is a fact about events in the world. The generally agreed-upon credibility of the sources of those suggestions is presumably also something that could be investigated without access to the owner's thoughts. Further, we can confirm that the owner didn't overhaul the ship, for example, nor retain the services of trained inspectors to determine the ship's seaworthiness (or, at least, we have no evidence that he did so, in situations where evidence would be expected if he had).
All of those are facts about behavior. Are those behaviors sufficient to hold the owner liable for the death of the sailors? Perhaps not; perhaps without the benefit of narrative omniscience we'd give the owner the benefit of the doubt. But... so what? In this case, we are being given additional data. In this case we know the owner's thought process, through the miracle of narrative.
You seem to be trying to suggest, through implication and leading questions, that using that additional information in making a judgment in this case is dangerous... perhaps because we might then be tempted to make judgments in real-world cases as if we knew the owner's thoughts, which we don't.
And, well, I agree that to make judgments in real-world cases as if we knew someone's thoughts is problematic... though sometimes not doing so is also problematic.
Anyway, to answer your question: given the data provided above I consider the shipowner negligent, regardless of whether the ship arrived safely at its destination, or whether it was destroyed by some force no ship could survive.
Do you disagree?
Replies from: shminux, Lumifer↑ comment by Shmi (shminux) · 2014-10-01T20:51:10.693Z · LW(p) · GW(p)
In the absence of applicable regulations I think a veil of ignorance of sorts can help here. Would the shipowner make the same decision were he or his family among the emigrants? What if some precious irreplaceable cargo were on it? What if it was regular cargo but not fully insured? If the decision without the veil is significantly different from the one with it, then one can consider him "verily guilty", without worrying about his thoughts overmuch.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2014-10-01T21:14:14.590Z · LW(p) · GW(p)
Well, yes, I agree, but I'm not sure how that helps.
We're now replacing facts about his thoughts (which the story provides us) with speculations about what he might have done in various possible worlds (which seem reasonably easy to infer, either from what we're told about his thoughts, or from our experience with human nature, but are hardly directly observable).
How does this improve matters?
Replies from: shminux↑ comment by Shmi (shminux) · 2014-10-01T21:51:12.891Z · LW(p) · GW(p)
I don't think they are pure speculations. This is not the shipowner's first launch, so the speculations over possible worlds can be approximated by observations over past decisions.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2014-10-02T04:29:20.896Z · LW(p) · GW(p)
(nods) As I say, reasonably easy to infer.
But I guess I'm still in the same place: this narrative is telling us the shipowner's thoughts.
I'm judging the shipowner accordingly.
That being said, if we insist on instead judging a similar case where we lack that knowledge... yeah, I dunno. What conclusion would you arrive at from a Rawlsian analysis and does it differ from a common-sense imputation of motive? I mean, in general, "someone credibly suggested the ship might be unseaworthy and Sam took no steps to investigate that possibility" sounds like negligence to me even in the absence of Rawlsian analysis.
↑ comment by Lumifer · 2014-10-01T16:56:47.234Z · LW(p) · GW(p)
You seem to be trying to suggest, through implication and leading questions, that using that additional information in making a judgment in this case is dangerous
No, I'm just struck by how the issue of guilt here turns on mental processes inside someone's mind and not at all on what actually happened in physical reality.
given the data provided above I consider the shipowner negligent ... Do you disagree?
Keep in mind that this parable was written specifically to make you come to this conclusion :-)
But yes, I disagree. I consider the data above to be insufficient to come to any conclusions about negligence.
Replies from: Cyan, TheOtherDave↑ comment by Cyan · 2014-10-01T20:00:14.337Z · LW(p) · GW(p)
I'm just struck by how the issue of guilt here turns on mental processes inside someone's mind and not at all on what actually happened in physical reality.
Mental processes inside someone's mind actually happen in physical reality.
Just kidding; I know that's not what you mean. My actual reply is that it seems manifestly obvious that a person in some set of circumstances that demand action can make decisions that careful and deliberate consideration would judge to be the best, or close to the best, possible in prior expectation under those circumstances, and yet the final outcome could be terrible. Conversely, that person might make decisions that the same careful and deliberate consideration would judge to be terrible and foolish in prior expectation, and yet through uncontrollable happenstance the final outcome could be tolerable.
↑ comment by TheOtherDave · 2014-10-01T17:11:09.483Z · LW(p) · GW(p)
I'm just struck by how the issue of guilt here turns on mental processes inside someone's mind and not at all on what actually happened in physical reality.
So, I disagreed with this claim the first time you made it, since the grounds cited combine both facts about the shipowners thoughts and facts about physical reality (which I listed). You evidently find that objection so uncompelling as to not even be worth addressing, but I don't understand why. If you chose to unpack your reasons, I'd be interested.
But, again: even if it's true, so what? If we have access to the mental processes inside someone's mind, as we do in this example, why shouldn't we use that data in determining guilt?
Replies from: Lumifer↑ comment by Lumifer · 2014-10-01T17:34:15.474Z · LW(p) · GW(p)
facts about physical reality
I read the story as asserting three facts about the physical reality: the ship was old, the ship was not overhauled, the ship sank in the middle of the ocean. I don't think these facts lead to the conclusion of negligence.
If we have access to the mental processes inside someone's mind
But we don't. We're talking about the world in which we live. I would presume that the morality in the world of telepaths would be quite different. Don't do this.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2014-10-01T21:06:55.868Z · LW(p) · GW(p)
If we have access to the mental processes inside someone's mind
But we don't.
When judging this story, we do.
We know what was going on in this shipowner's mind, because the story tells us.
I'm not generalizing. I'm making a claim about my judgment of this specific case, based on the facts we're given about it, which include facts about the shipowner's thoughts.
What's wrong with that?
As I said initially... I can see arguing that if we allow ourselves to judge this (fictional) situation based on the facts presented, we might then be tempted to judge other (importantly different) situations as if we knew analogous facts, when we don't. And I agree that doing so would be silly.
But to ignore the data we're given in this case because in a similar real-world situation we wouldn't have that data seems equally silly.
comment by dspeyer · 2014-09-01T17:36:19.375Z · LW(p) · GW(p)
Alex Jordan, a grad student at Stanford, came up with the idea of asking people to make moral judgments while he secretly tripped their disgust alarms. He stood at a pedestrian intersection on the Stanford campus and asked passersby to fill out a short survey. It asked people to make judgments about four controversial issues, such as marriage between first cousins, or a film studio’s decision to release a documentary with a director who had tricked some people into being interviewed. Alex stood right next to a trash can he had emptied. Before he recruited each subject, he put a new plastic liner into the metal can. Before half of the people walked up (and before they could see him), he sprayed the fart spray twice into the bag, which “perfumed” the whole intersection for a few minutes. Before other recruitments, he left the empty bag unsprayed. Sure enough, people made harsher judgments when they were breathing in foul air.
-- The Righteous Mind Ch 3, Jonathan Haidt
I wonder if anyone who needs to make important judgments a lot makes an actual effort to maintain affective hygiene. It seems like a really good idea, but poor signalling.
Replies from: shminux↑ comment by Shmi (shminux) · 2014-09-02T16:31:59.744Z · LW(p) · GW(p)
comment by Azathoth123 · 2014-09-13T19:08:04.186Z · LW(p) · GW(p)
What goes unsaid eventually goes unthought.
Replies from: None
↑ comment by [deleted] · 2014-09-25T00:50:48.556Z · LW(p) · GW(p)
Alternatively:
The most important thing is to be able to think what you want, not to say what you want. And if you feel you have to say everything you think, it may inhibit you from thinking improper thoughts. I think it's better to follow the opposite policy. Draw a sharp line between your thoughts and your speech. Inside your head, anything is allowed. Within my head I make a point of encouraging the most outrageous thoughts I can imagine. But, as in a secret society, nothing that happens within the building should be told to outsiders. The first rule of Fight Club is, you do not talk about Fight Club.
Replies from: Azathoth123
↑ comment by Azathoth123 · 2014-09-25T01:18:43.597Z · LW(p) · GW(p)
Paul Graham's quote is about a way to fight the trend Sailer describes, unfortunately that trend frequently ends up winning.
comment by John_Maxwell (John_Maxwell_IV) · 2014-09-07T01:56:45.344Z · LW(p) · GW(p)
Often, one of these CEOs will operate in a way inconsistent with Thorndike's major thesis and yet he'll end up praising the CEO anyway. In poker, we'd call this the "won, didn't it?" fallacy-- judging a process by the specific, short-term result accomplished rather than examining the long-term result of multiple iterations of the process over time.
This Amazon.com review.
comment by jaime2000 · 2014-09-01T12:30:35.980Z · LW(p) · GW(p)
A Verb Called Self
I am the playing, but not the pause.
I am the effect, but not the cause.
I am the living, but not the cells.
I am the ringing, but not the bells.
I am the animal, but not the meat.
I am the walking, but not the feet.
I am the pattern, but not the clothes.
I am the smelling, but not the rose.
I am the waves, but not the sea
Whatever my substrate, my me is still me.
I am the sparks in the dark that exist as a dream -
I am the process, but not the machine.
~Jennifer Diane "Chatoyance" Reitz, Friendship Is Optimal: Caelum Est Conterrens
Replies from: Vulture↑ comment by Vulture · 2014-09-02T17:01:20.803Z · LW(p) · GW(p)
A couple of those (specifically lines 2, 5, and 11) should probably be "I'm" rather than "I am" to preserve the rhythm.
Replies from: VAuroch↑ comment by VAuroch · 2014-09-05T02:40:29.486Z · LW(p) · GW(p)
I disagree with you on 5; it works better as I am than I'm.
EDIT: Also, 9 works better as "I'm"
Replies from: Vulture↑ comment by Vulture · 2014-09-05T13:21:44.391Z · LW(p) · GW(p)
Really? Huh. I'm counting from "I am the playing..." = 1, and I really can't read line 5 with "I am" so it scans - I keep stumbling over "animal".
Replies from: VAuroch↑ comment by VAuroch · 2014-09-05T19:48:00.605Z · LW(p) · GW(p)
I'm counting the same way. With stress in italics,
I am the an-i-mal but not the meat
sounds much better to me than
I'm the an-i-mal but not the meat
I should probably note that I read most of the lines with an approximately syllable-sized pause before 'but', and the animal line without that pause. The poem feels to me like it's written mainly in dactyls with some trochees and a final stressed syllable on each line.
Compare with
I am the play-ing, but not the pause
I'm the play-ing, but not the pause
I am the eff-ect, but not the cause
I'm the eff-ect, but not the cause
While I'm at this, how I read lines 9-11 as written
I am the waves, but not the sea
What-ev-er my sub-strate, my me is still me
I am the sparks in the dark that ex-ist as a dream
Which definitely break up the rhythm of the first half entirely, which is probably intentional, but particularly line 9 is awkward, which I didn't catch the first pass. If I was trying to keep that rhythm, I'd read it this way:
I'm the waves, but not the sea
What-ev-er my sub-strate, my me is still me
I'm the sparks in the dark that ex-ist as a dream
And be unhappy that "What'ver" is no longer reasonable English, even for poetry.
Replies from: therufs
comment by lukeprog · 2014-09-24T07:06:10.217Z · LW(p) · GW(p)
He who knows only his own side of the case, knows little of that.
J.S. Mill
Replies from: shminux↑ comment by Shmi (shminux) · 2014-09-25T01:50:18.385Z · LW(p) · GW(p)
Right, her side of the story is pretty illuminating.
comment by seez · 2014-09-14T01:33:40.181Z · LW(p) · GW(p)
A conversation between me and my 7-year-old cousin:
Her: "do you believe in God?"
Me: "I don't, do you?"
Her: "I used to, but then I never really saw any proof, like miracles or good people getting saved from mean people and stuff. But I do believe in the Tooth Fairy, because every time I put a tooth under my pillow, I get money out in the morning."
Replies from: seez, MTGandP↑ comment by MTGandP · 2014-09-28T01:28:14.507Z · LW(p) · GW(p)
Interesting that she seems to mentally classify God and the tooth fairy in the same category.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2014-09-30T12:45:00.997Z · LW(p) · GW(p)
Well, she's only 7.
Replies from: MTGandP↑ comment by MTGandP · 2014-09-30T23:03:05.434Z · LW(p) · GW(p)
I'm not sure what you mean. I personally have a mental category of "mythical beings that don't exist but some people believe exist", which includes God, the tooth fairy, Santa, unicorns, etc. This girl appears to have the same mental category, even though she believes in God but doesn't believe in the tooth fairy.
comment by RolfAndreassen · 2014-09-02T00:17:19.776Z · LW(p) · GW(p)
"I mean, my lord Salvara, that your own expectations have been used against you. You have a keen sense for men of business, surely. You've grown your family fortune several times over in your brief time handling it. Therefore, a man who wished to snare you in some scheme could do nothing wiser than to act the consummate man of business. To deliberately manifest all your expectations. To show you exactly what you expected and desired to see."
"It seems to me that if I accept your argument," the don said slowly, "then the self-evident truth of any legitimate thing could be taken as grounds for its falseness. I say Lukas Fehrwight is a merchant of Emberlain because he shows the signs of being so; you say those same signs are what prove him counterfeit. I need more sensible evidence than this."
-- Scott Lynch, "The Lies of Locke Lamora", page 150.
Replies from: TheTerribleTrivium↑ comment by TheTerribleTrivium · 2014-09-02T10:19:18.177Z · LW(p) · GW(p)
If I remember the book correctly, this part comes from a scene where Locke Lamora is attempting to pull a double con on the speaking character (Salvara) by impersonating both the merchant and a spy/internal security agent investigating the merchant. So while the don's character acts "rationally" here, he is doing so while being deceived because of his assumptions - showing the very same error again.
comment by Mass_Driver · 2014-09-08T21:37:47.127Z · LW(p) · GW(p)
It seems to me that educated people should know something about the 13-billion-year prehistory of our species and the basic laws governing the physical and living world, including our bodies and brains. They should grasp the timeline of human history from the dawn of agriculture to the present. They should be exposed to the diversity of human cultures, and the major systems of belief and value with which they have made sense of their lives. They should know about the formative events in human history, including the blunders we can hope not to repeat. They should understand the principles behind democratic governance and the rule of law. They should know how to appreciate works of fiction and art as sources of aesthetic pleasure and as impetuses to reflect on the human condition.
On top of this knowledge, a liberal education should make certain habits of rationality second nature. Educated people should be able to express complex ideas in clear writing and speech. They should appreciate that objective knowledge is a precious commodity, and know how to distinguish vetted fact from superstition, rumor, and unexamined conventional wisdom. They should know how to reason logically and statistically, avoiding the fallacies and biases to which the untutored human mind is vulnerable. They should think causally rather than magically, and know what it takes to distinguish causation from correlation and coincidence. They should be acutely aware of human fallibility, most notably their own, and appreciate that people who disagree with them are not stupid or evil. Accordingly, they should appreciate the value of trying to change minds by persuasion rather than intimidation or demagoguery.
Steven Pinker, The New Republic 9/4/14
Replies from: shminux↑ comment by Shmi (shminux) · 2014-09-12T22:13:09.330Z · LW(p) · GW(p)
The rest of the article is also well worth the read.
comment by Lumifer · 2014-09-12T17:13:53.342Z · LW(p) · GW(p)
It’s as if you went into a bathroom in a bar and saw a guy pissing on his shoes, and instead of thinking he has some problem with his aim, you suppose he has a positive utility for getting his shoes wet.
Replies from: army1987
↑ comment by A1987dM (army1987) · 2014-09-12T21:59:17.580Z · LW(p) · GW(p)
I would like this quote more if instead of “has a positive utility for getting” it said “wants to get”.
Replies from: VAuroch↑ comment by VAuroch · 2014-09-22T07:27:54.426Z · LW(p) · GW(p)
The context is specifically a description of the theory of utility and how it is inconsistent with the preferences people actually exhibit.
Also, let me emphasize that the solution to the problem is not to say that people’s preferences are correct and so the utility model is wrong. Rather, in this example I find utility theory to be useful in demonstrating why the sort of everyday risk aversion exhibited by typical students (and survey respondents) does not make financial sense. Utility theory is an excellent normative model here.
Which is why it seems particularly silly to be defining these preferences in terms of a nonlinear utility curve that could never be.
It’s as if you went into a bathroom in a bar and saw a guy pissing on his shoes, and instead of thinking he has some problem with his aim, you suppose he has a positive utility for getting his shoes wet.
comment by [deleted] · 2014-09-10T16:03:05.838Z · LW(p) · GW(p)
When I visited Dieter Zeh and his group in Heidelberg in 1996, I was struck by how few accolades he’d gotten for his hugely important discovery of decoherence. Indeed, his curmudgeonly colleagues in the Heidelberg Physics Department had largely dismissed his work as too philosophical, even though their department was located on “Philosopher Street.” His group meetings had been moved to a church building, and I was astonished to learn that the only funding that he’d been able to get to write the first-ever book on decoherence came from the German Lutheran Church.
This really drove home to me that Hugh Everett was no exception: studying the foundations of physics isn’t a recipe for glamour and fame. It’s more like art: the best reason to do it is because you love it. Only a small minority of my physics colleagues choose to work on the really big questions, and when I meet them, I feel a real kinship. I imagine that a group of friends who’ve passed up on lucrative career options to become poets might feel a similar bond, knowing that they’re all in it not for the money but for the intellectual adventure.
-- Max Tegmark, Our Mathematical Universe, Chapter 8. The Level III Multiverse, "The Joys of Getting Scooped"
comment by Jack_LaSota · 2014-09-09T23:50:34.893Z · LW(p) · GW(p)
My transformation begins with me getting tired of my own bullshit.
comment by James_Miller · 2014-09-02T14:48:00.193Z · LW(p) · GW(p)
A heuristic shouldn't be the "least wrong" among all possible rules; it should be the least harmful if wrong.
Replies from: Caue
↑ comment by Caue · 2014-09-02T15:51:19.715Z · LW(p) · GW(p)
Opportunity costs?
I would say it should be the one with best expected returns. But I guess Taleb thinks the possibility of a very bad black swan overrides everything else - or at least that's what I gathered from his recent crusade against GMOs.
Replies from: James_Miller, Lumifer, Azathoth123, Capla↑ comment by James_Miller · 2014-09-03T15:40:41.560Z · LW(p) · GW(p)
I would say it should be the one with best expected returns.
True, but not as easy to follow as Taleb's advice. In the extreme we could replace every piece of advice with "maximize your utility".
↑ comment by Lumifer · 2014-09-03T15:50:14.453Z · LW(p) · GW(p)
I would say it should be the one with best expected returns.
Not quite, as most people are risk-averse and care about the width of the distribution of the expected returns, not only about its mean.
Replies from: roystgnr↑ comment by roystgnr · 2014-09-04T01:45:58.998Z · LW(p) · GW(p)
If you measure "returns" in utility (rather than dollars, root mean squared error, lives, whatever) then the definition of utility (and in particular the typical pattern of decreasing marginal utility) takes care of risk aversion. But since nobody measures returns in utility your advice is good.
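The point above, that decreasing marginal utility already encodes risk aversion over dollars, can be illustrated with a short sketch. Log wealth is an assumed, conventional concave choice here, and the function name is purely illustrative:

```python
import math

def expected_log_utility(outcomes):
    """Expected utility of a list of (probability, wealth) pairs,
    using log wealth as a simple concave utility function."""
    return sum(p * math.log(w) for p, w in outcomes)

# Both options have the same expected wealth: $100.
sure_thing = [(1.0, 100)]
gamble = [(0.5, 50), (0.5, 150)]

# Concavity makes the agent risk-averse in dollars: the sure thing
# yields higher expected utility than the gamble of equal expected wealth.
assert expected_log_utility(sure_thing) > expected_log_utility(gamble)
```

No separate risk-aversion penalty appears anywhere; the preference for the sure thing falls out of the curvature of the utility function alone.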
Replies from: Lumifer↑ comment by Lumifer · 2014-09-04T02:20:22.116Z · LW(p) · GW(p)
the definition of utility ... takes care of risk aversion
I am not sure about that. If you're risk-neutral in utility, you should be indifferent between two fair-coin bets: (1) heads 9 utils, tails 11 utils; (2) heads -90 utils, tails 110 utils. Are you?
Replies from: VAuroch, CCC↑ comment by VAuroch · 2014-09-05T02:37:19.709Z · LW(p) · GW(p)
Yes, I am, by definition, because the util rewards, being in utilons, must factor in everything I care about, including the potential regret.
Unless your bets don't cash out as
Bet 1: If the coin lands heads you will receive 9 utils, and if it lands tails you will receive 11 utils
and
Bet 2: If the coin lands heads you will receive -90 utils, and if it lands tails you will receive 110 utils.
If it means something else, then the precise wording could make the decision different.
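As a check on the arithmetic in this hypothetical: both bets have the same expected value in utils, which is what makes indifference the consistent answer if the payoffs really are denominated in utilons. A minimal sketch, with an illustrative helper name:

```python
def expected_utils(bet):
    """Expected value of a bet given as (probability, utils) pairs."""
    return sum(p * u for p, u in bet)

bet1 = [(0.5, 9), (0.5, 11)]     # heads 9 utils, tails 11 utils
bet2 = [(0.5, -90), (0.5, 110)]  # heads -90 utils, tails 110 utils

# Both bets have the same expected value, 10 utils, so an agent whose
# rewards are genuinely stated in utils has no basis to prefer either.
assert expected_utils(bet1) == expected_utils(bet2) == 10
```

Any residual discomfort with bet 2 would have to come from something not captured in the stated util payoffs, which is exactly the disagreement in the surrounding thread.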
Replies from: Lumifer↑ comment by Lumifer · 2014-09-05T03:16:49.384Z · LW(p) · GW(p)
util rewards, being in utilons, must factor in everything I care about, including the potential regret.
It's not quite the potential regret that is the issue, it is the degree of uncertainty, aka risk.
Do you happen to have any links to a coherent theory of utilons?
Replies from: VAuroch↑ comment by VAuroch · 2014-09-05T07:37:02.322Z · LW(p) · GW(p)
I'm pretty strongly cribbing off the end of So8res's MMEU rejection. Part of what I got from that chunk is that precisely quantifying utilons may be noncomputable, and even if not is currently intractable, but that doesn't matter. We know that we almost certainly will not and possibly cannot actually be offered a precise bet in utilons, but in principle that doesn't change the appropriate response, if we were to be offered one.
So there is definitely higher potential for regret with the second bet, since losing a bunch when I could otherwise have gained a bunch would reduce my utility in that case; but for the statement 'you will receive -90 utilons' to be true, it would have to include the consideration of my regret. So I should not add additional compensation for the regret; it's factored into the problem statement.
Which boils down to me being unintuitively indifferent; even the slight uncomfortable feeling of being indifferent when intuition says I shouldn't be is itself factored into the calculations.
Replies from: Lumifer↑ comment by Lumifer · 2014-09-05T14:55:01.488Z · LW(p) · GW(p)
We know that we almost certainly will not and possibly cannot actually be offered a precise bet in utilons
That makes it somewhat of an angels-on-the-head-of-a-pin issue, doesn't it?
I am not convinced that utilons automagically include everything -- it seems to me they wouldn't be consistent between different bets in that case (and, of course, each person has his own personal utilons which are not directly comparable to anyone else's).
Replies from: VAuroch↑ comment by VAuroch · 2014-09-05T19:55:20.428Z · LW(p) · GW(p)
If utilons don't automagically include everything, I don't think they're a useful concept. The concept of a quantified reward which includes everything is useful because it removes room for debate; a quantified reward that included mostly everything doesn't have that property, and doesn't seem any more useful than denominating things in $.
That makes it somewhat of an angels-on-the-head-of-a-pin issue, doesn't it?
Maybe, but the point is to remove object-level concerns about the precise degree of merits of the rewards and put it in a situation where you are arguing purely about the abstract issue. It is a convenient way to say 'All things being equal, and ignoring all outside factors', encapsulated as a fictional substance.
Replies from: Lumifer↑ comment by Lumifer · 2014-09-05T20:18:58.720Z · LW(p) · GW(p)
If utilons don't automagically include everything, I don't think they're a useful concept.
Utilons are the output of the utility function. Will you, then, say that a utility function which doesn't include everything is not a useful concept?
And I'm still uncertain about the properties of utilons. What operations are defined for them? Comparison, probably, but what about addition? Multiplication by a probability? Under which transformations are they invariant?
It all feels very hand-wavy.
a situation where you are arguing purely about the abstract issue
Which, of course, often has the advantage of clarity and the disadvantage of irrelevance...
Replies from: nshepperd, VAuroch↑ comment by nshepperd · 2014-09-08T02:05:31.365Z · LW(p) · GW(p)
And I'm still uncertain about the properties of utilons. What operations are defined for them? Comparison, probably, but what about addition? Multiplication by a probability? Under which transformations are they invariant?
The same properties as of utility functions, I would assume. Which is to say, you can compare them, and take a weighted average over any probability measure, and also take a positive global affine transformation (ax+b where a>0). Generally speaking, any operation that's covariant under a positive affine transformation should be permitted.
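A minimal sketch of that invariance claim, with an illustrative (hypothetical) utility assignment: the ranking of any two lotteries under expected utility is unchanged by a positive affine transformation ax+b with a>0.

```python
def expected(lottery, u):
    """Expected utility of a lottery given as (probability, outcome) pairs."""
    return sum(p * u(x) for p, x in lottery)

u = {"apple": 1.0, "pear": 3.0, "fig": 10.0}.get  # some utility assignment
v = lambda x: 2 * u(x) + 5                        # positive affine transform (a=2, b=5)

lottery_a = [(0.5, "apple"), (0.5, "fig")]
lottery_b = [(1.0, "pear")]

# The ranking of the two lotteries is the same under u and under v.
print((expected(lottery_a, u) > expected(lottery_b, u)) ==
      (expected(lottery_a, v) > expected(lottery_b, v)))  # True
```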
↑ comment by VAuroch · 2014-09-05T20:55:48.012Z · LW(p) · GW(p)
Will you, then, say that a utility function which doesn't include everything is not a useful concept?
Yes, I think I agree. However, this is another implausible counterfactual, because the utility function is, as a concept, defined to include everything; it is the function that takes world-states and determines how much you value that world. And yes, it's very hand-wavy, because understanding what any individual human values is not meaningfully simpler than understanding human values overall, which is one of the Big Hard Problems. When we understand the latter, the former can become less hand-wavy.
It's no more abstract than is Bayes' Theorem; both are in principle easy to use and incredibly useful, and in practice require implausibly thorough information about the world, or else heavy approximation.
The utility function is generally considered to map to the real numbers, so utilons are real-valued and all appropriate transformations and operations are defined on them.
Replies from: Lumifer↑ comment by Lumifer · 2014-09-08T01:39:18.677Z · LW(p) · GW(p)
the utility function is, as a concept, defined to include everything; it is the function that takes world-states and determines how much you value that world.
Some utility functions value world-states. But it's also quite common to call a "utility function" something that shows/tells/calculates how much you value something specific.
The utility function is generally considered to map to the real numbers
I am not sure of that. Utility functions often map to ranks, for example.
Replies from: VAuroch↑ comment by VAuroch · 2014-09-09T07:00:23.871Z · LW(p) · GW(p)
But it's also quite common to call a "utility function" something that shows/tells/calculates how much you value something specific.
I'm not familiar with that usage. Could you point me to a case in which the term was used that way? Naively, if I saw that phrasing I would most likely consider it akin to a mathematical "abuse of notation", where it actually referred to "the utility of the world in which [the thing] exists over the otherwise-identical world in which [the thing] did not exist", but where the subtleties are not relevant to the example at hand and are taken as understood.
I am not sure of that. Utility functions often map to ranks, for example.
Could you provide an example of this also? In the cases where someone specifies the output of a utility function, I've always seen it be real or rational numbers. (Intuitively worldstates should be finite, like the universe, and therefore map to the rationals rather than reals, but this isn't important.)
Replies from: Lumifer↑ comment by Lumifer · 2014-09-09T18:45:16.981Z · LW(p) · GW(p)
Could you point me to a case in which the term was used, that way?
Um, Wikipedia?
Replies from: VAuroch↑ comment by Azathoth123 · 2014-09-03T02:31:53.194Z · LW(p) · GW(p)
His point is that the upside is bounded much more than the downside.
Replies from: Caue, Capla↑ comment by Capla · 2014-09-06T20:38:50.301Z · LW(p) · GW(p)
What? He's crusading against GMOs? Can you give me some references?
I like his writing a lot, but I remember noting the snide way he dismissed doctors who "couldn't imagine" that there could be medicinal benefit to mother's milk, as if they were arrogant fools.
Replies from: Caue↑ comment by Caue · 2014-09-08T15:29:18.538Z · LW(p) · GW(p)
My sources were his tweets. Sorry I can't give anything concrete right now, but "Taleb GMO" apparently gets a lot of hits on Google. I didn't really dive into it, but as I understood it he takes the precautionary principle (the burden of proof of safety is on GMOs, not of danger on opponents) and adds that nobody can ever really know the risks, so the burden of proof hasn't been and can't be met.
"They're arrogant fools" seems to be Taleb's charming way of saying "they don't agree with me".
I like him too. I loved The Black Swan and Fooled by Randomness back when I read them. But I realized I didn't quite grok his epistemology a while back, when I found him debating religion with Dennett, Harris and Hitchens. Or rather, debating against them, for religion, as a Christian, as far as I can tell based on a version of "science can't know everything". (www.youtube.com/watch?v=-hnqo4_X7PE)
I've been meaning to ask Less Wrong about Taleb for a while, because this just seems kookish to me, but it's entirely possible that I just don't get something.
Replies from: Jayson_Virissimo, ChristianKl, Lumifer↑ comment by Jayson_Virissimo · 2014-09-08T18:05:37.905Z · LW(p) · GW(p)
I feel like it should be pointed out that being kookish and being a source of valuable insight are not incompatible.
↑ comment by ChristianKl · 2014-09-08T16:40:52.914Z · LW(p) · GW(p)
Or rather, debating against them, for religion, as a Christian, as far as I can tell based on a version of "science can't know everything". (www.youtube.com/watch?v=-hnqo4_X7PE)
"Can't know" misses the point. "Doesn't know" is much more what Taleb speaks about.
Robin Hanson recently wrote a post against being a rationalist. The core of Taleb's argument is to focus your skepticism where it matters. The cost of mistakenly being a Christian is low. The cost of mistakenly believing that your retirement portfolio is secure is high. According to Taleb, people like the New Atheists should spend more of their time on those beliefs that actually matter.
It's also worth noting that the New Atheists aren't skeptics in the sense of believing it's hard to know things. Their books are full of statements of certainty. Taleb, on the other hand, is a skeptic in that sense.
For him religion also isn't primarily about believing in God but about following certain rituals. He doesn't believe in cutting Chelstrons fence with Ockham's razor.
Replies from: Lumifer↑ comment by Lumifer · 2014-09-08T16:51:44.056Z · LW(p) · GW(p)
The cost of mistakenly being a Christian is low
That's not self-evident to me at all.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-09-08T21:42:59.393Z · LW(p) · GW(p)
That's not self-evident to me at all.
It's not self-evident, but the New Atheists don't make a good argument that it has a high cost. Atheist scientists in good standing like Roy Baumeister say that being religious helps with willpower.
Being a Mormon correlates with observable characteristics, and Mormons can therefore sometimes recognize other Mormons. Scientific investigation found that they use markers of health for doing so, and that those markers can be used for identifying Mormons.
There's some data that being religious correlates with longevity.
Of course those things aren't strong evidence that being religious is beneficial, but that's where Chesterton's fence comes into play for Taleb. He was born Christian so he stays Christian.
While my given name is Christian, I wasn't raised a Christian and have never believed in God, and the evidence doesn't get me to start being a Christian; but I do understand Taleb's position. Taleb doesn't argue that atheists should become Christians either.
Replies from: BloodyShrimp↑ comment by BloodyShrimp · 2014-09-09T01:26:14.658Z · LW(p) · GW(p)
(If there is something called "Chelston's Fence" (which my searches did not turn up), apologies.)
Chesterton's Fence isn't about inertia specifically, but about suspecting that other people had reasons for their past actions even though you currently can't see any, and finding out those reasons before countering their actions. In Christianity's case the reasons seem obvious enough (one of the main ones: trust in a line of authority figures going back to antiquity + antiquity's incompetence at understanding the universe) that Chesterton's Fence is not very applicable. Willpower and other putative psychological benefits of Christianity are nowhere in the top 100 reasons Taleb was born Christian.
Replies from: ChristianKl, ChristianKl↑ comment by ChristianKl · 2014-09-09T09:51:33.151Z · LW(p) · GW(p)
Willpower and other putative psychological benefits of Christianity are nowhere in the top 100 reasons Taleb was born Christian.
If Christianity lowered the willpower of its members, then it would be at a disadvantage in memetic competition against other worldviews that increase willpower.
In Christianity's case the reasons seem obvious enough
Predicting complex systems like memetic competition over the span of centuries between different memes is very hard. In cognitive psychology, experiments frequently invalidate basic intuitions about the human mind.
Trust bootstrapping is certainly one of the functions of religion, but it's not clear that's bad. Bootstrapping trust is generally a hard problem. Trust makes people cooperate. If I remember right, Taleb somewhere makes the point that the word "believe" derives from a word that means trust.
As far as "antiquity's incompetence at understanding the universe" goes, understanding the universe is very important to people like the New Atheists, but for Taleb it's not the main thing religion is about. For him it's about practically following a bunch of rituals, such as being at church every Sunday.
Replies from: Mizue↑ comment by Mizue · 2014-09-09T17:36:13.553Z · LW(p) · GW(p)
If I remember right Taleb makes somewhere the point that the word believe derives from a word that means trust.
I often see this argument from religions themselves or similar sources, not from those opposed to religion. Not this specific argument, but this type of argument--the idea of using the etymology of a word to prove something about the concept represented by the word. As we know or should know, a word's etymology may not necessarily have much of a connection to what it means or how it is used today. ("malaria" means "bad air" because of the belief that it was caused by that. "terrific" means something that terrifies.)
Also consider that by conservation of expected evidence if the etymology of the word is evidence for your point, if that etymology were to turn out to be false, that would be evidence against your point. Would you consider it to be evidence against your point if somehow that etymology were to be shown false?
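The conservation principle invoked here can be checked with a small numerical sketch (all probabilities illustrative): the prior is the probability-weighted average of the two possible posteriors, so if observing the evidence would raise your credence, its absence must lower it.

```python
# Conservation of expected evidence: P(H) = P(H|E)P(E) + P(H|~E)P(~E).
p_h = 0.3              # prior on the hypothesis (illustrative)
p_e_given_h = 0.8      # likelihood of the evidence if H is true
p_e_given_not_h = 0.4  # likelihood of the evidence if H is false

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e
p_h_given_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)

# The posteriors average back to the prior exactly...
assert abs(p_h_given_e * p_e + p_h_given_not_e * (1 - p_e) - p_h) < 1e-12
# ...so if observing E would raise P(H), observing not-E must lower it.
assert p_h_given_e > p_h > p_h_given_not_e
```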
Replies from: ChristianKl↑ comment by ChristianKl · 2014-09-09T18:54:57.699Z · LW(p) · GW(p)
Not this specific argument, but this type of argument--the idea of using the etymology of a word to prove something about the concept represented by the word.
In this case the debate is about how people in the past thought about religion. Looking at etymology helps for that purpose. But that's not the most important part of my argument.
It can also help to illustrate ideas. Taleb basically says that religion1 is a very useful concept. New atheists spend energy arguing that religion2 is a bad concept. That's pointless if they want to convince someone who believes in religion1. If they don't want to argue against a strawman they actually have to switch to talking about religion1.
In general, when someone says "We should do A," that person has freedom to define what he means by A. It's not a matter of searching for Bayesian evidence; it's a matter of defining a concept. If you want to define A, saying "A is a bit like B in regard X and like C in regard Y" is quite useful. Looking at etymology can help with that quest.
Overestimating the ability to understand what the other person means is a common failure mode. If you aren't clear about concepts, then looking at evidence to validate concepts isn't productive.
Replies from: Caue↑ comment by Caue · 2014-09-09T19:24:53.454Z · LW(p) · GW(p)
It can also help to illustrate ideas. Taleb basically says that religion1 is a very useful concept. New atheists spend energy arguing that religion2 is a bad concept. That's pointless if they want to convince someone who believes in religion1. If they don't want to argue against a strawman they actually have to switch to talking about religion1.
But you could say that the new atheists do want to argue against what Taleb might call a strawman, because what they're trying to do really is to argue against religion2. They're speaking to the public at large, to the audience. Does the audience also not care about the factual claims of religion? If that distinction about the word "religion" is being made, I don't see why Taleb isn't the one being accused of trying to redefine it mid-discussion.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-09-09T20:01:22.899Z · LW(p) · GW(p)
Does the audience also not care about the factual claims of religion?
If you look at the priorities most people reveal through their actions, truth isn't at the top of the list. Most people lie quite frequently and optimize for other ends.
Just take any political discussion and see how many people are happy to be correctly informed that their tribal beliefs are wrong. That probably even goes for this discussion, and you have a lot of motivated cognition going on that makes you want to believe that people really care about truth.
If that distinction about the word "religion" is being made, I don't see why Taleb isn't the one being accused of trying to redefine it mid-discussion.
When speaking on the subject of religion, Taleb generally simply speaks about his own motivation for believing what he believes. He doesn't argue that other people should start believing in religion. Taleb might chide people for not being skeptical where it matters, but generally not for being atheists.
Nearly any religious person will grant you that some religions are bad. As long as the New Atheists argue against a religion that isn't really his religion, he has no reason to change.
I would also add that it's quite okay when different people hold different beliefs.
Replies from: Caue↑ comment by Caue · 2014-09-09T20:29:39.590Z · LW(p) · GW(p)
I agree with the apparent LW consensus that much of religion is attire, habit, community/socializing, or "belief in belief", if that's what you mean. But then again, people actually do care about the big things, like whether God exists, and also about what is or isn't morally required of them.
I bet they will also take Taleb's defense as an endorsement of God's existence and the other factual claims of Christianity. I don't recall him saying that he's only a cultural Christian and doesn't care whether any of it is actually true.
I would also add that it's quite okay when different people hold different beliefs.
Well, I won't force anyone to change, but there's good and bad epistemology.
Also, the kind of Chesterton's fences that the new atheists are most interested in bringing down aren't just sitting there, but are actively harmful (and they may be there as a result of people practicing what you called religion1, but their removal is opposed with appeals to religion2).
Replies from: ChristianKl↑ comment by ChristianKl · 2014-09-09T20:56:21.795Z · LW(p) · GW(p)
I don't recall him saying that he's only a cultural Christian and doesn't care whether any of it is actually true.
You take a certain epistemology for granted that Taleb doesn't share.
Taleb follows heuristics of not wanting to be wrong on issues where being wrong is costly and putting less energy into updating beliefs on issues where being wrong is not costly.
He doesn't care about whether Christianity is true in the sense that he cares about analysing evidence about whether Christianity is true. He might care in the sense that he has an emotional attachment to it being true. If I lend you a book I care about whether you give it back to me because I trust you to give it back. That's a different kind of caring than I have about pure matter of facts.
One of Taleb's examples is how, in the 19th century, someone who went to a doctor who treated him based on intellectual reasoning would probably have done worse than someone who went to a priest. Taleb is skeptical that you get very far with intellectual reasoning, and thinks that only empiricism has made medicine better than doing nothing.
We might have made some progress, but Taleb still thinks there are choices where the Christian ritual will be useful even if it is built on bad assumptions, because following the ritual keeps people from acting based on hubris. It keeps people from thinking they understand enough to act based on understanding.
That's also the issue with the New Atheists. They are too confident in their own knowledge and not skeptical enough. That lack of skepticism is in turn dangerous, because they believe that just because no study has shown genetically modified plants to be harmful, they are safe.
Replies from: Caue↑ comment by Caue · 2014-09-11T18:28:19.891Z · LW(p) · GW(p)
(thank you for helping me try to understand him on this point, by the way)
This seems coherent. But, to be honest, weak (which could mean I still don't get it).
We also seem to have gotten back to the beginning, and the quote. Leaving aside for now the motivated stopping regarding religion, we have a combination of the Precautionary Principle, the logic of Chesterton's Fence, and the difficulty of assessing risks on account of Black Swans.
... which would prescribe inaction in any question I can think of. It looks as if we're not even allowed to calculate the probability of outcomes, because no matter how much information we think we have, there can always be black swans just outside our models.
Should we have ever started mass vaccination campaigns? Smallpox was costly, but it was a known, bounded cost that we had been living with for thousands of years, and, although for all we knew the risks looked obviously worth it, relying on all we know to make decisions is a manifestation of hubris. I have no reason to expect being violently assaulted when I go out tonight, but of course I can't possibly have taken all factors into consideration, so I should stay home, as it will be safer if I'm wrong. There's no reason to think pursuing GMOs will be dangerous, but that's only considering all we know, which can't be enough to meet the burden of proof under the strong precautionary principle. There's not close to enough evidence to even locate Christianity in hypothesis space, but that's just intellectual reasoning... We see no reason not to bring down laws and customs against homosexuality, but how can we know there isn't a catastrophic black swan hiding behind that Fence?
Replies from: Azathoth123, Richard_Kennaway↑ comment by Azathoth123 · 2014-09-12T01:32:55.179Z · LW(p) · GW(p)
There's no reason to think pursuing GMOs will be dangerous,
The phrase "no reason to think" should raise alarm bells. It can mean we've looked and haven't found any, or that we haven't looked.
Replies from: Mizue, Caue↑ comment by Mizue · 2014-09-12T14:42:36.100Z · LW(p) · GW(p)
There's no reason to think that there's a teapot-shaped asteroid resembling Russell's teapot either.
And I'm pretty sure we haven't looked for one, either. Yet it would be ludicrous to treat it as if it had a substantial probability of existing.
Replies from: Azathoth123↑ comment by Azathoth123 · 2014-09-12T23:41:10.352Z · LW(p) · GW(p)
A priori, eating most things is a bad idea. Thus the burden is on the GMO advocates to show their products are safe.
Replies from: hairyfigment↑ comment by hairyfigment · 2014-09-13T00:03:40.013Z · LW(p) · GW(p)
Note that probably all crops are "genetically modified" by less technologically advanced methods. I'm not sure if that disproves the criticism or shows that we should be cautious about eating anything.
Replies from: Azathoth123↑ comment by Azathoth123 · 2014-09-13T00:19:50.282Z · LW(p) · GW(p)
We should be cautious about eating anything that doesn't have a track record of being safe.
Replies from: nshepperd↑ comment by nshepperd · 2014-09-13T00:43:45.215Z · LW(p) · GW(p)
You changed your demand. If GM crops have fewer mutations than conventional crops, which are genetically modified by irradiation + selection (and have a track record of being safe), this establishes that GM crops are safe, if you accept the claim that, say, the antifreeze we already eat in fish is safe. Requiring GM crops themselves to have a track record is a bigger requirement.
Replies from: Azathoth123↑ comment by Azathoth123 · 2014-09-13T19:19:42.600Z · LW(p) · GW(p)
No, I'm saying we need some track record for each new crop including the GMO ones, roughly proportionate to how different they are from existing crops.
Replies from: nshepperd↑ comment by Caue · 2014-09-12T17:48:55.910Z · LW(p) · GW(p)
I agree with this.
But then we look, and this turns into "we haven't looked enough". Which can be true, so maybe we go "can anyone think of something concrete that can go wrong with this?", and ideally we will look into that, and try to calculate the expected utility.
But then it becomes "we can't look enough - no matter how hard we try, it will always be possible that there's something we missed".
Which is also true. But if, just in case, we decide to act as if unknown unknowns are both certain and significant enough to override the known variables, then we start vetoing the development of things like antibiotics or the internet, and we stay Christians because "it can't be proven wrong".
↑ comment by Richard_Kennaway · 2014-09-11T20:21:19.103Z · LW(p) · GW(p)
We see no reason not to bring down laws and customs against homosexuality, but how can we know there isn't a catastrophic black swan hiding behind that Fence?
HIV.
Replies from: Lumifer↑ comment by Lumifer · 2014-09-11T20:29:18.572Z · LW(p) · GW(p)
HIV
Its worst impact was and is in Sub-Saharan Africa where the "laws and customs against homosexuality" are fully in place.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-09-11T20:50:23.899Z · LW(p) · GW(p)
The history here says the African epidemic was spread primarily heterosexually. There is also the confounder of differing levels of medical facilities in different countries.
That aside, which is not to say that Africa does not matter, in the US and Europe the impact was primarily in the gay community.
I recognise that this is a contentious area though, and would rather avoid a lengthy thread.
Replies from: Caue, nshepperd↑ comment by Caue · 2014-09-12T18:02:27.153Z · LW(p) · GW(p)
The point was just that we should be allowed to weight expected positives against expected negatives. Yes, there can be invisible items in the "cons" column (also on the "pros"), and it may make sense to require extra weight on the "pros" column to account for this, but we shouldn't be required to act as if the invisible "cons" definitely outweigh all "pros".
↑ comment by ChristianKl · 2014-09-09T09:23:57.367Z · LW(p) · GW(p)
(If there is something called "Chelston's Fence" (which my searches did not turn up), apologies.)
Sorry for the typo.
comment by Azathoth123 · 2014-09-18T05:10:18.193Z · LW(p) · GW(p)
Yet none of these sights [of the Scottish Highlands] had power, till a recent period, to attract a single poet or painter from more opulent and more tranquil regions. Indeed, law and police, trade and industry, have done far more than people of romantic dispositions will readily admit, to develope in our minds a sense of the wilder beauties of nature. A traveller must be freed from all apprehension of being murdered or starved before he can be charmed by the bold outlines and rich tints of the hills. He is not likely to be thrown into ecstasies by the abruptness of a precipice from which he is in imminent danger of falling two thousand feet perpendicular; by the boiling waves of a torrent which suddenly whirls away his baggage and forces him to run for his life; by the gloomy grandeur of a pass where he finds a corpse which marauders have just stripped and mangled; or by the screams of those eagles whose next meal may probably be on his own eyes.
Thomas Babington Macaulay, History of England
Frankly, the whole passage Steve Sailer quotes at the link is worth reading.
Replies from: gjm↑ comment by gjm · 2014-09-18T13:22:10.108Z · LW(p) · GW(p)
For those (I have some reason to think there are some) who would rather avoid giving Steve Sailer attention or clicks, or who would like more context than he provides, you can find the relevant chapter at Project Gutenberg along with the rest of volume 3 of Macaulay's History. (The other volumes are Gutenbergificated too, of course.) Macaulay's chapters are of substantial length; if you want just that section, search for "none of these sights" after following the link.
comment by Jack_LaSota · 2014-09-07T17:01:06.144Z · LW(p) · GW(p)
Katara: Do you think we'll really find airbenders?
Sokka: You want me to be like you, or totally honest?
Katara: Are you saying I'm a liar?
Sokka: I'm saying you're an optimist. Same thing, basically.
-Avatar: The Last Airbender
comment by Zubon · 2014-09-03T22:43:35.126Z · LW(p) · GW(p)
"You sound awfully sure of yourself, Waterhouse! I wonder if you can get me to feel that same level of confidence."
Waterhouse frowns at the coffee mug. "Well, it's all math," he says. "If the math works, why then you should be sure of yourself. That's the whole point of math."
-- Cryptonomicon by Neal Stephenson
Replies from: John_Maxwell_IV, soreff↑ comment by John_Maxwell (John_Maxwell_IV) · 2014-09-07T01:55:26.546Z · LW(p) · GW(p)
This quote seems to me like it touches on a common fallacy: that making "confident" probability estimates (close to 0 or 1) is the same as being a "confident" person. In fact, they're ontologically distinct.
↑ comment by soreff · 2014-09-06T04:28:51.472Z · LW(p) · GW(p)
Was the context one where Waterhouse was proving a conditional, "if axioms A, B, C, then theorem Z", or one where where he was trying to establish Z as a truth about the world, and therefore also had the burden of showing that axioms A, B, C were supported by experimental evidence?
Replies from: VAuroch↑ comment by VAuroch · 2014-09-22T07:38:32.250Z · LW(p) · GW(p)
Neither! The statement he is 'awfully sure of' is a probabilistic conclusion he has derived from experimental evidence via Bayesian reasoning on the world's first programmable computer. Specifically, that statement is this:
"The Nipponese cryptosystem that we call Azure is the same thing as the German system that we call Pufferfish," he announces. "Both of them are also related somehow to another, newer cryptosystem I have dubbed Arethusa. All of these have something to do with gold. Probably gold mining operations of some sort. In the Philippines."
Part of the argument used to convince Comstock:
"But those places send out thousands of messages a day," Comstock protests. "What makes you think that you can pick out the messages that are a consequence of the Azure orders?"
"It's just a brute force statistics problem," Waterhouse says. "Suppose that Tokyo sent the Azure message to Rabaul on October 15th, 1943. Now, suppose I take all of the messages that were sent out from Rabaul on October 14th and I index them in various ways: what destinations they were transmitted to, how long they were, and, if we were able to decrypt them, what their subject matter was. Were they orders for troop movements? Supply shipments? Changes in tactics or procedures? Then, I take all of the messages that were sent out from Rabaul on October 16th--the day after the Azure message came in from Tokyo--and I run exactly the same statistical analysis on them."
Waterhouse steps back from the chalkboard and turns into a blinding fusillade of strobe lights. "You see, it is all about information flow. Information flows from Tokyo to Rabaul. We don't know what the information was. But it will, in some way, influence what Rabaul does afterwards. Rabaul is changed, irrevocably, by the arrival of that information, and by comparing Rabaul's observed behavior before and after that change, we can make inferences."
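The procedure Waterhouse describes, indexing messages by their features before and after the suspected trigger and comparing the two distributions, can be sketched roughly as follows (destinations, subjects, counts, and the choice of distance measure are all hypothetical):

```python
from collections import Counter

def feature_distribution(messages):
    """Index messages by (destination, subject) and count each combination."""
    return Counter((m["dest"], m["subject"]) for m in messages)

def shift(before, after):
    """Total-variation distance between two normalized feature distributions."""
    keys = set(before) | set(after)
    n_b, n_a = sum(before.values()), sum(after.values())
    return sum(abs(before[k] / n_b - after[k] / n_a) for k in keys) / 2

# Hypothetical traffic out of Rabaul on the days around the Azure message:
day_before = [{"dest": "Truk", "subject": "supplies"}] * 8 + \
             [{"dest": "Tokyo", "subject": "routine"}] * 2
day_after = [{"dest": "Truk", "subject": "supplies"}] * 2 + \
            [{"dest": "Tokyo", "subject": "gold"}] * 8

# A large shift suggests the incoming message changed Rabaul's behavior.
print(shift(feature_distribution(day_before), feature_distribution(day_after)))  # 0.8
```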
comment by Salemicus · 2014-09-08T11:41:38.335Z · LW(p) · GW(p)
Elinor agreed to it all, for she did not think he deserved the compliment of rational opposition.
Jane Austen, Sense and Sensibility.
Replies from: Caue, Lumifer↑ comment by Caue · 2014-09-08T15:44:01.302Z · LW(p) · GW(p)
Ambivalent about this one.
I like the idea of rational argument as a sign of intellectual respect, but I don't like things that are so easy to use as fully general debate stoppers, especially when they have a built-in status element.
Replies from: Salemicus↑ comment by Salemicus · 2014-09-09T07:11:20.305Z · LW(p) · GW(p)
But note that Elinor doesn't use it as a debate stopper, or to put down or belittle Ferrers. She simply chooses not to engage with his arguments, and agrees with him.
Replies from: Caue↑ comment by Caue · 2014-09-09T17:50:59.691Z · LW(p) · GW(p)
(I haven't read the book)
The way I usually come in contact with something like this is afterwards, when Elinor and her tribe are talking about those irrational greens, and how it's better to not even engage with them. They're just dumb/evil, you know, not like us.
Even without that part, this avoids opportunities for clearing up misunderstandings.
(anecdotally: some time ago a friend was telling me about discussions that are "just not worth having", and gave as an example "that time when we were talking about abortion and you said that X, I knew there was just no point in going any further". Turns out she had misunderstood me completely, and I actually had meant Y, with which she agrees. Glad we could clear that out - more than a year later, completely by accident. Which makes me wonder how many more of those misunderstandings are out there)
↑ comment by Lumifer · 2014-09-08T15:57:41.801Z · LW(p) · GW(p)
I see the point, but on the other hand it leads to "Lie back and think of England" situations...
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2014-09-12T17:34:34.940Z · LW(p) · GW(p)
Somehow I doubt that this argument is meant to be limitless in strength. It's more of a 'don't feed the trolls' guideline.
Replies from: Salemicus↑ comment by Salemicus · 2014-09-15T16:38:20.563Z · LW(p) · GW(p)
Exactly.
Ferrers is arguing - at great length! - that there is just as much space in a small cottage as in a much larger house. He is plainly ridiculous. Elinor sees that there is no point trying to correct him or engage someone so foolish in reasonable conversation, but she is far too well-bred to mock or insult him. So she does the correct thing in this situation, and agrees with his nonsense until it blows over.
She's certainly not going to take his advice, and knock down a stately home to build a cottage.
comment by John_Maxwell (John_Maxwell_IV) · 2014-09-01T23:57:26.631Z · LW(p) · GW(p)
I feel it myself, the glitter of nuclear weapons. It is irresistible if you come to them as a scientist. To feel it’s there in your hands. To release the energy that fuels the stars. To let it do your bidding. And to perform these miracles, to lift a million tons of rock into the sky, it is something that gives people an illusion of illimitable power, and it is in some ways responsible for all our troubles... this is what you might call ‘technical arrogance’ that overcomes people when they see what they can do with their minds.
-- Freeman Dyson
Replies from: Mizue↑ comment by Mizue · 2014-09-05T18:42:20.172Z · LW(p) · GW(p)
Airplanes may not work on fusion or weigh millions of tons, but still, by substituting a few words I could say similar things about airplanes. Or electrical grids. Or smallpox vaccination. But nobody does.
Hypothesis: he has an emotional reaction to the way nuclear weapons are used--he thinks that is arrogant--and he's letting those emotions bleed into his reaction to nuclear weapons themselves.
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2014-09-05T21:39:00.326Z · LW(p) · GW(p)
Airplanes may not work on fusion or weigh millions of tons, but still, by substituting a few words I could say similar things about airplanes. Or electrical grids. Or smallpox vaccination. But nobody does.
Are you sure? I looked for just a bit and found
There is no sport equal to that which aviators enjoy while being carried through the air on great white wings.
http://inventors.about.com/od/wstartinventors/a/Quotes-Wright-Brothers.htm
I imagine that if inventors have bombastic things to say about the things they invent, they frequently keep those thoughts to themselves to avoid sounding arrogant (e.g. I don't think it would have gone over well if Edison had started referring to himself as "Edison, the man who lit the world of the night").
Replies from: Mizue
comment by negamuhia · 2014-09-14T22:10:35.243Z · LW(p) · GW(p)
It’s tempting to think of technical audiences and general audiences as completely different, but I think that no matter who you’re talking to, the principles of explaining things clearly are the same. The only real difference is which things you can assume they already know, and in that sense, the difference between physicists and the general public isn’t necessarily more significant than the difference between physicists and biologists, or biologists and geologists.
Reminds me of Expecting Short Inferential Distances.
comment by Richard_Kennaway · 2014-09-12T08:42:52.298Z · LW(p) · GW(p)
Penny Arcade takes on the question of the economic value of a sacred thing. Script:
Gabe: Can you believe Notch is gonna sell Minecraft to MS?
Tycho: Yes! I can!
Gabe: Minecraft is, like, his baby though!
Tycho: I would sell an actual baby for two billion dollars.
Tycho: I would sell my baby to the Devil. Then, I would enter my Golden Sarcophagus and begin the ritual.
comment by elharo · 2014-09-23T10:47:31.838Z · LW(p) · GW(p)
In a study recently published in the journal PloS One, our two research teams, working independently, discovered that when people are presented with the trolley problem in a foreign language, they are more willing to sacrifice one person to save five than when they are presented with the dilemma in their native tongue.
One research team, working in Barcelona, recruited native Spanish speakers studying English (and vice versa) and randomly assigned them to read this dilemma in either English or Spanish. In their native tongue, only 18 percent said they would push the man, but in a foreign language, almost half (44 percent) would do so. The other research team, working in Chicago, found similar results with languages as diverse as Korean, Hebrew, Japanese, English and Spanish. For more than 1,000 participants, moral choice was influenced by whether the language was native or foreign. In practice, our moral code might be much more pliable than we think.
Extreme moral dilemmas are supposed to touch the very core of our moral being. So why the inconsistency? The answer, we believe, is reminiscent of Nelson Mandela’s advice about negotiation: “If you talk to a man in a language he understands, that goes to his head. If you talk to him in his language, that goes to his heart.” As psychology researchers such as Catherine Caldwell-Harris have shown, in general people react less strongly to emotional expressions in a foreign language.
-- Boaz Keysar and Albert Costa, Our Moral Tongue, New York Times, June 20, 2014
Replies from: Jiro, therufs, Salemicus↑ comment by Jiro · 2014-09-23T14:34:21.071Z · LW(p) · GW(p)
This quote implies a connection from "people react less strongly to emotional expressions in a foreign language" to "dilemmas in a foreign language don't touch the very core of our moral being". Furthermore, it connects or equates being more willing to sacrifice one person for five and "touch[ing] the core of our moral being" less. All rational people should object to the first implication, and most should object to the second one. This is a profoundly anti-rational quote, not a rationality quote.
Replies from: nshepperd, therufs↑ comment by nshepperd · 2014-09-23T23:23:20.999Z · LW(p) · GW(p)
I think you're reading a lot into that one sentence. I assumed that just to mean "there should not be inconsistencies due to irrelevant aspects like the language of delivery". Followed by a sound explanation for the unexpected inconsistency in terms of system 1 / system 2 thinking.
(The final paragraph of the article begins with "Our research does not show which choice is the right one.")
↑ comment by therufs · 2014-09-23T15:35:52.936Z · LW(p) · GW(p)
I disagree with Jiro and Salemicus. Learning about how human brains work is entirely relevant to rationality.
Replies from: Jiro↑ comment by Jiro · 2014-09-23T15:43:27.195Z · LW(p) · GW(p)
Someone who characterized the results the way they characterize them in this quote has learned some facts, but failed on the analysis.
It's like a quote which says "(correct mathematical result) proves that God has a direct hand in the creation of the world". That wouldn't be a rationality quote just because they really did learn a correct mathematical result.
Replies from: therufs↑ comment by Salemicus · 2014-09-23T14:46:01.632Z · LW(p) · GW(p)
I agree with Jiro, this appears to be an anti-rationality quote. The most straightforward interpretation of the data is that people didn't understand the question as well when posed in a foreign language.
Chalk this one up not to emotion, but to deontology.
Replies from: simplicio, Azathoth123↑ comment by simplicio · 2014-09-24T13:24:38.926Z · LW(p) · GW(p)
Possible that they understood the question, but hearing it in a foreign language meant cognitive strain, which meant they were already working in System 2. That's my read anyway.
Given to totally fluent second-language speakers, I bet the effect vanishes.
↑ comment by Azathoth123 · 2014-09-25T01:21:04.421Z · LW(p) · GW(p)
It's also possible that asking in a different language causes subjects to think of the people in the dilemma as "not members of their tribe".
comment by James_Miller · 2014-09-03T01:50:02.500Z · LW(p) · GW(p)
Dreams demonstrate that our brains (and even rat brains) are capable of creating complex, immersive, fully convincing simulations. Waking life is also a kind of dream. Our consciousness exists, and is shown particular aspects of reality. We see what we see for adaptive reasons, not because it is the truth. Nerds are the ones who notice that something is off - and want to see what's really going on.
The View from Hell from an article recommended by asd.
Replies from: TheAncientGeek, Richard_Kennaway, Username, Ixiel↑ comment by TheAncientGeek · 2014-09-06T19:08:03.064Z · LW(p) · GW(p)
The easy way to make a convincing simulation is to disable the inner critic.
Replies from: Risto_Saarelma↑ comment by Risto_Saarelma · 2014-09-09T05:00:29.933Z · LW(p) · GW(p)
The inner critic that is disabled during regular dreaming turns back on during lucid dreaming. People who have them seem to be quite impressed by lucid dreams.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2014-09-09T10:17:34.312Z · LW(p) · GW(p)
You still can't focus on stable details.
Replies from: chaosmage↑ comment by chaosmage · 2014-09-12T09:12:44.413Z · LW(p) · GW(p)
You can with training. It is a lot like training visualization: In the beginning, the easiest things to visualize are complex moving shapes (say a tree with wind going through it), but if you try for a couple of hours, you can get all the way down to simple geometric shapes.
↑ comment by Richard_Kennaway · 2014-09-03T08:03:01.498Z · LW(p) · GW(p)
We see what we see for adaptive reasons, not because it is the truth.
Contrast:
Nature cannot be fooled.
-- Feynman
One might even FTFY the first quote as:
"We see what we see for adaptive reasons, because it is the truth."
This part:
Nerds are the ones who notice that something is off - and want to see what's really going on.
is contradicted by the context of the whole article. The article is in praise of insight porn (the writer's own words for it) as the cognitive experience of choice for nerds (the writer's word for them, in whom he includes himself and for whom he is writing) while explicitly considering its actual truth to be of little importance. He praises the experience of reading Julian Jaynes and in the same breath dismisses Jaynes' actual claims as "batshit insane and obviously wrong".
In other words, "Nerds ... want to see what's really going on" is, like the whole article, a statement of insight porn, uttered for the feeling of truthy insight it gives, "not because it is the truth".
How useful is this to someone who actually wants "to see what's really going on"?
Replies from: None↑ comment by [deleted] · 2014-09-09T01:43:21.911Z · LW(p) · GW(p)
.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-09-09T07:55:14.685Z · LW(p) · GW(p)
It's a useful sketch of a type of experience.
Insight porn, in other words?
↑ comment by Username · 2014-09-06T18:13:16.044Z · LW(p) · GW(p)
I downvoted this and another comment further up for not being about anything but nerd pandering, which I feel is just ego-boosting noise. Not the type of content I want to see on here.
Replies from: therufs, James_Miller↑ comment by therufs · 2014-09-06T19:29:22.672Z · LW(p) · GW(p)
I think the comment in this thread would have been equally relevant and possibly better without the last sentence, but I don't see the Cryptonomicon quote (which I assume to be the one you meant?) as nerd-pandering, since it doesn't imply value judgments about being or identifying as a nerd.
Replies from: Username↑ comment by Username · 2014-09-07T03:55:50.825Z · LW(p) · GW(p)
The Cryptonomicon quote was great; I was talking about its child comment.
↑ comment by James_Miller · 2014-09-06T18:31:11.775Z · LW(p) · GW(p)
Well, if you think the quote doesn't say significantly more than "nerds are great" you are right to downvote it.
comment by bramflakes · 2014-09-26T22:13:45.627Z · LW(p) · GW(p)
Yeah I have a lot of questions. Like, is this Star Trek style where it's transmitting my matter as energy and reconstructing it on the other end, or is it just creating an exact duplicate of me and I'm really just committing suicide over and over? Hmm, no, I don't feel dead, but am I me, or am I Gordon #6? I might not know the difference. Well, I should continue either way. Even if that means making sacrifices for the Greater Gordon. I mean I can't think of a cause I believe in more than that!
comment by khafra · 2014-09-26T15:05:45.608Z · LW(p) · GW(p)
It's really weird how [Stop, Drop, and Roll] is taught pretty much yearly but personal finance or ethics usually just have one class at the end of highschool.
-- CornChowdah, on reddit
Replies from: simplicio↑ comment by simplicio · 2014-09-29T13:39:56.040Z · LW(p) · GW(p)
Yay for personal finance, boo for ethics, which is liable to become a mere bully pulpit for teachers' own views.
Replies from: tslarm, elharo↑ comment by elharo · 2014-09-30T11:15:27.954Z · LW(p) · GW(p)
Thinking back to my own religious high school education, I realize that the ethics component (though never called out as such, it was woven into the curriculum at every level) was indeed important; not so much because of the specific rules they taught or didn't teach, as simply in teaching me that ethics and morals were something to think about and discuss.
Then again, this was a Jesuit school; and Jesuit education has a reputation for being somewhat more Socratic and questioning than the typical deontological viewpoint of many schools.
But in any case, yay for personal finance.
comment by Gunnar_Zarncke · 2014-09-14T10:28:50.534Z · LW(p) · GW(p)
If the people be led by laws, and uniformity sought to be given them by punishments, they will try to avoid the punishment, but have no sense of shame. If they be led by virtue, and uniformity sought to be given them by the rules of propriety (could be translated as 'rites'), they will have the sense of the shame, and moreover will become good.
In the Great Learning (大學) by Confucius, translated by James Legge
Interestingly, I found this in a piece about cancer treatment. A possibly underused good application of Fluid Analogies.
comment by James_Miller · 2014-09-07T14:37:41.031Z · LW(p) · GW(p)
A lot of people believe fruit juices to be healthy. They must be… because they come from fruit, right? But a lot of the fruit juice you find in the supermarket isn’t really fruit juice. Sometimes there isn’t even any actual fruit in there, just chemicals that taste like fruit. What you’re drinking is basically just fruit-flavored sugar water. That being said, even if you’re drinking 100% quality fruit juice, it is still a bad idea. Fruit juice is like fruit, except with all the good stuff (like the fiber) taken out… the main thing left of the actual fruit is the sugar. If you didn’t know, fruit juice actually contains a similar amount of sugar as a sugar-sweetened beverage
Kris Gunnars, Business Insider
Replies from: army1987, Jiro↑ comment by A1987dM (army1987) · 2014-09-07T20:10:50.453Z · LW(p) · GW(p)
Mostly correct, but only very loosely related to rationality.
all the good stuff (like the fiber) taken out
Vitamins also are good stuff but they aren't taken out (or when they are they usually are put back in, AFAIK).
Replies from: James_Miller↑ comment by James_Miller · 2014-09-07T22:25:17.581Z · LW(p) · GW(p)
but only very loosely related to rationality
Rationality involves having accurate beliefs. If lots of people share a mistaken belief that causes them to take harmful actions then pointing out this mistake is rationality-enhancing.
Replies from: army1987↑ comment by A1987dM (army1987) · 2014-09-08T08:09:03.188Z · LW(p) · GW(p)
pointing out this mistake is rationality-enhancing
The way giving someone a fish is fishing skill-enhancing, I'd guess...
Well, not quite. This particular mistake has a general lesson of ‘what you know about what foods are healthy may be wrong’ and an even more general one ‘beware the affect heuristic’, but there probably are more effective ways to teach the latter.
Replies from: James_Miller↑ comment by James_Miller · 2014-09-08T12:49:30.353Z · LW(p) · GW(p)
has a general lesson of
But the quote isn't attempting to teach a general lesson, it's attempting to improve one particular part of peoples' mental maps. If lots of people have an error in their map, and this error causes many of them to make a bad decision, then pointing out this error is rationality-enhancing.
Replies from: VAuroch↑ comment by VAuroch · 2014-09-22T08:05:01.557Z · LW(p) · GW(p)
If lots of people have an error in their map, and this error causes many of them to make a bad decision, then pointing out this error is rationality-enhancing.
No, that makes it a useful factoid. I don't consider my personal rationality enhanced whenever I learn a new fact, even if it is useful, unless it will reliably improve my ability to distinguish true beliefs from false ones in the future.
↑ comment by Jiro · 2014-09-08T19:22:21.832Z · LW(p) · GW(p)
A search brings up http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/CFRSearch.cfm?fr=101.30 .
This seems to contradict the claim that "Sometimes there isn’t even any actual fruit in there, just chemicals that taste like fruit," since it would have to say "contains less than 1% juice" or not be described as juice at all.
comment by Salemicus · 2014-09-25T11:34:00.160Z · LW(p) · GW(p)
One of the key concepts in Common Law is that of the reasonable man. Re-reading A.P. Herbert, it struck me how his famously insulting description of the reasonable man bears a deep resemblance to that of the ideal rationalist:
It is impossible to travel anywhere or to travel for long in that confusing forest of learned judgments which constitutes the Common Law of England without encountering the Reasonable Man. He is at every turn, an ever-present help in time of trouble, and his apparitions mark the road to equity and right. There has never been a problem, however difficult, which His Majesty's judges have not in the end been able to resolve by asking themselves the simple question, 'Was this or was it not the conduct of a reasonable man?' and leaving that question to be answered by the jury.
This noble creature stands in singular contrast to his kinsman the Economic Man, whose every action is prompted by the single spur of selfish advantage and directed to the single end of monetary gain. The Reasonable Man is always thinking of others; prudence is his guide, and 'Safety First', if I may borrow a contemporary catchword, is his rule of life. All solid virtues are his, save only that peculiar quality by which the affection of other men is won. For it will not be pretended that socially he is much less objectionable than the Economic Man.
Though any given example of his behaviour must command our admiration, when taken in the mass his acts create a very different set of impressions. He is one who invariably looks where he is going, and is careful to examine the immediate foreground before he executes a leap or bound; who neither star-gazes nor is lost in meditation when approaching trap-doors or the margin of a dock; who records in every case upon the counterfoils of cheques such ample details as are desirable, scrupulously substitutes the word 'Order' for the word 'Bearer', crosses the instrument 'a/c Payee only', and registers the package in which it is despatched; who never mounts a moving omnibus, and does not alight from any car while the train is in motion; who investigates exhaustively the bona fides of every mendicant before distributing alms, and will inform himself of the history and habits of a dog before administering a caress; who believes no gossip, nor repeats it, without firm basis for believing it to be true; who never drives his ball till those in front of him have definitely vacated the putting-green which is his own objective; who never from one year's end to another makes an excessive demand upon his wife, his neighbours, his servants, his ox, or his ass; who in the way of business looks only for that narrow margin of profit which twelve men such as himself would reckon to be 'fair', contemplates his fellow-merchants, their agents, and their goods, with that degree of suspicion and distrust which the law deems admirable; who never swears, gambles, or loses his temper; who uses nothing except in moderation, and even while he flogs his child is meditating only on the golden mean.
Devoid, in short, of any human weakness, with not one single saving vice, sans prejudice, procrastination, ill-nature, avarice, and absence of mind, as careful for his own safety as he is for that of others, this excellent but odious character stands like a monument in our Courts of Justice, vainly appealing to his fellow-citizens to order their lives after his own example.
A.P. Herbert, [Uncommon Law](http://en.wikipedia.org/wiki/Uncommon_Law). Emphasis mine.
I imagine that something of a similar sentiment animates much of popular hostility to LessWrong-style rationalism.
Replies from: fubarobfusco, hairyfigment↑ comment by fubarobfusco · 2014-09-30T16:53:02.720Z · LW(p) · GW(p)
I imagine that something of a similar sentiment animates much of popular hostility to LessWrong-style rationalism.
I'm not convinced. I know a few folks who know about LW and actively dislike it; when I try to find out what it is they dislike about it, I've heard things like —
- LW people are personally cold, or idealize being unemotional and criticize others for having emotional or aesthetic responses;
- LW teaches people to rationalize more effectively their existing prejudices — similar to Eliezer's remarks in Knowing About Biases Can Hurt People;
- LW-folk are overly defensive of LW-ideas, hold unreasonably high standards of evidence for disagreement with them, and dismiss any disagreement that can't meet those standards as a sign of irrationality;
- LW has an undercurrent of manipulation, or seems to be trying to trick people into supporting something sinister (although this person could not say what that hidden goal was, which implies that it's something less overt than "build Friendly AI and take over — er, optimize — the world");
- LW is a support network for Eliezer's approaches to superhuman AI / the Singularity, and Eliezer is personally not trustworthy as a leader of that project;
- LW-folk excessively revere intelligence over other positive traits of humans, and subscribe to the notion that more intelligent people should dominate others; or that people who don't fit a narrow definition of high intelligence are unworthy, possibly even unworthy of life;
- LW-folk seem to believe that if you buy into Bayesian epistemology, you must buy into the rest of the LW memeplex, or that all the other ideas of LW are "Bayesian";
- LW tolerates weird censorship that appears to be motivated by magical thinking;
- LW-folk just aren't very nice or pleasant to be around;
- LW openly contemplates proposals that good people consider obviously wrong and morally repulsive, and that is undesirable to be around. (This person described observing LW-folk discussing under what circumstances genocide might be moral. I don't know where that discussion took place, whether it was on this site, another site, or in person.)
↑ comment by Lumifer · 2014-09-30T17:20:49.594Z · LW(p) · GW(p)
I wonder how these people who dislike LW feel about geeks/nerds in general.
Replies from: fubarobfusco↑ comment by fubarobfusco · 2014-09-30T19:49:43.989Z · LW(p) · GW(p)
Most of them are geeks/nerds in general, or at least have seen themselves as such at some point in their lives.
Replies from: Cyan, satt↑ comment by Cyan · 2014-09-30T21:36:32.354Z · LW(p) · GW(p)
Yeesh. These people shouldn't let feelings or appearances influence their opinions of EY's trustworthiness -- or "morally repulsive" ideas like justifications for genocide. That's why I feel it's perfectly rational to dismiss their criticisms -- that and the fact that there's no evidence backing up their claims. How can there be? After all, as I explain here, Bayesian epistemology is central to LW-style rationality and related ideas like Friendly AI and effective altruism. Frankly, with the kind of muddle-headed thinking those haters display, they don't really deserve the insights that LW provides.
There, that's 8 out of 10 bullet points. I couldn't get the "manipulation" one in because "something sinister" is underspecified; as to the "censorship" one, well, I didn't want to mention the... thing... (ooh, meta! Gonna give myself partial credit for that one.)
Ab, V qba'g npghnyyl ubyq gur ivrjf V rkcerffrq nobir; vg'f whfg n wbxr.
Replies from: Vulture↑ comment by satt · 2014-09-30T22:26:27.393Z · LW(p) · GW(p)
That makes me more curious; I have the feeling there's quite a bit of anti-geek/nerd sentiment among geeks/nerds, not just non-nerds.
(Not sure how to write the above sentence in a way that doesn't sound like an implicit demand for more information! I recognize you might be unable or unwilling to elaborate on this.)
↑ comment by hairyfigment · 2014-09-25T16:23:12.984Z · LW(p) · GW(p)
Your theory may have some value. But let's note that I don't know what it means to cross an instrument 'a/c Payee only', and I'll wager most other people don't know. Do you think most UK citizens did in 1935?
Replies from: Salemicus, Nornagest↑ comment by Salemicus · 2014-09-25T17:56:34.059Z · LW(p) · GW(p)
The use of the word "instrument" makes the phrase more obscure than it needs to be, but it refers to the word "cheque" earlier in the sentence. I suspect most modern British people probably don't know what it means, but most will have noticed that all the cheques in a chequebook have "A/C Payee only" written vertically across the middle - or at least those old enough to have used cheques will! But people in 1935 would have most likely known what it meant, because 1) in those days cheques were extremely widespread (no credit or debit cards) and 2) unlike today, cheques were frequently written by hand on a standard piece of paper (although chequebooks did exist). The very fact that the phrase was used by a popular author writing for a mass audience (the cases were originally published in Punch and The Evening Standard) should incline you in that direction anyway.
Note incidentally that Herbert's most famous case is most likely The Negotiable Cow.
Replies from: hairyfigment↑ comment by hairyfigment · 2014-09-25T20:13:05.144Z · LW(p) · GW(p)
Just fyi, my checks don't say anything like that, and the closest I can find on Google Images just says, "Account Payee."
↑ comment by Nornagest · 2014-09-25T16:37:09.532Z · LW(p) · GW(p)
I don't know for sure, but judging from context I'd say it's probably instructions as to the disposition of a check -- like endorsing one and writing "For deposit only" on the back before depositing it into the bank, as a guarantee against fraud.
Granted, in these days of automatic scanning and electronic funds transfer that's starting to look a little cobwebby itself.
comment by Azathoth123 · 2014-09-15T00:08:09.670Z · LW(p) · GW(p)
You know how people are always telling you that history is actually really interesting if you don’t worry about trivia like dates? Well, that’s not history, that’s just propaganda. History is dates. If you don’t know the date when something happened, you can’t provide the single most obvious reality check on your theory of causation: if you claim that X caused Y, the minimum you need to know is that X came before Y, not afterwards.
Replies from: jaime2000, CCC, soreff, AndHisHorse
↑ comment by jaime2000 · 2014-09-15T05:57:20.131Z · LW(p) · GW(p)
Agree with the general point, though I think people complaining about dates in history are referring to the kind of history that is "taught" in schools, in which you have to e.g. memorize that the Boston Massacre happened on March 5, 1770 to get the right answer on the test. You don't need that level of precision to form a working mental model of history.
Replies from: Nornagest↑ comment by Nornagest · 2014-09-15T06:56:37.632Z · LW(p) · GW(p)
You do need to know dates at close to that granularity if you're trying to build a detailed model of an event like a war or revolution. Knowing that the attack on Pearl Harbor and the Battle of Hong Kong both happened in 1941 tells you something; knowing that the former happened on 7 December 1941 and the latter started on 8 December tells you quite a bit more.
On the other hand, the details of wars and revolutions are probably the least useful part of history as a discipline. Motivations, schools of thought, technology, and the details of everyday life in a period will all get you further, unless you're specifically studying military strategy, and relatively few of us are.
Replies from: private_messaging↑ comment by private_messaging · 2014-09-15T07:04:10.315Z · LW(p) · GW(p)
A particularly stark example may be the exact dates of the bombings of Hiroshima and Nagasaki, and of the official surrender. They help deal with theories such as "they had to drop a bomb on Nagasaki because Japan didn't surrender".
Replies from: None↑ comment by [deleted] · 2014-09-15T15:57:16.270Z · LW(p) · GW(p)
Be careful. That sounds reasonable until you also learn that the Japanese war leadership didn't even debate Hiroshima or Nagasaki for more than a brief status update after they happened, yet talk of surrender and the actual declaration immediately followed the declaration of war by the Soviets and the landing of troops in Manchuria and the Sakhalin Islands. Japan, it seems, wanted to avoid the German post-war fate of a divided people.
The general problem with causation in history is that you often don't know what you don't know. (It's a tangential point, I know.)
Replies from: Nornagest↑ comment by Nornagest · 2014-09-15T21:17:07.177Z · LW(p) · GW(p)
I'm not necessarily saying this is wrong, but I don't think it can be shown to be significantly more accurate than the "bomb ended the war" theory by looking at dates alone. The Soviet declaration of war happened on 8 August, two days after Hiroshima. Their invasion of Manchuria started on 9 August, hours before the Nagasaki bomb was dropped, and most sources say that the upper echelons of the Japanese government decided to surrender within a day of those events. However, their surrender wasn't broadcast until 15 August, and by then the Soviets had opened several more fronts. (That is, that's when Emperor Hirohito publicized his acceptance of the Allies' surrender terms. It wasn't formalized until 2 September, after Allied occupation had begun.)
Dates aside, though, it's fascinating to read about the exact role the Soviets played in the end of the Pacific War. Stalin seems to have gotten away with some spectacularly Machiavellian moves.
Replies from: None↑ comment by CCC · 2014-09-16T09:02:47.738Z · LW(p) · GW(p)
This tells me that the order of events is important, and not the actual dates themselves. It is true that, if I want to claim that X caused Y, I need to know that X happened before Y; but it does not make any difference whether they both happened in 1752 or 1923.
Replies from: Lumifer, Azathoth123, Zubon, elharo↑ comment by Azathoth123 · 2014-09-17T02:18:34.760Z · LW(p) · GW(p)
The time between them also matters. If X happened a year before Y it is more plausible that X caused Y then if X happened a century before Y.
↑ comment by Zubon · 2014-09-18T22:24:03.436Z · LW(p) · GW(p)
Great. I have approximately 6000 years worth of events here, happening across multiple continents, with overlapping events on every scale imaginable from "in this one village" to "world war." If you can keep the relationships between all those things in your memory consistently using no index value, go for it. If not, I might recommend something like a numerical system that puts those 6000 years in order.
I would not recommend putting "0" at a relatively arbitrary point several thousand years after the events in question have started.
Replies from: CCC↑ comment by CCC · 2014-09-26T08:49:30.177Z · LW(p) · GW(p)
I do agree that an index value is a very useful and intuitive-to-humans way to represent the order of events, especially given the sheer number of events that have taken place through history. However, I do think it's important to note that the index value is only present as a representation of the order of events (and of the distance between them, which, as other commentators have indicated, is also important) and has no intrinsic value in and of itself beyond that.
↑ comment by elharo · 2014-09-17T12:21:24.404Z · LW(p) · GW(p)
It's not just the order but the distance that matters. If you want to say that X caused Y, but X happened a thousand years before Y, chances are that you're at the very least ignoring a lot of additional causes.
In the end, I think, dates are important. It's only the arbitrary positioning of a starting date (e.g. Christian vs. Jewish vs. Chinese calendar) that genuinely doesn't matter; but even that much is useful for us to talk about historical events. I.e. it doesn't really matter where we put year 0, but it matters that we agree to put it somewhere. (Ideally we would have put it somewhat further back in time, maybe nearer the beginning of recorded history, so we didn't have to routinely do BCE/CE conversions in our heads, but that ship has sailed.)
↑ comment by soreff · 2014-09-15T00:44:11.711Z · LW(p) · GW(p)
Or that the interval between X and Y is spacelike, and neither is in the other's forward light cone... :)
Replies from: shminux↑ comment by Shmi (shminux) · 2014-09-15T23:52:02.394Z · LW(p) · GW(p)
Some day the light speed delay might become an issue in historical investigations, but not quite yet :) Even then in the statement "if you claim that X caused Y, the minimum you need to know is that X came before Y, not afterwards" the term "before" implies that one event is in the causal future of the other.
↑ comment by AndHisHorse · 2014-09-15T01:45:14.235Z · LW(p) · GW(p)
"Dateless history" can be interesting without being accurate or informative. As long as I don't use it to inform my opinions on the modern world either way, it can be just as amusing and useful as a piece of fiction.
comment by Torello · 2014-09-01T19:06:59.676Z · LW(p) · GW(p)
Perceiving magic is precisely the same thing as perceiving the limits of your own understanding.
-Jaron Lanier, Who Owns the Future?, (e-reader does not provide page number)
Replies from: John_Maxwell_IV, None, Luke_A_Somers↑ comment by John_Maxwell (John_Maxwell_IV) · 2014-09-02T00:00:46.463Z · LW(p) · GW(p)
That doesn't seem quite true... if I'm confused while reading a textbook, I may be perceiving the limits of my understanding but not perceiving magic.
Replies from: soreff↑ comment by soreff · 2014-09-06T05:07:45.274Z · LW(p) · GW(p)
Agreed. I think what Lanier should have said is that perceptions of magic are a subset of things one doesn't understand, rather than claiming that the two are equal. Bugs that I am currently hunting but haven't nailed down are things I don't understand, but they certainly don't seem magical.
Replies from: Azathoth123↑ comment by Azathoth123 · 2014-09-08T03:10:21.844Z · LW(p) · GW(p)
Bugs that I am currently hunting but haven't nailed down are things I don't understand, but they certainly don't seem magical.
At least you hope not.
↑ comment by Luke_A_Somers · 2014-09-06T01:54:37.785Z · LW(p) · GW(p)
You could also be perceiving something way, way past the limits of your own understanding, or alternately perceiving something which would be well within the limits of your understanding if you were looking at it from a different angle.
comment by TeMPOraL · 2014-09-14T01:53:53.939Z · LW(p) · GW(p)
-- Mother Gaia, I come on behalf of all humans to apologize for destroying nature (...). We never meant to kill nature.
-- You're not killing nature, you're killing yourself. That's what I mean by self-centered. You think that just because you can't live, then nothing can. You're fucking yourself over big time, and won't be missed.
From a surprisingly insightful comic commenting on the whole notion of "saving the planet".
Replies from: simplicio↑ comment by simplicio · 2014-09-17T19:46:47.179Z · LW(p) · GW(p)
This framing is marginally saner, but the weird panicky eschatology of pop-environmentalism is still present. Apparently the author thinks that using up too many resources, or perhaps global warming, currently represent human extinction level threats?
comment by Jayson_Virissimo · 2014-09-18T23:38:29.268Z · LW(p) · GW(p)
You can’t see anything properly while your eyes are blurred with tears.
-- C. S. Lewis, A Grief Observed
comment by lmm · 2014-09-16T23:02:51.264Z · LW(p) · GW(p)
"... Is it wrong to hold on to that kind of hope?"
[having poisoned her] "I have not come for what you hoped to do. I've come for what you did."
- V for Vendetta (movie).
↑ comment by Azathoth123 · 2014-09-17T05:15:33.708Z · LW(p) · GW(p)
Given that you've said in another thread that you consider "blame" an incoherent concept, I don't understand what you think this quote means.
Replies from: lmm↑ comment by lmm · 2014-09-17T06:19:46.491Z · LW(p) · GW(p)
That people will judge your morality by your actions without regard to your intentions. I don't claim that V is particularly rational, but he embodies (exaggerated versions of) traits that real people have. Our moral decisions have consequences in how we are treated.
Replies from: Azathoth123↑ comment by Azathoth123 · 2014-09-18T02:29:18.079Z · LW(p) · GW(p)
Our moral decisions have consequences in how we are treated.
This is what most people mean by "blame".
Replies from: hairyfigment, lmm↑ comment by hairyfigment · 2014-09-18T02:48:13.586Z · LW(p) · GW(p)
Possibly in the eyes of the future, if there is one, we'll all look like brain-damaged children who aren't morally to blame for much of anything. Our actions still have consequences (for example, they might determine whether humanity has a future).
comment by hesperidia · 2014-10-11T01:19:53.061Z · LW(p) · GW(p)
Oromis asked, “Can you tell me, what is the most important mental tool a person can possess?”
[Eragon makes a few wrong guesses, like determination and wisdom.]
“A fair guess, but, again, no. The answer is logic. Or, to put it another way, the ability to reason analytically. Applied properly, it can overcome any lack of wisdom, which one only gains through age and experience.”
Eragon frowned. “Yes, but isn’t having a good heart more important than logic? Pure logic can lead you to conclusions that are ethically wrong, whereas if you are moral and righteous, that will ensure that you don’t act shamefully.”
A razor-thin smile curled Oromis’s lips. “You confuse the issue. All I wanted to know was the most useful tool a person can have, regardless of whether that person is good or evil. I agree that it’s important to be of a virtuous nature, but I would also contend that if you had to choose between giving a man a noble disposition or teaching him to think clearly, you’d do better to teach him to think clearly. Too many problems in this world are caused by men with noble dispositions and clouded minds.”
-- Eldest, by Christopher Paolini
(This is not a recommendation for the book series. The book has Science Elves, but they are not thought through rationally or worldbuilt to any logical conclusion whatsoever. The context of this quote is apparently cheerleading for "science is good" without any actual understanding of how science or rationality works.)
(I would love a rational version of Eragon by way of steelmanning the Science Elves. But then you'd probably need to explain why they haven't taken over the world.)
Replies from: Document, ChristianKl↑ comment by Document · 2014-10-11T06:57:53.841Z · LW(p) · GW(p)
"I hate being ignorant. For me, a question unanswered is like a thorn in my side that pains me every time I move until I can pluck it out."
"You have my sympathy."
"Why is that?"
"Because if that is so, you must spend every waking hour in mortal agony, for life is full of unanswerable questions."
-- Eragon and Angela, Brisingr, by the same author
Replies from: Jiro↑ comment by Jiro · 2014-10-13T06:55:12.828Z · LW(p) · GW(p)
Someone who says something like the first sentence generally means something like "questions that are significant and in an area I am concerned with". They don't mean "I don't know exactly how many atoms are in the moon, and I find that painful" (unless they have severe OCD based around the moon), and to interpret it that way is to deliberately misinterpret what the speaker is saying so that you can sound profound.
But then, I've been on the Internet. This sort of thing is an endemic problem on the Internet, except that it's not always clear how much is deliberate misinterpretation and how much is people who just don't comprehend context and implication.
(Notice how I've had to add qualifiers like 'generally' and "except for (unlikely case)" just for preemptive defense against that sort of thing.)
Replies from: ChristianKl, Document↑ comment by ChristianKl · 2014-10-16T11:21:40.207Z · LW(p) · GW(p)
Someone who says something like the first sentence generally means something like "questions that are significant and in an area I am concerned with"
If you don't have any open questions in that category, then you aren't really living as an intellectual.
In science, questions are like a hydra. After solving a scientific problem you often have more questions than you had when you started.
Schwartz's article on the issue is quite illustrative. If you can't deal with the emotional effects that come with looking at an open question and having it open for months and years you can't do science.
You won't contribute anything to the scientific world of ideas if you can only stay engaged with an open question for an hour and not for months and years. Of course there are plenty of people in the real world who don't face questions with curiosity but who are in pain when dealing with them. To me that seems like a dull life to live, because they don't concern themselves with living an intellectual life.
Replies from: slutbunwaller, Jiro↑ comment by slutbunwaller · 2014-10-16T14:18:24.858Z · LW(p) · GW(p)
If you don't have any open questions in that category, then you aren't really living as an intellectual.
I'm not sure that's a critical part of any definition of the word "intellectual".
Replies from: ChristianKl↑ comment by ChristianKl · 2014-10-16T15:38:00.267Z · LW(p) · GW(p)
It's not sufficient to make you an intellectual, but if you don't care about questions that aren't solved in short amounts of time because that's very uncomfortable for you, you won't have a deep understanding of anything. You might memorise the teacher's password in many domains, but that's not what being an intellectual is about.
↑ comment by Jiro · 2014-10-16T14:29:04.326Z · LW(p) · GW(p)
"You must spend every waking hour in mortal agony, for life is full of unanswerable questions." carries the connotation that someone cannot answer large numbers of every day questions, not that they can't answer a few questions in specialized areas.
But the original statement about unanswered questions being painful, in context, does connote that they are referring to a few questions in specialized areas.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-10-16T16:08:41.686Z · LW(p) · GW(p)
"You must spend every waking hour in mortal agony, for life is full of unanswerable questions." carries the connotation that someone cannot answer large numbers of every day questions, not that they can't answer a few questions in specialized areas.
In this case it illustrates how the character in question couldn't really imagine living a life without unanswered questions. Given that it's a Science Elf that fits.
For him daily life is about deep questions.
Replies from: Jiro↑ comment by Jiro · 2014-10-16T17:06:05.746Z · LW(p) · GW(p)
"Unanswered questions" connotes different things in the two different places, though. In one place it connotes "all unanswered questions of whatever kind" and in another it connotes "important unanswered questions". The "cleverness" of the quote relies on confusing the two.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-10-16T17:07:12.956Z · LW(p) · GW(p)
Important depends on whether you care about something. If you have a scientific mindset, then you care about a lot of questions and want answers for them.
Replies from: Jiro↑ comment by ChristianKl · 2014-10-16T11:03:15.587Z · LW(p) · GW(p)
“A fair guess, but, again, no. The answer is logic. Or, to put it another way, the ability to reason analytically. Applied properly, it can overcome any lack of wisdom, which one only gains through age and experience.”
That's not true. Logic doesn't protect you from GIGO (garbage-in-garbage-out). Actually knowing something about the subject one is interacting with is very important.
comment by Azathoth123 · 2014-09-17T05:12:38.109Z · LW(p) · GW(p)
Most try to take a fixed time window (say one day, one week, etc.) and try to predict events.
To predict, find events that have certain occurrence but uncertain timing (say, the fragile will break) rather than certain timing but uncertain occurrence.
Replies from: simplicio
↑ comment by simplicio · 2014-09-18T13:32:05.038Z · LW(p) · GW(p)
I don't really get this. It seems like both types of prediction matter quite a bit.
The only way I can interpret it that makes sense to me is something like:
Thinking really hard about the infinity of things that might happen this week is an unproductive way to generate predictions, because the hypothesis space is too large and you're just going to excessively privilege some salient hypothesis.
Is he giving advice about making correct predictions given that you just randomly feel like predicting stuff? Or is he giving advice about how to predict things you actually care about?
Replies from: Azathoth123, hairyfigment↑ comment by Azathoth123 · 2014-09-19T01:09:26.780Z · LW(p) · GW(p)
Is he giving advice about making correct predictions given that you just randomly feel like predicting stuff? Or is he giving advice about how to predict things you actually care about?
The latter. Specifically predicting high impact events.
↑ comment by hairyfigment · 2014-09-18T16:14:45.846Z · LW(p) · GW(p)
People predicted the housing bubble collapse using Taleb's reasoning.
Replies from: hairyfigment↑ comment by hairyfigment · 2014-09-18T19:18:04.335Z · LW(p) · GW(p)
Did someone other than Eugene Nier downvote this? If so, how is the parent not a concrete example of "how to predict things you actually care about?"
Replies from: Lumifer, Manfred↑ comment by Lumifer · 2014-09-18T19:33:15.257Z · LW(p) · GW(p)
The parent is a concrete example of selection (or survivor) bias. Picking post factum one case which turned out to be right (and ignoring unknown but possibly large number of cases which turned out to be wrong and faded into the dark pit of obscurity) does not help you predict anything.
Consider a forecast: the stock market will crash. No idea when, but at some point it will. Is it a safe prediction to make? Yes, it is. Is it a useful prediction? No, it is not.
Taleb's advice is good for burnishing one's reputation as a psychic. It's not so good for making actionable forecasts.
To recall a well-known remark by Paul Samuelson,
To prove that Wall Street is an early omen of movements still to come in GNP, commentators quote economic studies alleging that market downturns predicted four out of the last five recessions. That is an understatement. Wall Street indexes predicted nine out of the last five recessions!
ETA: So the guy sold his Washington DC condo in 2004? That looks to have been a pretty poor decision.
Replies from: hairyfigment, hairyfigment↑ comment by hairyfigment · 2014-09-18T19:55:14.648Z · LW(p) · GW(p)
Of course it's a useful bloody prediction! It means you shouldn't put yourself in a position where any stock market crash will kill you or drastically lower your standard of living.
Replies from: Lumifer↑ comment by Lumifer · 2014-09-18T20:05:03.467Z · LW(p) · GW(p)
LOL. In that case I have a lot of useful predictions to make:
- Inflation will wipe out the savings held in cash
- There will be riots in major cities
- Public transportation will have a horrible accident with many people killed
- Some food in a supermarket will turn out to be tainted
- There will be a serial killer on the loose
- You will die
...I can easily continue...
P.S. Maybe you should mention your advice to all the financial gurus on LW who insist that the only place to put your money into is an equities index fund X-D
↑ comment by hairyfigment · 2014-09-18T20:17:02.969Z · LW(p) · GW(p)
Did you just link to the change in the housing market over the past year? Washington Post:
D.C.’s median sale price soared to $460,000 from $405,000 in March 2012, an increase of 13.6 percent year over year. (sic)
My link:
Sold: $445,000 in May 2004
He would prefer to own if it made financial sense. "I expect we'll probably end up buying again, but only when prices adjust," Baker says.
Let's subtract the $1000 he paid for the best argument against the existence of a housing bubble. On the face of it, you appear to be arguing with a man who made $39,000 cash by betting on the obvious - though the actual number may of course be less.
Your general argument seems to deny the usefulness of hedging.
Replies from: Lumifer↑ comment by Lumifer · 2014-09-18T20:25:36.182Z · LW(p) · GW(p)
Did you just link to the change in the housing market over the past year?
There is a multi-year graph of real estate prices on that web page, if you click the "Max" button you will get a plot of prices from August 2004 till today.
Your general argument seems to deny the usefulness of hedging
No, my general argument denies the usefulness of forecasts which don't provide time estimates other than "at some point in the future".
Let me offer you three more examples of such forecasts:
- The stock market will go up 20%
- The stock market will go down 20%
- The stock market will stay flat for a while
↑ comment by hairyfigment · 2014-09-18T22:54:54.029Z · LW(p) · GW(p)
The website does work when I enable cookies, and it says he sold his apartment for much more than the median price. I think it also supports the claim that after buying a house, he had a profit left of roughly 10 percent of that house's value (the amount of equity he supposedly said he wouldn't mind losing post-purchase).
Your general argument seems to misrepresent Taleb. Again, we have here a case of someone doing pretty well by focusing on the predictions you can make. (His profit was likely sub-optimal, but that sounds like an example of a prediction you can't make.) And hedging can indeed protect you against the events you keep weirdly suggesting are useless to think about.
Replies from: Lumifer↑ comment by Lumifer · 2014-09-19T02:10:11.801Z · LW(p) · GW(p)
Again, we have here a case of someone doing pretty well by focusing on the predictions you can make.
If I may point you to the first paragraph of this post..?
you keep weirdly suggesting are useless to think about
I don't believe I said anything at all about what's useful or useless to think about.
comment by Shmi (shminux) · 2014-09-12T17:44:52.166Z · LW(p) · GW(p)
In 2014, marriage is still the best economic arrangement for raising a family, but in most other senses it is like adding shit mustard to a shit sandwich. If an alien came to earth and wanted to find a way to make two people that love each other change their minds, I think he would make them live in the same house and have to coordinate every minute of their lives.
Replies from: elharo, CoffeeStain, simplicio, bramflakes
↑ comment by elharo · 2014-09-14T11:00:34.311Z · LW(p) · GW(p)
True or false, I'm trying but I really can't see how this is a rationality quote. It is simply a pithy and marginally funny statement about one topic.
I think it's time to add one new rule to the list, right at the top:
- All quotes should be on the subject of rationality, that is how we develop correct models of the world. Quotes should not be mere statements of fact or opinion, no matter how true, interesting, funny, or topical they may be. Quotes should teach people how to think, not what to believe.
Can anyone say that in fewer words?
Replies from: shminux↑ comment by Shmi (shminux) · 2014-09-14T17:54:35.732Z · LW(p) · GW(p)
I really can't see how this is a rationality quote.
This is how:
- it exposes the common fallacy that people who love each other should get married to make their relationship last
- it uses the standard sunk-cost trap avoidance technique to make this fallacy evident
The rest of the logic in the link I gave is even more interesting (and "rational").
It is simply a pithy and marginally funny statement about one topic.
Making one's point in a memorable way is a rationality technique.
As for your rule, it appears to me so subjective as to be completely useless. Where one sees "what to believe", another sees "how to think".
Replies from: elharo↑ comment by elharo · 2014-09-15T11:24:31.524Z · LW(p) · GW(p)
Assume for the sake of argument, the statement is correct.
This quote does not expose a fallacy, that is, an error in reasoning. There is nothing in this quote to indicate the rationality shortcoming that causes people to believe the incorrect statement. Rather, it exposes an error of fact. The rationality question is why people come to believe errors of fact and how we can avoid that.
You may be reading the sunk cost fallacy into this quote, or it may be in an unquoted part of the original article, but I don't see it here. If the rest of the article better elucidates rationality techniques that led Adams to come to this conclusion, then likely the wrong extract from the article was selected to quote.
Making one's point in a memorable (including humorous) way may be an instrumental rationality technique. That is, it helps to convince other people of your beliefs. However in my experience it is a very bad epistemic rationality technique. In particular it tends to overweight the opinions of people like Adams who are very talented at being funny, while underweighting the opinions of genuine experts in a field, who are somewhat dry and not nearly as amusing.
↑ comment by CoffeeStain · 2014-09-12T23:38:23.220Z · LW(p) · GW(p)
Living in the same house and coordinating lives isn't a method for ensuring that people stay in love; being able to is proof that they are already in love. An added social construct is a perfectly reasonable option to make it harder to change your mind.
Replies from: shminux↑ comment by Shmi (shminux) · 2014-09-13T00:28:35.892Z · LW(p) · GW(p)
The point of the quote is that it tends to make it harder to stay in love. Which is the opposite of what people want when they get married.
Replies from: Azathoth123↑ comment by Azathoth123 · 2014-09-13T18:40:26.362Z · LW(p) · GW(p)
That's because modern marriage is different from how it traditionally worked:
Marriage: Originally, within the lives of older married people, an irrevocable commitment to live together and raise the resulting children. Now the point of marriage is divorce, the legal authority of the wife over a husband on pain of confiscation of his assets and income. Some people attempt to use Church and social pressure to enforce old type marriage, but hard to find an old type church.
Replies from: gjm
↑ comment by gjm · 2014-09-28T13:44:44.838Z · LW(p) · GW(p)
That would be an interesting point, if it weren't batshit insane.
- "The point of marriage is divorce"? Really?
- If Jim's account were right, then to a very good first approximation no man would ever choose to marry.
- "Confiscation of his assets"? A large part of the point of the "old type marriage" (and for that matter the not-so-old) is that the partners' assets are shared.
(In any case, even pretending that Jim's correct, it's not clear to me how that explains the alleged problem with marriage, namely that it makes it harder for couples to stay in love. Suppose we have two kinds of marriage: "Old", completely irrevocable, and "New", open to divorce on terms ruinous to the husband. Why would a "New" marriage do more to make it harder to stay in love than an "Old" marriage? Especially if, as Scott Adams suggests, the actual mechanism by which marriage allegedly makes it harder to stay in love is by requiring the couple to coordinate every minute of their lives.)
Replies from: Azathoth123↑ comment by Azathoth123 · 2014-09-28T17:21:46.496Z · LW(p) · GW(p)
If Jim's account were right, then to a very good first approximation no man would ever choose to marry.
Well a lot fewer men are marrying these days. Most of the ones who are either expect to get the old definition or haven't yet realized the definition has changed on them.
"Confiscation of his assets"? A large part of the point of the "old type marriage" (and for that matter the not-so-old) is that the partners' assets are shared.
In practice the husband is the one who is providing most of said assets. Also in the old definition the assets were shared but since the marriage was irrevocable no one was going to confiscate anyone else's assets.
Replies from: gjm↑ comment by gjm · 2014-09-28T18:10:09.921Z · LW(p) · GW(p)
Most of the ones who are either expect [...] or haven't yet realised [...]
Evidence? (I have noticed that on past occasions when you've made confident pronouncements -- in some cases, ones that seemed to imply being in possession of quantitative data -- you've been curiously reluctant to disclose the evidence that supports them.)
In practice the husband is the one who is providing most of said assets.
Sometimes, at least nominally. But ...
Imagine a family in which the husband works full-time at a difficult, hard-working, high-status, high-income job, and the wife looks after the house and the children. (The neo-reactionaries' ideal, right?) At least part of what's happening here is that the wife is foregoing money-earning opportunities in favour of work that doesn't receive any direct financial compensation, and by doing so she enables her husband to focus on that tough job of his. All else being equal, he will have more time and energy for work if he doesn't have to do the cooking and laundry and childcare. And that is likely to lead to better success at work, promotions, and higher income.
Now, of course the income from that is nominally his, not hers. And if you choose to say that everything that comes in from his employer, and any gains on investments made with that money, are "his assets", then indeed you'll see what happens in a divorce as "confiscation of his assets". But I think that's a superficial view.
↑ comment by simplicio · 2014-09-12T18:28:43.651Z · LW(p) · GW(p)
What if he wanted to make them stay in love?
Replies from: shminux↑ comment by Shmi (shminux) · 2014-09-12T19:31:59.089Z · LW(p) · GW(p)
Then he would let them work out a custom solution free of societal expectations, I suspect. Besides, an average romantic relationship rarely survives more than a few years, unless both parties put a lot of effort into "making it work", and there is no reason beyond prevailing social mores (and economic benefits, of course) to make it last longer than it otherwise would.
Replies from: simplicio↑ comment by simplicio · 2014-09-12T19:56:40.746Z · LW(p) · GW(p)
Just to clarify, you figure the optimal relationship pattern (in the absence of societal expectations, economic benefits, and I guess childrearing) is serial monogamy? (Maybe the monogamy is assuming too much as well?)
Replies from: shminux, Lumifer↑ comment by Shmi (shminux) · 2014-09-12T20:59:45.081Z · LW(p) · GW(p)
Certainly serial monogamy works for many people, since this is the current default outside marriage. I would not call it "optimal", it seems more like a decent compromise, and it certainly does not work for everyone. My suspicion is that those happy in a life-long exclusive relationship are a minority, as are polyamorists and such.
I expect domestic partnerships to slowly diverge from the legal and traditional definition of marriage. It does not have to be about just two people, about sex, or about child raising. If 3 single moms decide to live together until their kids grow up, or 5 college students share a house for the duration of their studies, they should be able to draw up a domestic partnership contract which qualifies them for the same assistance, tax breaks and next-of-kin rights married couples get. Of course, this is a long way away still.
Replies from: simplicio↑ comment by simplicio · 2014-09-17T19:38:59.477Z · LW(p) · GW(p)
To my mind, the giving of tax breaks etc. to married folks occurs because (rightly or wrongly) politicians have wanted to encourage marriage.
I agree that in principle there is nothing wrong with 3 single moms or 5 college students forming some sort of domestic partnership contract, but why give them the tax breaks? Do college kids living with each other instead of separately create some sort of social benefit that "we" the people might want to encourage? Why not just treat this like any other contract?
Apart from this, I think the social aspect of marriage is being neglected. Marriage for most people is not primarily about joint tax filing, but rather about publicly making a commitment to each other, and to their community, to follow certain norms in their relationship (e.g., monogamy; the specific norms vary by community). This is necessary because the community "thinks" pair bonding and childrearing are important/sacred/weighty things. In other words, "married" is a sort of honorific.
Needless to say, society does not think 5 college students sharing a house is an important/sacred/weighty thing that needs to be honoured.
This thick layer of social expectations is totally absent for the kind of arm's-length domestic partnership contract you propose, which makes me wonder why anybody would either want to call it marriage or frame it as being an alternative to marriage.
Replies from: therufs, Lumifer, army1987↑ comment by therufs · 2014-09-17T21:05:45.294Z · LW(p) · GW(p)
why anybody would either want to call it marriage
I don't think anyone suggested that?
or frame it as being an alternative to marriage.
Some marriages are of convenience, and the honorific sense doesn't apply as well to people who don't fit the romantic ideal of marriage.
↑ comment by Lumifer · 2014-09-17T20:29:27.516Z · LW(p) · GW(p)
which makes me wonder why anybody would either want to call it marriage
I could make exactly the same argument about divorce-able marriage and wonder why would anyone call this get-out-whenever-you-want-to arrangement "marriage" :-D
The point is, the "thick layer of social expectations" is not immutable.
Replies from: simplicio, Azathoth123↑ comment by simplicio · 2014-09-17T20:50:24.947Z · LW(p) · GW(p)
If traditional marriage is a sparrow, then marriage with no-fault divorce is a penguin, and 5 college kids sharing a house is a centipede. Type specimen, non-type specimen, wrong category.
Social expectations are mutable, yes - what of it? Do you think it's desirable or inevitable that marriage just become a fancy historical legal term for income splitting on one's tax return? Do you think sharing a house in college is going to be, or ought to be, hallowed and encouraged?
↑ comment by Azathoth123 · 2014-09-18T02:32:51.009Z · LW(p) · GW(p)
I could make exactly the same argument about divorce-able marriage and wonder why anyone would call this get-out-whenever-you-want-to arrangement "marriage" :-D
Agreed, no fault divorce laws were a huge mistake.
Replies from: Lumifer
↑ comment by A1987dM (army1987) · 2014-09-18T08:20:10.344Z · LW(p) · GW(p)
Do college kids living with each other instead of separately create some sort of social benefit that "we" the people might want to encourage?
It reduces the demand for real estate, which lowers its price. Of course this is a pecuniary externality so the benefit to tenants is exactly counterbalanced by the harm to landlords, but given that landlords are usually much wealthier than tenants...
Replies from: Azathoth123
↑ comment by Azathoth123 · 2014-09-19T01:05:45.478Z · LW(p) · GW(p)
Yes and the social benefit is already captured by the roommates in the form of paying less rent.
↑ comment by bramflakes · 2014-09-12T18:33:41.572Z · LW(p) · GW(p)
The idea that marriage is purely about love is a recent one.
Adams' lifestyle might work for a certain kind of wealthy high IQ rootless cosmopolitan but not for the other 95% of the world.
Replies from: shminux
↑ comment by Shmi (shminux) · 2014-09-12T19:26:59.043Z · LW(p) · GW(p)
If this is a criticism, it's wide of the mark.
Note his disclaimer about "the best economic arrangement". And he certainly speaks about the US only.
Replies from: bramflakes
↑ comment by bramflakes · 2014-09-12T19:35:30.418Z · LW(p) · GW(p)
And it speaks volumes that he views it as an "economic arrangement", like he's channeling Bryan Caplan.
Replies from: gjm
↑ comment by gjm · 2014-09-28T13:49:03.883Z · LW(p) · GW(p)
I don't understand.
It looks to me as if Adams's whole point is that marriage isn't supposed to be primarily an economic arrangement, it's supposed to be an institution that provides couples with a stable context for loving one another, raising children, etc., but in fact (so he says) the only way in which it works well is economically, and in any other respect it's a failure.
It's as if I wrote "Smith's new book makes a very good doorstop, but in all other respects I have to say it seems to me an abject failure". Would you say it speaks volumes that I view Smith's book as a doorstop? Surely my criticism only makes sense because I think a book is meant to be other things besides a doorstop.
comment by NancyLebovitz · 2014-09-21T05:38:13.634Z · LW(p) · GW(p)
Replies from: arundelo
"The spatial anomaly has interacted with the tachyonic radiation in the nebula, it's interfering with our sensors. It's impossible to get a reading."
"There's no time - we'll have to take the ship straight through it!"
"Captain, I advise against this course of action. I have calculated the odds against our surviving such an action at three thousand, seven hundred and forty-five to one."
"Damn the odds, we've got to try... wait a second. Where, exactly, did you get that number from?"
"I hardly think this is the time for-"
"No. No, fuck you, this is exactly the time. The fate of the galaxy is at stake. Trillions of lives are hanging in the balance. You just pulled four significant digits out of your ass, I want to see you show your goddamn work."
"Well, I used the actuarial data from the past fifty years, relating to known cases of ships passing through nebulae that are interacting with spatial anomalies. There have been approximately two million such incidents reported, with only five hundred and forty-two incidents in which the ship in question survived intact."
"And did you at all take into account that ship building technology has improved over the past fifty years, and that ours is not necessarily an average ship?"
"Indeed I did, Captain. I weighted the cases differently based on how recent they were, and how close the ship in question was in build to our own. For example, one of the incidents with a happy ending was forty-seven years ago, but their ship was a model roughly five times our size. As such, I counted the incident as having twenty-four percent of the relevance of a standard case."
"But what of our ship's moxie? Can you take determination and drive and the human spirit into account?"
"As a matter of fact I can, Captain. In our three-year history together, I have observed that both you and this ship manage to beat the odds with a measurable regularity. To be exact, we tend to succeed twenty-four point five percent more often than the statistics would otherwise indicate - and, in fact, that number jumps to twenty-nine point two percent specifically in cases where I state the odds against our success to three significant digits or greater. I have already taken that supposedly 'unknowable' factor into account with my calculations."
"And you expect me to believe that you've memorized all these case studies and performed this ridiculously complicated calculation in your head within the course of a normal conversation?"
"Yes. With all due respect to your species, I am not human. While I freely admit that you do have greater insight into fields such as emotion, interpersonal relations, and spirituality than I do, in the fields of memory and calculation, I am capable of feats that would be quite simply impossible for you. Furthermore, if I may be perfectly frank, the entire purpose of my presence on the bridge is to provide insights such as these to help facilitate your command decisions. If you're not going to heed my advice, why am I even here?"
"Mm. And we're still sitting at three thousand seven hundred to one against?"
"Three thousand, seven hundred and forty five to one."
"Well, shit. Well, let's go around, then."
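The android's arithmetic can be checked with a quick sketch using only the numbers in the quote (the "moxie" adjustment here is a naive multiplicative guess on my part; the character's stated 3,745:1 presumably also folds in the case-by-case weighting, which isn't reproduced here):

```python
# Raw actuarial data quoted by the android.
incidents = 2_000_000
survivals = 542

# Unweighted survival probability and the corresponding odds against.
p_survive = survivals / incidents
odds_against = (1 - p_survive) / p_survive  # roughly 3,689:1 before any weighting

# "Moxie" adjustment: the crew reportedly beats odds stated to three or more
# significant digits by 29.2%, so scale the success probability accordingly.
p_adjusted = p_survive * 1.292
odds_adjusted = (1 - p_adjusted) / p_adjusted

print(f"Unweighted odds against: {odds_against:,.0f} to 1")
print(f"Moxie-adjusted odds against: {odds_adjusted:,.0f} to 1")
```

Even with the generous adjustment, the odds stay in the thousands-to-one range, which is the quote's point: a measurable "unknowable factor" barely moves a number that extreme.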
comment by elharo · 2014-09-22T11:02:58.175Z · LW(p) · GW(p)
Economics is the study of how to utilize limited resources to achieve good ends. And good, of course, is in the eye of the beholder, defined by humans. But economists don’t agree with each other about ends or means. They can’t agree on the efficacy of money printing or austerity. They keep changing their minds every few years about conventional wisdom while at every instant appearing to be certain that they are right. My gripe with economists is not that their models don’t work well – they don’t, look at the role of central banks in the financial crisis – but that they seem so reluctant to acknowledge the riskiness of their advice. And yet, beware their fearsome unelected power. Anyone visiting from Mars last year and asking to be taken to our leader would undoubtedly expect to meet Bernanke.
As a result their public arguments have an incestuous yet masturbatory quality that is exhausting to follow. The only field more self-confidently but just as regularly wrong as economics is nutrition, whose recommendations to shun butter/margarine or red meat/carbohydrates regularly reverse themselves.
Natural scientists (physicists, chemists, biologists) have had frightful power, and not always used it well. But at least they can more or less agree about truth and efficacy. Economists cannot, except by using statistical regressions which are often flawed and prove little.
-- Emanuel Derman, Columbia University, Why is Thomas Piketty's 700-page book a bestseller?
Replies from: None, sixes_and_sevens
↑ comment by [deleted] · 2014-09-22T17:51:18.229Z · LW(p) · GW(p)
The preference of economists, not to mention other kinds of experts, to attribute China’s high savings rates to Confucian values is all the more strange when we remember that just fifty years ago East Asian countries tended to have very low savings rates, and moralizing economists then had no trouble blaming Asia’s seemingly intractable poverty and low savings on Confucian values, which for much of Chinese history had been criticized for encouraging laziness and spendthrift habits. Economists want eagerly to assign virtue or vice, but sometimes it is easier simply to stick with arithmetic.
Michael Pettis -- Not with a Bank but with a Whimper
Replies from: Vaniver, Lumifer
↑ comment by Vaniver · 2014-09-22T19:28:53.078Z · LW(p) · GW(p)
I am unable to find any data on the Chinese national savings rate before 1980, and I am surprised by the claim that it was remarkably low. Does the author cite data to that effect (or any authors blaming that on Confucianism)?
[edit] Found household data for Japan going back to 1962, when it was solidly higher than European numbers.
Replies from: None, bramflakes
↑ comment by [deleted] · 2014-09-22T20:45:16.245Z · LW(p) · GW(p)
Ummm... household data for Japan isn't household data for China. But your surprise is your prior, unfortunately. I can't say I know something particular here, as I know damn well I could waste my evening gathering real evidence, rendering any opinion I might give right now uselessly uninformed.
What I had wanted to emphasize from both the quote and the interview, though, was the degree to which we can be far more rational about economics simply by acknowledging the realities of basic accounting: surpluses somewhere must equal deficits elsewhere. Before this reality, huge quantities of implicitly normative "economics" dissolve.
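The accounting identity being invoked can be made concrete with a toy example (the economies and figures below are made up purely for illustration):

```python
# A closed three-economy world: every surplus is, by construction,
# someone else's deficit, so the balances must sum to zero.
trade_balances = {
    "surplus_economy": 120.0,
    "deficit_economy_1": -70.0,
    "deficit_economy_2": -50.0,
}

world_total = sum(trade_balances.values())
print(f"World total balance: {world_total}")  # must be 0 in a closed system
```

Any claim that one economy should shrink its surplus is therefore, arithmetically, also a claim that some other economy's deficit must shrink; the identity says nothing about which side is "virtuous."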
Replies from: Vaniver, ChristianKl
↑ comment by Vaniver · 2014-09-23T21:18:33.103Z · LW(p) · GW(p)
Ummm... household data for Japan isn't household data for China.
The quote said "East Asian" countries, hence Japan.
What I had wanted to emphasize from both the quote and the interview, though, was the degree to which we can be far more rational about economics simply by acknowledging the realities of basic accounting: surpluses somewhere must equal deficits elsewhere. Before this reality, huge quantities of implicitly normative "economics" dissolve.
It's not at all obvious to me that the quote has anything to do with that. The quote seems to be arguing "if you use current data, you would think X->Y. But if you used past data, you would think X->~Y, so we should be reluctant to use causal reasoning like this."
Because of an argument I came across earlier which failed spectacularly badly (which also had to do with China), I pattern-matched and said "wait, would we actually think X->~Y if we looked at past data?". According to the paper Lumifer found, at least, the answer is "no, we would think X->Y."
↑ comment by ChristianKl · 2014-09-24T10:11:02.141Z · LW(p) · GW(p)
What I had wanted to emphasize from both the quote and the interview, though, was the degree to which we can be far more rational about economics simply by acknowledging the realities of basic accounting: surpluses somewhere must equal deficits elsewhere.
Reminds me of: http://xkcd.com/793/
In general it's not very rational to look at a complex field and simply posit that if its practitioners followed some simple rules that you consider reasonable, they would do better.
↑ comment by bramflakes · 2014-09-23T17:50:11.843Z · LW(p) · GW(p)
I'm surprised by your surprise. It might be hindsight bias on my part, but not wanting to save money in a totalitarian Communist society makes perfect sense to me. Live for today, because tomorrow you might be locked up or the government might embark on some bone-headed 5-year economic plan and your money would be worthless.
Replies from: Vaniver
↑ comment by Vaniver · 2014-09-23T20:56:49.117Z · LW(p) · GW(p)
I'm surprised by your surprise. It might be hindsight bias on my part, but not wanting to save money in a totalitarian Communist society makes perfect sense to me. Live for today, because tomorrow you might be locked up or the government might embark on some bone-headed 5-year economic plan and your money would be worthless.
When you look at national savings, 5-year plans suggest that the savings rate should be higher, because the government is forcing people to invest money that they otherwise might have consumed. [Edit] Also, a feature of econ textbooks in the US from ~1950 to ~1990 was dire predictions that the USSR would overtake the US because the national savings rate in the USSR was higher than the US, and the Solow growth model suggests that GDP growth is proportional to savings rate. (Turns out, socialist countries are better at saving at the national level but worse at using those savings to achieve real growth.)
The actual heuristic that generated my surprise had to do with personality rather than incentives, though: I didn't think that even communism was enough to defeat the Chinese propensity to save, and it struck me as unlikely that the savings rate in China would be high from antiquity to 1900, drop for a few decades, and jump back to high.
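The Solow intuition referenced above can be sketched numerically. This is a toy simulation of the standard capital-accumulation rule k' = k + s·k^α − (n + δ)·k; all parameter values are arbitrary illustrative choices, not calibrated to any country:

```python
def solow_path(s: float, alpha: float = 0.3, delta: float = 0.05,
               n: float = 0.01, k0: float = 1.0, steps: int = 300) -> float:
    """Iterate capital per worker: k' = k + s*k**alpha - (n + delta)*k.

    Returns (approximately) the steady-state capital stock, since 300
    iterations is ample for convergence at these parameter values.
    """
    k = k0
    for _ in range(steps):
        k = k + s * k**alpha - (n + delta) * k
    return k

k_low = solow_path(s=0.10)   # low-savings economy
k_high = solow_path(s=0.30)  # high-savings economy

# Higher savings yields a much higher steady-state capital stock (and output
# k**alpha), but the growth-rate boost is transitional: once at steady state,
# both economies grow at the same exogenous rate.
print(f"steady-state k at s=0.10: {k_low:.2f}")
print(f"steady-state k at s=0.30: {k_high:.2f}")
```

This also illustrates the parenthetical caveat in the comment above: the model only converts savings into output via the production function, so an economy that saves heavily but allocates capital badly (a lower effective α or higher effective δ) gains far less than the headline savings rate suggests.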
Replies from: Lumifer
↑ comment by Lumifer · 2014-09-23T21:05:02.195Z · LW(p) · GW(p)
When you look at national savings, 5-year plans suggest that the savings rate should be higher, because the government is forcing people to invest money that they otherwise might have consumed.
As far as I understand command economies, it's not like the government is forcing people to invest their money, it's more like the government decides how much to allocate for consumption and pays it out as salaries -- and the rest it wastes... err, I mean uses for capital spending. In this case the "national saving rate" does not reflect any population preferences, only Politbureau considerations.
I would also be quite wary of savings statistics coming from communist times as I would expect most savings of actual people to live in shadow/underground economy and not be seen officially.
Replies from: Vaniver
↑ comment by Vaniver · 2014-09-23T21:23:12.418Z · LW(p) · GW(p)
As far as I understand command economies, it's not like the government is forcing people to invest their money, it's more like the government decides how much to allocate for consumption and pays it out as salaries -- and the rest it wastes... err, I mean uses for capital spending. In this case the "national saving rate" does not reflect any population preferences, only Politbureau considerations.
Agreed; I think the grandparent refers to money earned by people and spent by the Politbureau as "their" (i.e. the people's) money, which is an implicitly political use of language (because I'm libertarian).
I would also be quite wary of savings statistics coming from communist times as I would expect most savings of actual people to live in shadow/underground economy and not be seen officially.
Agreed.
↑ comment by sixes_and_sevens · 2014-09-22T12:50:39.952Z · LW(p) · GW(p)
I'd like to nominate "economics" and "economists" as a non-coherent category for making value judgements against.