Rationality Quotes Thread December 2015
post by elharo · 2015-12-02T11:28:55.845Z · LW · GW · Legacy · 78 comments
Another month, another rationality quotes thread. The rules are:
- Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
78 comments
Comments sorted by top scores.
comment by [deleted] · 2015-12-03T08:45:53.952Z · LW(p) · GW(p)
Reasoning can take us to almost any conclusion we want to reach, because we ask “Can I believe it?” when we want to believe something, but “Must I believe it?” when we don’t want to believe. The answer is almost always yes to the first question and no to the second.
--Jon Haidt, The Righteous Mind
↑ comment by Sarunas · 2015-12-05T12:37:08.501Z · LW(p) · GW(p)
I remember reading the idea expressed in this quote in an old LW post, one that predates Haidt's book (published in 2012), and the idea is probably older still.
In any case, I think this is a very good quote, because it highlights a bias that seems more prevalent than perhaps any other cognitive bias discussed here, and it motivates attempts to find better ways to reason and argue. If LessWrong had an introduction intended to motivate why we need better thinking tools, this idea could be presented very early, maybe even in the second or third paragraph.
↑ comment by Unnamed · 2015-12-05T20:28:33.993Z · LW(p) · GW(p)
I think psychologist Tom Gilovich is the original source of the "Can I?" vs. "Must I?" description of motivated reasoning. He wrote about it in his 1991 book How We Know What Isn't So.
For desired conclusions, we ask ourselves, "Can I believe this?", but for unpalatable conclusions we ask, "Must I believe this?"
comment by Curiouskid · 2015-12-04T20:37:46.174Z · LW(p) · GW(p)
Probably people have seen this before, but I really like it:
People often say that motivation doesn't last. Well, neither does bathing; that's why we recommend it daily.
comment by 27chaos · 2015-12-02T18:50:59.888Z · LW(p) · GW(p)
The key to avoiding rivalries is to introduce a new pole, which mediates your relationship to the antagonist. For me this pole is often Scripture. I renounce my claim to be thoroughly aligned with the pole of Scripture and refocus my attention on it, using it to mediate my relationship with the antagonistic party. Alternatively, I focus on a non-aggressive third party. You may notice that this same pattern is observed in the UK parliamentary system of the House of Commons, for instance. MPs don’t directly address each other: all of their interactions are mediated by and addressed to a non-aggressive, non-partisan third party – the Speaker. This serves to dampen antagonisms and decrease the tendency to fall into rivalry. In a conversation where such a ‘Speaker’ figure is lacking, you need mentally to establish and situate yourself relative to one. For me, the peaceful lurker or eavesdropper, Christ, or the Scripture can all serve in such a role. As I engage directly with this peaceful party and my relationship with the aggressive party becomes mediated by this party, I find it so much easier to retain my calm.
↑ comment by Ben Pace (Benito) · 2015-12-02T21:34:33.130Z · LW(p) · GW(p)
Having recently watched a few of these discussions/debates in the Commons on YouTube, I've noticed how the Speaker is able to temper the mood and add a little levity.
There is one popular political YouTube account called 'Incorrigible Delinquent', and he begins each of his uploads with the Speaker quite humorously saying "You are an incorrigible delinquent!"
comment by aausch · 2015-12-24T20:37:22.235Z · LW(p) · GW(p)
"Update: many people have read this post and suggested that, in the first file example, you should use the much simpler protocol of copying the file to modified to a temp file, modifying the temp file, and then renaming the temp file to overwrite the original file. In fact, that’s probably the most common comment I’ve gotten on this post. If you think this solves the problem, I’m going to ask you to pause for five seconds and consider the problems this might have. (...) The fact that so many people thought that this was a simple solution to the problem demonstrates that this problem is one that people are prone to underestimating, even they’re explicitly warned that people tend to underestimate this problem!" -- @danluu, "Files are hard"
comment by Lumifer · 2015-12-21T22:24:48.406Z · LW(p) · GW(p)
If anyone is trying to tell you it’s not complicated, be very, very suspicious.
-- Tyler Cowen
↑ comment by Richard_Kennaway · 2015-12-22T20:35:46.127Z · LW(p) · GW(p)
Seek simplicity and distrust it.
-- Alfred North Whitehead
↑ comment by Good_Burning_Plastic · 2015-12-27T10:43:04.606Z · LW(p) · GW(p)
There's this guy called William of Occam who must really be spinning in his grave right now.
↑ comment by g_pepper · 2015-12-27T14:45:08.660Z · LW(p) · GW(p)
I interpreted the Whitehead quote to mean that you should seek the simplest explanation that explains whatever it is you are trying to explain. This is consistent with Occam's Razor. I assumed that "distrust it" meant subjecting the explanation to additional tests to confirm or falsify it. So, I didn't see this quote as contradicting William of Occam; instead it builds on Occam's Razor to describe the essence of the scientific method.
This interpretation is supported if you look at the context of the quote:
The aim of science is to seek the simplest explanations of complex facts. We are apt to fall into the error of thinking that the facts are simple because simplicity is the goal of our quest. The guiding motto in the life of every natural philosopher should be, "Seek simplicity and distrust it."
↑ comment by Richard_Kennaway · 2015-12-27T17:23:10.256Z · LW(p) · GW(p)
Here also is Einstein:
It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.
Or in the pithier paraphrase usually quoted:
Everything should be made as simple as possible, but no simpler.
comment by ike · 2015-12-29T15:31:01.113Z · LW(p) · GW(p)
Each individual instance of outperformance can be put into its own coherent narrative, can be made to look logical and earned on its own terms. But when you throw them together it's hard to escape the impression of a coin-flipping contest with a song and dance at the end.
comment by Richard_Kennaway · 2015-12-26T09:46:19.619Z · LW(p) · GW(p)
If you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him.
comment by [deleted] · 2015-12-14T14:03:50.792Z · LW(p) · GW(p)
I want you to be the Admiral Nagumo of my staff. I want your every thought, every instinct as you believe Admiral Nagumo might have them. You are to see the war, their operations, their aims, from the Japanese viewpoint and keep me advised what you are thinking about, what you are doing, and what purpose, what strategy, motivates your operations. If you can do this, you will give me the kind of information needed to win this war.
-- Admiral Nimitz, quoted in Edwin Layton, And I Was There (1985), p. 357.
comment by flexive · 2015-12-05T10:17:02.710Z · LW(p) · GW(p)
"Direct action is not always the best way. It is a far greater victory to make another see through your eyes than to close theirs forever."
Kreia, KOTOR 2
↑ comment by MarkusRamikin · 2015-12-11T12:04:23.737Z · LW(p) · GW(p)
We should have a thread for anti-rationality quotes some time. KOTOR 2 would be a gold mine. :)
"There is a faction of meatbags called the Sith. They want what any rational meatbag would want: the power to assassinate anyone they choose at any time."
HK-47, assassin droid.
comment by dspeyer · 2015-12-03T05:35:44.839Z · LW(p) · GW(p)
I think the common thread in a lot of these [horrible] relationships is people who have managed to go through their entire lives without realizing that “Person did Thing, which caused me to be upset” is not the same thing as “Person did something wrong”, much less “I have a right to forbid Person from ever doing Thing again”.
--Ozymandias (most of the post is unrelated)
comment by common_law · 2015-12-12T04:57:58.993Z · LW(p) · GW(p)
Looking for mental information in individual neuronal firing patterns is looking at the wrong level of scale and at the wrong kind of physical manifestation. As in other statistical dynamical regularities, there are a vast number of microstates (i.e., network activity patterns) that can constitute the same global attractor, and a vast number of trajectories of microstate-to-microstate changes that will tend to converge to a common attractor. But it is the final quasi-regular network-level dynamic, like a melody played by a million-instrument orchestra, that is the medium of mental information. - Terrence W. Deacon, Incomplete Nature: How Mind Emerged from Matter, pp. 516-517.
comment by VoiceOfRa · 2015-12-03T23:29:52.507Z · LW(p) · GW(p)
Harms take longer to show up & disprove than benefits. So evidence-based medicine disproportionately channels optimism.
↑ comment by DanArmak · 2015-12-06T15:35:27.591Z · LW(p) · GW(p)
That seems like selection bias.
You do a lot of studies and experiments, and filter out most proposed medicine because it causes harm quickly, or doesn't cause benefits quickly enough or at all. Then you market whatever survived testing. Obviously, if it's still harmful, the harms will show up only slowly, while the benefits will show up quickly - otherwise you would have filtered it out before it reached the consumer.
This is like saying engineering disproportionately channels optimism, because almost all the appliances you buy in the store work now and only fail later. If they had failed immediately, they would have been flagged in QC and never got to the shop.
↑ comment by ChristianKl · 2015-12-06T21:42:39.155Z · LW(p) · GW(p)
If an appliance you buy fails, then you know that it fails. If a drug reduces your IQ by 5 points, you won't know. Drugs also don't get tested for whether or not they reduce your IQ by 5 points.
↑ comment by VoiceOfRa · 2015-12-06T21:14:27.031Z · LW(p) · GW(p)
That seems like selection bias.
Yes, it's still a bias.
This is like saying engineering disproportionately channels optimism, because almost all the appliances you buy in the store work now and only fail later. If they had failed immediately, they would have been flagged in QC and never got to the shop.
The difference is, if they fail, you can always buy a new appliance. You can't buy a new body.
↑ comment by Good_Burning_Plastic · 2015-12-13T18:54:48.207Z · LW(p) · GW(p)
The difference is, if they fail, you can always buy a new appliance.
For some underwhelming value of "always", and anyway appliances aren't all that engineering makes.
Off the top of my head, cases where "harms take longer to show up & disprove than benefits" outside medicine include leaded gasoline, chlorofluorocarbons, asbestos, cheap O-rings in space shuttles, the 1940 Tacoma Narrows Bridge, and the use of two-digit year numbers...
↑ comment by VoiceOfRa · 2015-12-13T19:20:32.515Z · LW(p) · GW(p)
cheap O-rings in space shuttles
Look at Feynman's analysis. I'd say this is a good example of disproportionate channeling of optimism.
↑ comment by Good_Burning_Plastic · 2015-12-14T03:16:38.620Z · LW(p) · GW(p)
Yes. My point was that disproportionate channeling of optimism isn't something specific to medicine (let alone to evidence-based medicine).
EDIT: Hmm, I guess I originally took "disproportionally" to mean "compared to how much other things channel optimism" whereas it'd make more sense to interpret it as "compared to how much medicine channels pessimism".
↑ comment by Glen · 2015-12-04T19:40:59.907Z · LW(p) · GW(p)
Are there any other systems for judging medicine that more accurately reflect reality? I know very little about medicine in general, but it would be interesting to hear about any alternate methods that get good results.
↑ comment by ChristianKl · 2015-12-04T20:03:41.543Z · LW(p) · GW(p)
It's hard to say how effective various alternative styles of medicine happen to be.
There's research suggesting Mormons can distinguish other Mormons from non-Mormons by looking at whether the other person's skin looks healthy. Mormons also seem to live 6 to 10 years longer than other Americans.
On the other hand, the nature of claims like this is that it's hard to have reliable knowledge about them.
comment by [deleted] · 2015-12-24T14:40:55.741Z · LW(p) · GW(p)
Replies from: None"The first step is to establish that something is possible; then probability will occur."
↑ comment by [deleted] · 2015-12-24T14:51:24.644Z · LW(p) · GW(p)
"It is a mistake to hire huge numbers of people to get a complicated job done. Numbers will never compensate for talent in getting the right answer (two people who don't know something are no better than one), will tend to slow down progress, and will make the task incredibly expensive."
Merry Christmas beloved LessWrong family. I think I finally get the format of these threads. How did I not read them properly earlier!
↑ comment by [deleted] · 2015-12-24T15:03:22.836Z · LW(p) · GW(p)
"My biggest mistake is probably weighing too much on someone's talent and not someone's personality. I think it matters whether someone has a good heart."
I recently watched a company go from a billion in revenues to zero when a founder stole $90 million from the company.
Integrity, humility, and doing your best are by far the most important considerations when evaluating whether to work for someone.
comment by [deleted] · 2015-12-03T08:58:18.912Z · LW(p) · GW(p)
If you want to understand another group, follow the sacredness.
--Jon Haidt, The Righteous Mind
comment by roland · 2015-12-10T15:44:10.725Z · LW(p) · GW(p)
If the rule you followed brought you to this, of what use was the rule?
-- The killer shortly before killing his victim in No Country for Old Men
↑ comment by gwern · 2016-01-02T19:04:13.351Z · LW(p) · GW(p)
"A well-laid plan is always to my mind most profitable; even if it is thwarted later, the plan was no less good, and it is only chance that has baffled the design; but if fortune favors one who has planned poorly, then he has gotten only a prize of chance, and his plan was no less bad."
--Artabanus, uncle of Xerxes; book 7 of Herodotus's Histories (I could swear I'd seen this on a LW quote thread before, but searching turns up nothing.)
↑ comment by Glen · 2015-12-10T15:54:34.385Z · LW(p) · GW(p)
(To make it clear: I have never seen the movie in question, so this is not a comment on the specifics of what happened) Just because it turned out poorly doesn't make it a bad rule. It could have had a 99% chance to work out great, but the killer is only seeing the 1% where it didn't. If you're killing people, then you can't really judge their rules, since it's basically a given that you're only going to talk to them when the rules fail. Everything is going to look like a bad rule if you only count the instances where it didn't work. Without knowing how many similar encounters the victim avoided with their rule, I don't see how you can make a strong case that it's a bad (or good) rule.
↑ comment by Lumifer · 2015-12-10T16:19:48.324Z · LW(p) · GW(p)
Just because it turned out poorly doesn't make it a bad rule.
That kinda depends on the point of view.
If you take the frequentist approach and think about limits as n goes to infinity, sure, a single data point will tell you very little about the goodness of the rule.
But if it's you, personally you, who is looking at the business end of a gun, the rule indeed turned out to be very very bad. I think the quote resonates quite well with this.
Besides, consider this. Let's imagine a rule which works fine 99% of the time, but in 1% of the cases it leaves you dead. And let's say you get to apply this rule once a week. Is it a good rule? Nope, it's a very bad rule. Specifically, your chances of being alive at the end of the year are only 0.99^52 = about 60%, not great. Being alive after ten years? About half a percent.
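Lumifer's arithmetic checks out; here is a trivial Python sketch, using the comment's own hypothetical 1%-per-week fatality rate:

```python
# A rule that fails fatally 1% of the time, applied once a week
# (the hypothetical from the comment above).
p_survive_once = 0.99

one_year = p_survive_once ** 52          # ~0.59, i.e. about 60%
ten_years = p_survive_once ** (52 * 10)  # ~0.0054, i.e. about half a percent

print(f"alive after one year:  {one_year:.1%}")
print(f"alive after ten years: {ten_years:.2%}")
```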
↑ comment by roland · 2015-12-10T16:14:27.388Z · LW(p) · GW(p)
I agree. But this is not how I saw the quote. For me it is just a cogent way of asking, "Is your application of rationality leading to success?"
↑ comment by Richard_Kennaway · 2015-12-13T14:05:26.245Z · LW(p) · GW(p)
Shorn of context, it could be. But what is the context? I gather from the Wikipedia plot summary that Chigurh (the killer) is a hit-man hired by drug dealers to recover some stolen drug money, but instead kills his employers and everyone else that stands in the way of getting the money himself. To judge by the other quotes in IMDB, when he's about to kill someone he engages them in word-play that should not take in anyone in possession of their rational faculties for a second, in order to frame what he is about to do as the fault of his victims.
Imagine someone with a gun going out onto the street and shooting at everyone, while screaming, "If the rule you followed brought you to this, of what use was the rule?" Is it still a rationality quote?
comment by [deleted] · 2015-12-02T23:20:43.833Z · LW(p) · GW(p)
And we must study through reading, listening, discussing, observing and thinking. We must not neglect any one of those ways of study. The trouble with most of us is that we fall down on the latter -- thinking -- because it's hard work for people to think. And, as Dr. Nicholas Murray Butler said recently, 'all of the problems of the world could be settled easily if men were only willing to think.'
↑ comment by ChristianKl · 2015-12-06T15:45:57.739Z · LW(p) · GW(p)
These days we often have people who do think but don't do the others well enough.
comment by 27chaos · 2015-12-02T18:49:46.756Z · LW(p) · GW(p)
The basic key that I follow when engaging with antagonistic individuals is to recognize that we will always tend to imitate someone. In mimetic rivalries, the antagonism can come to dominate so much that the third pole (and there is always a third pole – a relationship, an issue, a symptom, etc.) becomes interchangeable. The key to avoiding rivalries is to introduce a new pole, which mediates your relationship to the antagonist. For me this pole is often Scripture. I renounce my claim to be thoroughly aligned with the pole of Scripture and refocus my attention on it, using it to mediate my relationship with the antagonistic party. Alternatively, I focus on a non-aggressive third party. You may notice that this same pattern is observed in the UK parliamentary system of the House of Commons, for instance. MPs don’t directly address each other: all of their interactions are mediated by and addressed to a non-aggressive, non-partisan third party – the Speaker. This serves to dampen antagonisms and decrease the tendency to fall into rivalry. In a conversation where such a ‘Speaker’ figure is lacking, you need mentally to establish and situate yourself relative to one. For me, the peaceful lurker or eavesdropper, Christ, or the Scripture can all serve in such a role. As I engage directly with this peaceful party and my relationship with the aggressive party becomes mediated by this party, I find it so much easier to retain my calm.
comment by redlizard · 2015-12-02T19:08:40.164Z · LW(p) · GW(p)
Consensus tends to be dominated by those who will not shift their purported beliefs in the face of evidence and rational argument.
↑ comment by gjm · 2015-12-02T23:06:06.891Z · LW(p) · GW(p)
This appears to be empirically incorrect, at least in some fields. A few examples:
- Creationists are much less willing to adjust their beliefs on the basis of evidence and argument than scientifically-minded evolutionists, but evolution rather than special creation is the consensus position these days.
- It looks to me (though I confess I haven't looked super-hard) as if the most stubborn-minded economists are the adherents of at-least-slightly-fringey theories like "Austrian" economics rather than the somewhere-between-Chicago-and-Keynes mainstream.
- Consensus views in hard sciences like physics are typically formed by evidence and rational argument.
↑ comment by Viliam · 2015-12-06T21:52:46.703Z · LW(p) · GW(p)
Depends on what you mean by "consensus". For example, in some organizations it means "we will not make a decision until literally everyone agrees with it". In which case, stubborn people make all the decisions (until the others get sufficiently pissed off and fire them).
↑ comment by gjm · 2015-12-06T22:33:18.398Z · LW(p) · GW(p)
Probably true. But I don't think that's the sort of thing Jim is talking about in the post redlizard was quoting from; do you?
↑ comment by Viliam · 2015-12-06T23:13:07.335Z · LW(p) · GW(p)
Oh. I hadn't followed the link before commenting.
Now I did... and I don't really see the connection between the article and consensus. The most prominent example is how managers misunderstood the technical issues with Challenger: but that's about putting technically unsavvy managers into positions of power over engineers, not about consensus.
(I wonder if this is an example of a pattern: "Make a statement. Write an article mostly about something else, using arguments that a reader will probably agree with. At the end, a careless reader is convinced about the statement.")
↑ comment by VoiceOfRa · 2015-12-07T19:17:38.843Z · LW(p) · GW(p)
but that's about putting technically unsavvy managers into positions of power over engineers,
Technically unsavvy managers who insisted that the engineers tell them what they wanted to hear, i.e., who insisted that they be included in the consensus and then refused to shift their position.
↑ comment by DanArmak · 2015-12-06T15:42:04.456Z · LW(p) · GW(p)
We have a special name for this; it's called science, and it's rather rare. It might still be a pretty good generalization of all human behavior to say that consensus tends to be dominated by those who won't change their opinion.
Actually, I don't think it's a good generalization for reasons other than science. Most conflicts or debates devolve to politics, where people support someone instead of some opinion or position. And in politics, the top person or party is often replaced by a different one.
comment by [deleted] · 2015-12-19T02:38:46.168Z · LW(p) · GW(p)
"For it is easy to criticise and break down the spirit of others, but to know yourself takes maybe a lifetime" Bruce Lee
"Remember, my friend to enjoy your planning as well as your accomplishment, for life is too short for negative energy". -Bruce Lee
"We should devote outselves to being self-sufficient and must not depend upon the external rating by others for our happiness" -Bruce
"Remember, my friend, to ejoy your plannng as well as your accomplishment, for life is too short for negative neergy" Lee *Just realised after writing that I already copied out this quote. Keeping it here as a record of my failing memory :(
comment by [deleted] · 2015-12-26T02:38:43.418Z · LW(p) · GW(p)
Because we live in a culture that fears being alone, being rejected, feeling unworthy and unlovable, we confuse love with attachment, dependency, sexual attraction, romantic illusion, lust, infatuation, or obligation.
comment by [deleted] · 2015-12-31T15:30:14.796Z · LW(p) · GW(p)
How to predict if bombing ISIS in Syria is a good idea:
1. Draw up a comprehensive spreadsheet of every 'Western' intervention (and almost-but-not-quite-intervention) in a foreign country.
2. Rate each case by how similar it is to the present case (e.g. location, how long ago it was, civil war vs no civil war, religious war vs non-religious war, how many countries support the intervention, cultural differences between the countries involved, level of involvement, etc).
3. Rate how much each intervention (or decision not to intervene) helped or hurt the situation, in retrospect, on a scale from -10 to +10.
4. Take a weighted average.
5. If on average intervention makes things worse, do nothing. If it makes things better, decide if the level of improvement created in such cases is worth the cost in dollars and dead people.
-- Robert Wiblin
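For concreteness, here is a minimal sketch of the arithmetic Wiblin is proposing. The cases and numbers are invented purely for illustration; only the similarity weighting and the -10 to +10 ratings come from the quote:

```python
# Invented examples: (case, similarity to the present case in [0, 1],
# retrospective outcome rating in [-10, +10]).
past_cases = [
    ("case A", 0.8, -6),
    ("case B", 0.3, +4),
    ("case C", 0.6, -2),
]

weighted_sum = sum(sim * outcome for _, sim, outcome in past_cases)
total_weight = sum(sim for _, sim, _ in past_cases)
expected_outcome = weighted_sum / total_weight  # ~-2.82 on these made-up data

print(f"similarity-weighted average outcome: {expected_outcome:+.2f}")
# Negative: on this fabricated record, similar interventions tended to hurt,
# so the quoted procedure would say "do nothing".
```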
↑ comment by Anders_H · 2016-01-01T00:16:10.285Z · LW(p) · GW(p)
Rate how much each intervention (or decision not to intervene) helped or hurt the situation, in retrospect, on a scale from -10 to +10.
How do you plan to do this without counterfactual knowledge?
↑ comment by [deleted] · 2016-01-01T06:28:40.681Z · LW(p) · GW(p)
https://en.wikipedia.org/wiki/Quasi-experiment
Take your pick. It requires a good handle on experiment design, but biostatisticians do this day in, day out. Hopefully risk analysts in defense institutions do this too.
↑ comment by Anders_H · 2016-01-01T18:19:55.895Z · LW(p) · GW(p)
The original quote said to rate each intervention by how much it helped or hurt the situation, i.e. its individual-level causal effect. None of those study designs will help you with that: They may be appropriate if you want to estimate the average effect across multiple similar situations, but that is not what you need here.
This is a serious question. How do you plan to rate the effectiveness of things like the decision to intervene in Libya, or the decision not to intervene in Syria, under profound uncertainty about what would have happened if the alternative decision had been made?
↑ comment by [deleted] · 2016-01-02T05:00:50.878Z · LW(p) · GW(p)
The original quote said to rate each intervention by how much it helped or hurt the situation, i.e. its individual-level causal effect. None of those study designs will help you with that: They may be appropriate if you want to estimate the average effect across multiple similar situations, but that is not what you need here.
Yes, I concede that cross-level inferences between aggregate causes (averages over multiple similar situations) and individual-level causes have less predictive power than inferences within a single level. However, I reckon it's the best available means of making such an inference.
This is a serious question. How do you plan to rate the effectiveness of things like the decision to intervene in Libya, or the decision not to intervene in Syria, under profound uncertainty about what would have happened if the alternative decision had been made?
Analysts have tools to model and simulate scenarios. Analysis of competing hypotheses is a staple of intelligence methodology. It's also used by earth scientists, but I haven't seen it used elsewhere. Based on this approach, analysts can:
- make predictions about outcomes in Libya both with and without intervention
- when they choose to intervene or not to intervene, record the actual outcomes
- over the long term, by comparing predicted and actual outcomes, decide whether to re-adjust their predictions post hoc for the counterfactual branch
under profound uncertainty about what would have happened if the alternative decision had been made?
I'm not trying to downplay the level of uncertainty. Just that the methodological considerations remain constant.
↑ comment by ChristianKl · 2016-01-01T21:14:25.874Z · LW(p) · GW(p)
biostatisticians
Just for completion, Anders_H is one of those guys.
↑ comment by [deleted] · 2016-01-02T04:50:49.700Z · LW(p) · GW(p)
How self-referentially absurd. More precisely, epidemiologists do this day in, day out using biostatistical models, then applying causal inference (the counterfactual-knowledge part included). I said biostatisticians because epidemiology isn't in the common vernacular. Ironically, counterfactual knowledge is, to those familiar with the distinction, distinctly removed from the biostatistical domain.
Just for the sake of intellectual curiosity, I wonder what kind of paradox was just invoked prior to this clarification.
It wouldn't be the epimenides paradox since that refers to an individual making a self-referentially absurd claim:
The Epimenides paradox is the same principle as psychologists and sceptics using arguments from psychology claiming humans to be unreliable. The paradox comes from the fact that the psychologists and sceptics are human themselves, meaning that they state themselves to be unreliable.
Anyone?
↑ comment by ChristianKl · 2016-01-02T09:29:22.200Z · LW(p) · GW(p)
More precisely, epidemiologists do this day in day out using biostatistical models, then applying causal inference (the counterfactual knowledge part incl.)
Yes, Anders_H is a Doctor of Science in Epidemiology. He's someone worth listening to when he tells you about what can and can't be done with experiment design.
↑ comment by [deleted] · 2016-01-03T09:17:10.040Z · LW(p) · GW(p)
Oooh, an appeal to authority. If that is the case he is no doubt highly accomplished. However, that need not translate to blind deference.
This is a text conversation, so rhetorical questions aren't immediately apparent. Moreover, we're in a community that explicitly celebrates reason over other modes of rhetoric. So, I interpreted his question about counterfactual conditions as sincere rather than disingenuous.
↑ comment by ChristianKl · 2016-01-03T09:49:02.121Z · LW(p) · GW(p)
Oooh, an appeal to authority. If that is the case he is no doubt highly accomplished. However, that need not translate to blind deference.
Yes, but if you disagree you can't simply point to "biostatisticians do this day in, day out" and a bunch of Wikipedia articles; you have to actually argue the merits of why you think those techniques can be used in this case.
↑ comment by Richard_Kennaway · 2015-12-31T19:16:08.804Z · LW(p) · GW(p)
If it makes things better, decide if the level of improvement created in such cases is worth the cost in dollars and dead people.
That is a tendentious way of comparing the two: a cold, abstract "level of improvement" against the more concrete "dollars" and very concrete "dead people". It suggests the writer is predisposed to find that intervention is a bad idea.
But what is improvement, but resources then available to apply to better things, and live people living better lives?
And why the reference class "Western"?
↑ comment by Vaniver · 2015-12-31T20:16:38.865Z · LW(p) · GW(p)
And why the reference class "Western"?
Presumably, Wiblin is talking about Western bombing of ISIS in Syria. If one finds that Turkish interventions have been effective and American interventions haven't, say, then that's an argument that Americans shouldn't intervene now (but Turks should).
↑ comment by Richard_Kennaway · 2016-01-01T13:54:18.332Z · LW(p) · GW(p)
Presumably, Wiblin is talking about Western bombing of ISIS in Syria. If one finds that Turkish interventions have been effective and American interventions haven't, say, then that's an argument that Americans shouldn't intervene now (but Turks should).
Choose your reference class, get the result you want. Is Turkey "Western" or not? It wants to join the EU (but hasn't been admitted yet). Russia is bombing Syria. Why exclude it from the class of foreign interventions? For that matter, I don't know what military actions, if any, Turkey has taken in Syria, but that would also be a foreign intervention.
Not to mention the smallness of N in the proposed study and the elastic assessment.
I googled some of the phrases in the OP but only got hits to the OP. Is this even a quote?
↑ comment by Jiro · 2015-12-31T16:13:49.760Z · LW(p) · GW(p)
Rating each decision on a scale of -10 to +10 and then taking a weighted average is a recipe for biasing the result against intervention, since you've created a hard upper limit on how much you can count an intervention as helping: you'll count a successful intervention as +10 and be unable to count a successful intervention that does even more good as more than +10. (There is a similar problem at the low end of the scale, but that doesn't affect the final result, since you can't go below zero intervention.)
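A toy illustration of the truncation effect Jiro describes, with invented numbers:

```python
# One runaway success and two modest failures (invented numbers).
true_outcomes = [50, -8, -9]

# Forcing everything onto a hard -10..+10 scale truncates only the success.
capped = [max(-10, min(10, x)) for x in true_outcomes]

print(sum(true_outcomes) / len(true_outcomes))  # +11.0: intervening looks good
print(sum(capped) / len(capped))                # -2.33: intervening looks bad
```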
This also produces bad results in cases where the intervention failed because it was insufficient. You'd end up concluding that intervention is bad when it may just be that insufficient intervention is bad. This method has clause 2 to cover similarity of case, but not similarity of intervention, and at any rate "similarity" is a fuzzy concept. If bombing half the country is a disaster and bombing a whole country succeeds, is bombing half a country "similar" to bombing a whole country? (Actually, you usually end up compressing all the dispute over intervention into a dispute over how similar two cases are.)
And it's generally a bad idea to put on a numerical scale things that you can't actually measure numerically. It gives a false appearance of accuracy and precision, like a company executive who wants to see figures for his company improve but doesn't actually care where the figures come from.
Also, "level of improvement created" is subject to noise. It is possible for an improvement to fail for reasons unrelated to the effectiveness of the intervention, like if the country gets hit by a meteor the next day (or more realistically, gets invaded or attacked the next day).
↑ comment by The_Lion · 2015-12-31T23:31:14.026Z · LW(p) · GW(p)
Basically one huge problem here is that there isn't enough data compared to the number of variables involved.
Not to mention that this is a problem in what Taleb would call Extremistan, i.e., the distributions of possible outcomes from intervening or not intervening are fat-tailed and include a lot of rare possibilities that haven't yet shown up in the data at all.
comment by [deleted] · 2015-12-26T02:22:30.514Z · LW(p) · GW(p)
" 'Bill is wrong, but bill works hard, so even though its the wrong solution, he's likely to succeed', and that the best compliment I ever received"
-Bill Gates, quoting someone else
If you were to rank order and say, I'm going to start a company, what's the highest return on investment for the risk? Space and Cars would be at the bottom.
-- Elon Musk, in the same video