Rationality Quotes October 2014
post by Tyrrell_McAllister · 2014-10-01T23:02:20.410Z · LW · GW · Legacy · 238 comments
Another month, another rationality quotes thread. The rules are:
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
- Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
Comments sorted by top scores.
comment by Jack_LaSota · 2014-10-11T04:18:21.496Z · LW(p) · GW(p)
A novice asked master Banzen: “What separates the monk from the master?”
Banzen replied: “Ten thousand mistakes!”
The novice, not understanding, sought to avoid all error. An abbot observed and brought the novice to Banzen for correction.
Banzen explained: “I have made ten thousand mistakes; Suku has made ten thousand mistakes; the patriarchs of Open Source have each made ten thousand mistakes.”
Asked the novice: “What of the old monk who labors in the cubicle next to mine? Surely he has made ten thousand mistakes.”
Banzen shook his head sadly. “Ten mistakes, a thousand times each.”
comment by lukeprog · 2014-10-04T21:28:52.536Z · LW(p) · GW(p)
Prominent altruists aren't the people who have a larger care-o-meter, they're the people who have learned not to trust their care-o-meters... Nobody has [a care-o-meter] capable of faithfully representing the scope of the world's problems. But the fact that you can't feel the caring doesn't mean that you can't do the caring.
comment by Salivanth · 2014-10-02T23:49:31.028Z · LW(p) · GW(p)
The Courage Wolf looked long and slow at the Weasley twins. At length he spoke, "I see that you possess half of courage. That is good. Few achieve that."
"Half?" Fred asked, too awed to be truly offended.
"Yes," said the Wolf, "You know how to heroically defy, but you do not know how to heroically submit. How to say to another, 'You are wiser than I; tell me what to do and I will do it. I do not need to understand; I will not cost you the time to explain.' And there are those in your lives wiser than you, to whom you could say that."
"But what if they're wrong?" George said.
"If they are wrong, you die," the Wolf said plainly, "Horribly. And for nothing. That is why it is an act of courage."
- HPMOR omake by Daniel Speyer.
↑ comment by MarkusRamikin · 2014-10-03T16:58:16.771Z · LW(p) · GW(p)
Nice. Where did you find that? Either Uncle Google is failing me, or I am failing Uncle Google.
Replies from: Salivanth, Flipnash↑ comment by [deleted] · 2014-10-04T02:44:18.408Z · LW(p) · GW(p)
I honestly cannot see how the mere existence of people wiser than myself constitutes a valid reason to turn off my brain and obey blindly. The vast majority of all historical incidences of blind obedience have ended up being Bad Ideas.
Replies from: Salivanth, dspeyer, DanielLC, Emile↑ comment by Salivanth · 2014-10-04T05:55:52.350Z · LW(p) · GW(p)
I believe this lesson is designed for crisis situations where the wiser person taking the time to explain could be detrimental. For example, a soldier believes his commander is smarter than him and possesses more information than he does. The commander orders him to do something in an emergency situation that appears stupid from his perspective, but he does it anyway, because he chooses to trust his commander's judgement over his own.
Under normal circumstances, there is of course no reason why a subordinate shouldn't be encouraged to ask why they're doing something.
Replies from: elharo↑ comment by elharo · 2014-10-04T14:39:49.951Z · LW(p) · GW(p)
I'm not sure that's the real reason a soldier, or someone in a similar position, should obey their leader. In circumstances that rely on a group of individuals behaving coherently, it is often more important that they work together than that they work in the optimal way. That is, action is coordinated by assigning one person to make the decision. Even if this person is not the smartest or best informed in the situation, the results achieved by following orders are likely to be better than by each individual doing what they personally think is best.
In less pressing situations, it is of course reasonable to talk things out amongst a team and see if anyone has a better idea. However, even then it's common for there to be more than one good way to do something. It is usually better to let the designated leader pick an acceptable solution rather than spend a lot of time arguing about the best possible solution. And unless the chosen solution is truly awful (not just worse but actively wrong), it is usually better to go along with the leader's designated solution than to go off in a different direction.
↑ comment by dspeyer · 2014-10-06T18:16:28.123Z · LW(p) · GW(p)
"It can get worse, though, can't it?" Fred said, "Isn't that sort of following how people wound up working for Grindlewald?"
"I am talking to you, not to those people. Have you ever come close to doing evil through excess obedience?" the Wolf asked.
"We've hardly ever obeyed at all," George said.
The Wolf waited for the words to sink in.
"But not every act of courage is right," Fred said, "Just because someone is wiser than us doesn't seem like a reason to obey them blindly."
"If one who is wiser than you tells you to do something you think is wrong, what do you conclude?" the Wolf asked patiently.
"That they made a mistake," George said, as if it were obvious.
"Or?" the wolf said.
There was silence. The Wolf's eyes bore into the twins. It was clearly prepared to wait until they found the answer or the castle collapsed.
"Or it could... conceivably... mean we've made... some kind of mistake," Fred muttered at last.
"And which seems more likely?"
"Wisdom isn't everything," George rallied, "maybe we know something they don't, or they got careless --"
"Good things to think about," the Wolf interrupted, "but are you capable of thinking about them?"
"What do you mean?" Fred asked.
"Can you take seriously the idea that you might be wrong? Can you even think of it without my help?"
"We'll try," George said.
"There's more options, though," Fred though aloud, "We don't have to decide on our own whether we're wrong or they are -- we could talk to them. Couldn't we?"
"Sometimes you can," the Wolf said, "and the benefits are obvious. Can you see the costs?"
"It takes time, that we sometimes don't have" George said.
"It could give you all away -- if you're trying to sneak past somebody and you start whispering, I mean," Fred said.
"And it makes extra work for the leader. Overwhelming work if there are many followers," the Wolf added.
"So it's another tradeoff," George said.
"Now you understand. But understanding now and in this place is easy. What is hard is to continue to understand. To make the best choice you can, when all paths may run ill, and one ill fills you with fear but another is only words to you. You have the understanding to make that choice, but do you have the courage?
Replies from: None↑ comment by [deleted] · 2014-10-06T20:25:20.806Z · LW(p) · GW(p)
Unfortunately, the Courage Wolf's existence proof for "people wiser than you" is nonconstructive: he has failed to give evidence that any particular person is wiser, and thus should be trusted.
Replies from: dspeyer↑ comment by dspeyer · 2014-10-06T20:44:38.119Z · LW(p) · GW(p)
How to recognize someone wiser than you is indeed left as an exercise for the reader. And, yes, there will always be uncertainty, but you handle uncertainty in tradeoffs all the time.
Are you seriously claiming the Weasley twins are the wisest characters in HPMoR?
Replies from: None↑ comment by [deleted] · 2014-10-07T11:19:57.920Z · LW(p) · GW(p)
Are you seriously claiming the Weasley twins are the wisest characters in HPMoR?
They already listen to Dumbledore and McGonagall, they're already wary of Quirrell, and frankly my actual wisdom rating for Harry (as opposed to raw intelligence that might eventually become wisdom with good training) is quite low.
(You know that the only statements Eliezer himself actually endorses are those made about science and those made by Godric Gryffindor, right?)
↑ comment by Emile · 2014-10-04T18:31:33.179Z · LW(p) · GW(p)
Do you have evidence to back that up? Seems to me that organisations with obedient members usually outperform those whose members question every decision; the exception possibly being those organisations that depend on their (non-leader) members being creative (e.g. software development), but those are a pretty recent development.
Replies from: None, Strange7↑ comment by [deleted] · 2014-10-05T03:16:27.617Z · LW(p) · GW(p)
the exception possibly being those organisations that depend on their (non-leader) members being creative (e.g. software development), but those are a pretty recent development.
No, they are not a pretty recent development at all. The historical common-case is leaders taking credit for the good thinking of their underlings.
And, frankly, your underestimation of the necessary intelligent thought to run most organizations is kinda... ugh.
Replies from: Emile↑ comment by Emile · 2014-10-09T17:08:49.965Z · LW(p) · GW(p)
No, they are not a pretty recent development at all. The historical common-case is leaders taking credit for the good thinking of their underlings.
I agree that there are (probably a lot of) cases where creative thinking from rank-and-file members helps the organization as a whole; however my claim is that obedience also helps the organisation in other ways (coordinated action, less time spent on discussion, fewer changes of direction), and cases where the first effect is stronger than the second were rare until recently.
i.e. (content warning: speculation and simplification!) you may have had medieval construction companies/guilds where low-level workers were told to Just Obey Or Else, and when they had good ideas supervisors took credit, but it's likely that if you had switched their organization to a more "democratic" one like (some) modern organisations, the organization as a whole would have performed less well.
I don't have any in-depth knowledge of the history of organization, I just think that "The vast majority of all historical incidences of blind obedience have ended up being Bad Ideas" is a nice-sounding slogan but not historically true.
And, frankly, your underestimation of the necessary intelligent thought to run most organizations is kinda... ugh.
I specifically referred to non-leader members, i.e. rank-and-file. Which is, like, the opposite of what you seem to be reading into my comment.
Replies from: None↑ comment by [deleted] · 2014-10-10T15:00:49.984Z · LW(p) · GW(p)
I specifically referred to non-leader members, i.e. rank-and-file. Which is, like, the opposite of what you seem to be reading into my comment.
No, I was referring to the rank-and-file as well.
I don't have any in-depth knowledge of the history of organization,
Then we should ask someone who does.
it's likely that if you had switched there organization to a more "democratic" one like (some) modern organisations, the organization as a whole would have performed less well.
Then why did we switch, and why are our organizations more efficient in correlation with being more democratic?
Replies from: Emile, Azathoth123↑ comment by Emile · 2014-10-10T16:43:19.397Z · LW(p) · GW(p)
Then why did we switch, and why are our organizations more efficient in correlation with being more democratic?
More education and literacy; a more complex world (required paperwork for doing anything...); more knowledge work.
↑ comment by Azathoth123 · 2014-10-11T01:43:33.370Z · LW(p) · GW(p)
Then why did we switch, and why are our organizations more efficient in correlation with being more democratic?
Truth of claim not in evidence.
Replies from: None↑ comment by [deleted] · 2014-10-12T06:56:03.625Z · LW(p) · GW(p)
Claim at least partially in evidence. Methinks your prior doth protest too much.
Replies from: Azathoth123↑ comment by Azathoth123 · 2014-10-12T19:46:03.454Z · LW(p) · GW(p)
Then why haven't worker cooperatives replaced corporations as the main economic form?
Replies from: None↑ comment by [deleted] · 2014-10-14T14:39:23.931Z · LW(p) · GW(p)
Because the correct trade-off between ability to raise expansion capital via selling stock and maintaining worker control has not yet been achieved. Most current worker coops, for instance, do not have any structure for selling nonvoting stock, so they face a lot of difficulty in raising capital to expand.
Replies from: Lumifer, Azathoth123↑ comment by Azathoth123 · 2014-10-14T23:48:45.140Z · LW(p) · GW(p)
so they face a lot of difficulty in raising capital to expand.
How would a worker controlled coop expand? Would the new workers be given the same voting rights as the original workers? If so you have to ensure that the new workers have the same vision for how the coop should be run. Also, what do you do if market conditions require a contraction?
comment by JQuinton · 2014-10-07T16:41:37.326Z · LW(p) · GW(p)
When I was 16, I wanted to follow in my grandfather's footsteps. I wanted to be a tradesman. I wanted to build things, and fix things, and make things with my own two hands. This was my passion, and I followed it for years. I took all the shop classes at school, and did all I could to absorb the knowledge and skill that came so easily to my granddad. Unfortunately, the handy gene skipped over me, and I became frustrated. But I remained determined to do whatever it took to become a tradesman.
One day, I brought home a sconce from woodshop that looked like a paramecium, and after a heavy sigh, my grandfather told me the truth. He explained that my life would be a lot more satisfying and productive if I got myself a different kind of toolbox. This was almost certainly the best advice I’ve ever received, but at the time, it was crushing. It felt contradictory to everything I knew about persistence, and the importance of “staying the course.” It felt like quitting. But here’s the “dirty truth,” Stephen. “Staying the course” only makes sense if you’re headed in a sensible direction. Because passion and persistence – while most often associated with success – are also essential ingredients of futility.
That’s why I would never advise anyone to “follow their passion” until I understand who they are, what they want, and why they want it. Even then, I’d be cautious. Passion is too important to be without, but too fickle to be guided by. Which is why I’m more inclined to say, “Don’t Follow Your Passion, But Always Bring it With You.”
comment by Torello · 2014-10-02T01:39:31.443Z · LW(p) · GW(p)
"While there are problems with what I have proposed, they should be compared to the existing alternatives, not to abstract utopias."
Jaron Lanier, Who Owns the Future (page number not provided by e-reader)
Replies from: cousin_it, None↑ comment by [deleted] · 2014-10-04T02:42:29.927Z · LW(p) · GW(p)
That's just an argument for letting the status quo impose the Anchoring Effect on us.
Replies from: DanielLC, Richard_Kennaway↑ comment by DanielLC · 2014-10-04T17:07:28.741Z · LW(p) · GW(p)
It's an argument against the Nirvana fallacy. It's not saying that we should accept the status quo. Quite the opposite. It's saying that we should reject the status quo as soon as we have a better alternative, rather than waiting for a perfect one.
Replies from: None↑ comment by [deleted] · 2014-10-05T03:20:53.448Z · LW(p) · GW(p)
This depends on whether you are dealing with processes subject to entropic decay (they break apart and "die" without effort-input) or entropic growth (they optimize under their own power). For the former case, the Nirvana fallacy remains a fallacy; for the latter case, you are in deep trouble if you try to go with the first "good enough" alternative rather than defining a unique best solution and then trying to hit it as closely as possible.
↑ comment by Richard_Kennaway · 2014-10-04T18:17:18.970Z · LW(p) · GW(p)
Maybe it should. That's what Chesterton's Fence is.
comment by Stabilizer · 2014-10-03T22:24:12.604Z · LW(p) · GW(p)
The version of Windows following 8.1 will be Windows 10, not Windows 9. Apparently this is because Microsoft knows that a lot of software naively looks at the first digit of the version number, concluding that it must be Windows 95 or Windows 98 if it starts with 9.
Many think this is stupid. They say that Microsoft should call the next version Windows 9, and if somebody’s dumb code breaks, it’s their own fault.
People who think that way aren’t billionaires. Microsoft got where it is, in part, because they have enough business savvy to take responsibility for problems that are not their fault but that would be perceived as being their fault.
↑ comment by V_V · 2014-10-03T22:55:22.420Z · LW(p) · GW(p)
The version of Windows following 8.1 will be Windows 10, not Windows 9. Apparently this is because Microsoft knows that a lot of software naively looks at the first digit of the version number, concluding that it must be Windows 95 or Windows 98 if it starts with 9.
Except that Windows 95's actual version number is 4.0, and Windows 98's version number is 4.1.
It seems that Microsoft has been messing with version numbers in recent years, for some unknown (and, I would suppose, probably stupid) reason: that's why Xbox One follows Xbox 360 which follows Xbox, so that Xbox One is actually the third Xbox, the Xbox with 3 in the name is the second one, and the Xbox without 1 is the first one. Isn't it clear?
Maybe I can't understand the logic behind this because I'm not a billionaire, but I'm inclined to think this comes from the same geniuses who thought that the design of the Windows 8 UI made sense.
Replies from: ShardPhoenix↑ comment by ShardPhoenix · 2014-10-04T22:16:44.999Z · LW(p) · GW(p)
Except that Windows 95's actual version number is 4.0, and Windows 98's version number is 4.1.
The programs causing the problem are reading the version name string, not the version number.
Examples: https://searchcode.com/?q=if%28version%2Cstartswith%28%22windows+9%22%29
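(For concreteness, a minimal sketch of the kind of check those search results turn up; the function name and version strings below are made up for illustration and not taken from any particular program.)

```python
def is_windows_9x(version_name: str) -> bool:
    # The naive pattern: treat any version name starting with
    # "windows 9" as Windows 95/98.
    return version_name.lower().startswith("windows 9")

# A hypothetical "Windows 9" would be lumped in with 95/98,
# while "Windows 10" slips past the check.
for name in ["Windows 95", "Windows 98", "Windows 9", "Windows 10"]:
    print(f"{name}: treated as 9x? {is_windows_9x(name)}")
```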
Replies from: V_V↑ comment by roystgnr · 2014-10-04T16:13:44.507Z · LW(p) · GW(p)
Microsoft got where it is, in part, by relying on the exact opposite user psychology. "What the guy is supposed to do is feel uncomfortable, and when he has bugs, suspect that the problem is DR-DOS and then go out to buy MS-DOS."
↑ comment by johnlawrenceaspden · 2014-10-04T12:30:16.776Z · LW(p) · GW(p)
Crikey, how does the dumb software react to running on Windows 1?
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2014-10-04T14:25:19.793Z · LW(p) · GW(p)
I am rather doubtful that a noticeable number of programs are actually capable of running on both Windows 1 and Windows 10.
↑ comment by ChristianKl · 2014-10-07T12:42:48.987Z · LW(p) · GW(p)
I think the core reason is marketing. Windows 10 sounds more revolutionary than switching from 8 to 9.
↑ comment by A1987dM (army1987) · 2014-10-06T17:24:23.314Z · LW(p) · GW(p)
Why not “Windows Nine”? :-)
comment by dspeyer · 2014-10-02T14:53:36.481Z · LW(p) · GW(p)
Lord Vetinari, as supreme ruler of Ankh-Morpork, could in theory summon the Archchancellor of Unseen University to his presence and, indeed, have him executed if he failed to obey.
On the other hand Mustrum Ridcully, as head of the college of wizards, had made it clear in polite but firm ways that he could turn him into a small amphibian and, indeed, start jumping around the room on a pogo stick.
Alcohol bridged the diplomatic gap nicely. Sometimes Lord Vetinari invited the Archchancellor to the palace for a convivial drink. And of course the Archchancellor went, because it would be bad manners not to. And everyone understood the position, and everyone was on their best behaviour, and thus civil unrest and slime on the carpet were averted.
-- Interesting Times, Terry Pratchett
comment by James_Miller · 2014-10-02T00:31:25.777Z · LW(p) · GW(p)
I want to say "live and let live" about non-scientific views. But, then I read about measles outbreaks in countries where vaccines are free.
Zach Weinersmith (Twitter)
Related:
Rather than panicking about the single patient known to have Ebola in the US, protect yourself against a virus that kills up to 50,000 Americans every year. It's the flu, and simply getting the shot dramatically reduces your chances of becoming ill.
Erin Brodwin Business Insider
Replies from: Lumifer, bramflakes↑ comment by Lumifer · 2014-10-02T17:30:50.704Z · LW(p) · GW(p)
That article about the flu "forgets" to mention a rather important fact: the effectiveness of the flu vaccine is only about 60%.
In particular, with this effectiveness there will be no herd immunity even if you vaccinate 100% of the population.
Replies from: DanielLC, Kyre↑ comment by DanielLC · 2014-10-02T17:53:08.495Z · LW(p) · GW(p)
So? A 60% reduction in the chances of getting the flu is still orders of magnitude better than a 100% reduction in the chances of getting Ebola. Also, herd immunity isn't all-or-nothing. I'd expect giving everyone a 60% effective flu vaccine would still reduce the probability of getting the flu by significantly more than 60%.
Replies from: army1987↑ comment by A1987dM (army1987) · 2014-10-03T07:22:13.868Z · LW(p) · GW(p)
I hear that herd immunity only really works when the percentage of people vaccinated is in the high 90s, but IANAD.
Replies from: DanielLC, army1987↑ comment by DanielLC · 2014-10-03T20:23:12.272Z · LW(p) · GW(p)
According to the Wikipedia page on herd immunity, the threshold generally seems to be somewhere in the 80s (percent). But my point is that it's somewhat of a false dichotomy. Herd immunity is a sliding scale. Someone chose an arbitrary point to say that it happens or it doesn't happen. But there still is an effect at any size. IANAD, but I would expect a 60% reduction would still be enough for a significant amount of the disease to be prevented in the non-immune population. In fact, I wouldn't be surprised if it was higher. If you vaccinate 90% of the population, then herd immunity can't protect more than the remaining 10%.
Replies from: Lumifer, IlyaShpitser, army1987↑ comment by Lumifer · 2014-10-03T20:32:06.923Z · LW(p) · GW(p)
Herd immunity is a sliding scale.
You can treat herd immunity as a sliding scale, but you can treat it as a hard threshold as well.
In the hard threshold sense it means that if you infect a random individual in the immune herd, the disease does not spread. It might infect a few other people, but it will not spread throughout the entire (non-immunized) herd, it will die out locally without any need for a quarantine.
Mathematically, you need a model that describes how the disease spreads in a given population. Plug in the numbers and calculate the expected number of people infected by a sick person. If it's greater than 1, the disease will spread; if it's less than 1, the disease will die out locally and the herd is immune.
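(A minimal sketch of that calculation in Python, under the usual homogeneous-mixing assumption; the R0 figures in the example are rough illustrative values, not numbers from this thread.)

```python
def effective_r(r0: float, coverage: float, efficacy: float) -> float:
    """Expected number of people infected by one sick person when a
    fraction `coverage` of the herd received a vaccine with the given
    efficacy (homogeneous mixing assumed)."""
    return r0 * (1.0 - coverage * efficacy)

# Illustrative numbers only: a flu-like R0 of ~1.2 vs. a measles-like R0 of ~12,
# with everyone vaccinated using a 60%-effective vaccine.
for label, r0 in [("flu-like, R0 = 1.2", 1.2), ("measles-like, R0 = 12", 12.0)]:
    r_eff = effective_r(r0, coverage=1.0, efficacy=0.6)
    print(f"{label}: effective R = {r_eff:.2f} ->",
          "dies out locally" if r_eff < 1 else "keeps spreading")
```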
Replies from: TheMajor↑ comment by TheMajor · 2014-10-04T09:34:13.035Z · LW(p) · GW(p)
The spreading of diseases sounds like it would be modeled quite well using percolation theory, although on the applications page there is mention but no explanation of epidemic spread.
The interesting thing about percolation theory is that in that model both DanielLC and Lumifer would be right: there is a hard cutoff above which there is zero* chance of spreading, and below that cutoff the chance of spreading slowly increases. So if this model is accurate there is both a hard cutoff point where the general population no longer has to worry, as well as global benefits from partial vaccination. (The reason for this is that people can be ordered geographically, so many people will only get a chance to infect people who were already infected. Therefore treating each new person as an independent source, as in Lumifer's expected-number-of-newly-infected-people model, will give wrong answers.)
*Of course the chance is only zero within the model, the actual chance of an epidemic spread (or anything, for that matter) cannot be 0.
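(For the curious, a rough Monte Carlo sketch of bond percolation on a square grid, using union-find over randomly opened edges; the grid size and probabilities are arbitrary choices. Near p ≈ 0.5 the largest cluster jumps from a negligible to a substantial fraction of the grid, which is the hard-cutoff behaviour described above.)

```python
import random

def largest_cluster_fraction(n: int, p: float, rng: random.Random) -> float:
    """Bond percolation on an n x n grid: each edge between neighbouring
    sites is open with probability p; return the size of the largest
    connected cluster as a fraction of all n*n sites."""
    parent = list(range(n * n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for i in range(n):
        for j in range(n):
            site = i * n + j
            if j + 1 < n and rng.random() < p:  # edge to the right neighbour
                union(site, site + 1)
            if i + 1 < n and rng.random() < p:  # edge to the neighbour below
                union(site, site + n)

    sizes = {}
    for s in range(n * n):
        r = find(s)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / (n * n)

rng = random.Random(0)
for p in [0.3, 0.45, 0.5, 0.55, 0.7]:
    print(f"p = {p:.2f}: largest cluster ~ {largest_cluster_fraction(200, p, rng):.3f} of the grid")
```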
Replies from: othercriteria↑ comment by othercriteria · 2014-10-06T05:13:45.028Z · LW(p) · GW(p)
I think percolation theory concerns itself with a different question: is there a path from starting point to the "edge" of the graph, as the size of the graph is taken to infinity. It is easy to see that it is possible to hit infinity while infecting an arbitrarily small fraction of the population.
But there are crazy universality and duality results for random graphs, so there's probably some way to map an epidemic model to a percolation model without losing anything important?
Replies from: TheMajor↑ comment by TheMajor · 2014-10-07T05:47:09.071Z · LW(p) · GW(p)
The main question of percolation theory, whether there exists a path from a fixed origin to the "edge" of the graph, is equivalently a statement about the size of the largest connected cluster in a random graph. This can be intuitively seen as the statement: 'If there is no path to the edge, then the origin (and any place that you can reach from the origin, traveling along paths) must be surrounded by a non-crossable boundary'. So without such a path your origin lies in an isolated island. By the randomness of the graph this statement applies to any origin, and the speed with which the probability that a path to the edge exists decreases as the size of the graph increases is a measure (not in the technical sense) of the size of the connected component around your origin.
I am under the impression that the statements '(almost) everybody gets infected' and 'the largest connected cluster of diseased people is of the size of the total population' are good substitutes for each other.
Replies from: othercriteria↑ comment by othercriteria · 2014-10-07T22:28:19.153Z · LW(p) · GW(p)
In something like the Erdős–Rényi random graph, I agree that there is an asymptotic equivalence between the existence of a giant component and paths from a randomly selected point being able to reach the "edge".
On something like an n x n grid with edges just to left/right neighbors, the "edge" is reachable from any starting point, but all the connected components occupy just a 1/n fraction of the vertices. As n gets large, this fraction goes to 0.
Since, at least as a reductio, the details of graph structure (and not just its edge fraction) matters and because percolation theory doesn't capture the idea of time dynamics that are important in understanding epidemics, it's probably better to start from a more appropriate model.
Maybe look at Limit theorems for a random graph epidemic model (Andersson, 1998)?
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2014-10-15T03:58:24.495Z · LW(p) · GW(p)
The statement about percolation is true quite generally, not just for Erdős-Rényi random graphs, but also for the square grid. Above the critical threshold, the giant component is a positive proportion of the graph, and below the critical threshold, all components are finite.
Replies from: othercriteria↑ comment by othercriteria · 2014-10-15T13:42:06.526Z · LW(p) · GW(p)
The example I'm thinking about is a non-random graph on the square grid where west/east neighbors are connected and north/south neighbors aren't. Its density is asymptotically right at the critical threshold and could be pushed over by adding additional west/east non-neighbor edges. The connected components are neither finite nor giant.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2014-10-20T01:15:21.917Z · LW(p) · GW(p)
If all EW edges exist, you're really in a 1d situation.
Models at criticality are interesting, but are they relevant to epidemiology? They are relevant to creating a magnet because we can control the temperature and we succeed or fail while passing through the phase transition, so detail may matter. But for epidemiology, we know which direction we want to push the parameter and we just want to push it as hard as possible.
Replies from: Azathoth123↑ comment by Azathoth123 · 2014-10-21T00:16:59.308Z · LW(p) · GW(p)
But for epidemiology, we know which direction we want to push the parameter and we just want to push it as hard as possible.
Not quite; there are costs associated with pushing the parameter. We want to know at what point we hit diminishing returns.
↑ comment by IlyaShpitser · 2014-10-04T08:19:20.697Z · LW(p) · GW(p)
Herd immunity is a sliding scale.
How do you know there is no phase transition?
↑ comment by A1987dM (army1987) · 2014-10-04T08:47:06.348Z · LW(p) · GW(p)
But my point is that it's somewhat of a false dichotomy. Herd immunity is a sliding scale.
And indeed the table you mention does show ranges rather than points. But even the bottoms of those ranges are far above 60%.
↑ comment by A1987dM (army1987) · 2014-10-10T21:17:03.990Z · LW(p) · GW(p)
Retracted after reading Kyre's comment that what applies to measles doesn't necessarily apply to flu.
↑ comment by Kyre · 2014-10-04T09:13:03.740Z · LW(p) · GW(p)
I believe this is incorrect. The required proportion of the population that needs to be immune to get a herd immunity effect depends on how infectious the pathogen is. Measles is really infectious, with an R0 (number of secondary infections caused by a typical infectious case in a fully susceptible population) of over 10, so you need 90 or 95% vaccination coverage to stop it spreading - which is why it didn't take much of a drop in vaccination before we saw new outbreaks.
R0 estimates for seasonal influenza are around 1.1 or 1.2. Vaccinating 100% of the population with a vaccine with 60% efficacy would give a very large herd immunity effect (toy SIR model I just ran says starting with 40% immune reduces attack rate from 35% to less than 2% for R0 1.2).
(Typo edit)
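(A rough stand-in for the kind of toy model described above; this is not Kyre's actual code, and the recovery rate, time step and 0.1% initial seed are arbitrary choices. A simple forward-Euler SIR run under these assumptions lands in the same region: roughly a 30% attack rate with nobody immune, and well under 2% with 40% initially immune.)

```python
def sir_attack_rate(r0: float, initially_immune: float,
                    seed: float = 0.001, gamma: float = 0.25, dt: float = 0.1) -> float:
    """Fraction of the population eventually infected in a simple
    forward-Euler SIR model with homogeneous mixing."""
    beta = r0 * gamma                   # transmission rate implied by R0
    s = 1.0 - initially_immune - seed   # susceptible fraction
    i = seed                            # infectious fraction
    infected_total = seed
    while i > 1e-8:
        new_infections = beta * s * i * dt
        recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - recoveries
        infected_total += new_infections
    return infected_total

for immune in (0.0, 0.4):
    print(f"{immune:.0%} initially immune -> attack rate ~ {sir_attack_rate(1.2, immune):.1%}")
```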
↑ comment by bramflakes · 2014-10-02T16:43:49.326Z · LW(p) · GW(p)
I feel the Ebola article makes a false comparison. We have highly competent disease control measures that keep influenza's death toll bounded around the 50k order of magnitude per year. With Ebola, the curve still looks exponential rather than logistic - if the trend continues we'll have a 6-figure bodycount by January.
A fairer comparison would be Ebola to 1918 Spanish Flu.
(Oh and that isn't even taking into account that the officials have been feeding the media absolute horseshit about the "single patient" with Ebola)
Replies from: shminux, James_Miller↑ comment by Shmi (shminux) · 2014-10-02T17:59:19.642Z · LW(p) · GW(p)
Downvoted for mindless panic.
There are no measures to speak of to control the flu. It goes through the world every year and we just live with it because it's rarely fatal.
The Ebola curve is not exponential in the countries where appropriate measures were taken, Nigeria and Senegal: http://www.usatoday.com/story/news/nation/2014/09/30/ebola-over-in-nigeria/16473339/ Clearly the US can do at least as well.
While Ebola might mutate to become airborne and spread like flu, and there is a real risk of that, there is little indication of it having happened. Until then the comparison with the Spanish Flu is silly. It's not nearly as contagious.
Your linked post in the underground medic is pretty bad. The patient contracted Ebola on Sep 15, most people become contagious 8-10 days later, so the flight passengers on Sep 20 are very likely OK. There is no indication that the official story is grossly misleading. There are bound to be a few more cases showing up in the next week or so, just as there were with SARS, but with the aggressive approach taken now the odds of it spreading wide are negligible, given that Nigeria managed to contain a similar incident.
My guess is that the total number of cases with the Dallas vector will be under a dozen or so, with <40% fatalities. I guess we'll see.
Replies from: johnlawrenceaspden, gsgs↑ comment by johnlawrenceaspden · 2014-10-04T12:11:31.771Z · LW(p) · GW(p)
Upvoted for the firm prediction. Confidence level?
Replies from: shminux↑ comment by Shmi (shminux) · 2014-10-04T22:12:18.665Z · LW(p) · GW(p)
I would say 90% or so.
Replies from: shminux↑ comment by Shmi (shminux) · 2014-11-05T17:52:50.478Z · LW(p) · GW(p)
My guess is that the total number of cases with the Dallas vector will be under a dozen or so, with <40% fatalities. I guess we'll see.
... And it looks like I was right, if unduly pessimistic. Total new cases: 2, total new fatalities: 0. I expected at least some of the patient 0's relatives to get infected, and I did not expect the hospital's protection measures to be so bad. It looks like the strain they got there is not particularly infectious, which saved their asses.
↑ comment by gsgs · 2014-11-05T09:58:27.623Z · LW(p) · GW(p)
The numbers of Ebola cases have not been exponential since mid-September; instead they have stayed almost constant at ~900 new cases per week since Sep. 14. This should have been clear to the WHO and researchers at least since mid-October. Still they publicly repeated the "exponential" forecasts, based on papers using old data. Ban Ki-moon (on 2014/10/09), Chan (on 2014/10/14) and Aylward said it. The WHO still puts forward its containment plan based on 5000-10000 new cases in the first week of December. They haven't corrected it yet.
According to Fukuda on 2014/10/23, the WHO committee, at the third meeting of the International Health Regulations Emergency Committee regarding the 2014 Ebola outbreak in West Africa on 2014/10/22, stated that there continued to be an exponential increase of cases in Guinea, Liberia and Sierra Leone.
↑ comment by James_Miller · 2014-10-02T17:35:31.017Z · LW(p) · GW(p)
I'm far from an expert myself but unless, as you say, the experts are feeding us via the media "absolute horseshit" the expected number of U.S. deaths from Ebola is way below 50K.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-10-03T09:21:55.932Z · LW(p) · GW(p)
the expected number of U.S. deaths from Ebola is way below 50K.
What countermeasures is that number conditional on being taken?
Replies from: James_Miller↑ comment by James_Miller · 2014-10-03T13:33:27.633Z · LW(p) · GW(p)
What we seem to be doing now, plus significantly more countermeasures if the number of U.S. victims increases. Obama would suffer a massive political hit if > 1000 Americans die from Ebola and I trust that this is a sufficient condition to motivate the executive branch if things start to look like they could get out of control.
comment by Lumifer · 2014-10-17T16:12:43.598Z · LW(p) · GW(p)
"You know, esoteric, non-intuitive truths have a certain appeal – once initiated, you’re no longer one of the rubes. Of course, the simplest and most common way of producing an esoteric truth is to just make it up."
Replies from: shminux↑ comment by Shmi (shminux) · 2014-10-31T21:07:50.307Z · LW(p) · GW(p)
If it's so simple... mind making one up?
comment by johnlawrenceaspden · 2014-10-04T12:05:52.614Z · LW(p) · GW(p)
To stay young requires unceasing cultivation of the ability to unlearn old falsehoods
-- Robert Heinlein (http://tmaas.blogspot.co.uk/2008/10/robert-heinlein-quotes.html)
comment by Torello · 2014-10-02T01:22:37.413Z · LW(p) · GW(p)
"Put simply, the truth about all those good decisions you plan to make sometime in the future, when things are easier, is that you probably won't make them once that future rolls around and things are tough again."
Sendhil Mullainathan and Eldar Shafir, Scarcity, p. 215
comment by BenSix · 2014-10-07T21:12:09.224Z · LW(p) · GW(p)
“Nobody supposes that the knowledge that belongs to a good cook is confined to what is or may be written down in a cookery book.” - Michael Oakeshott, "Rationalism in Politics"
comment by Sabiola (bbleeker) · 2014-10-04T09:10:19.882Z · LW(p) · GW(p)
"What we assume to be 'normal consciousness' is comparatively rare, it's like the light in the refrigerator: when you look in, there you are ON but what's happening when you don't look in?"
Keith Johnstone, Impro - Improvisation and the Theatre
comment by James_Miller · 2014-10-14T14:42:01.735Z · LW(p) · GW(p)
To summarize Twitter and my Facebook feed this morning: “The Ebola virus proves everything I already believed about politics.” You might find this surprising. The Ebola virus is not running for office. It does not have a policy platform, or any campaign white papers on burning issues. It doesn’t even vote. So how could it neatly validate all our preconceived positions on government spending, immigration policy, and the proper role of the state in our health care system? Stranger still: How could it validate them so beautifully on both left and right?
comment by Shmi (shminux) · 2014-10-03T21:53:10.290Z · LW(p) · GW(p)
The words out of your mouth will literally be ignored, misheard, or even contorted to the opposite of what they mean, if that’s what it takes to preserve the listener’s misconception
Scott Aaronson on why quantum computers don't speed up computations by parallelism, a popular misconception.
Replies from: gjm↑ comment by gjm · 2014-10-04T02:19:49.052Z · LW(p) · GW(p)
The misconception isn't exactly that quantum computers speed up computations by parallelism. They kinda do. The trouble is that what they do isn't anything so simple as "try all the possibilities and report on whichever one works" -- and the real difference between that and what they can actually do is in the reporting rather than the trying.
Of course that means that useful quantum algorithms don't look like "try all the possibilities", but they can still be viewed as working by parallelism. For instance, Grover's search algorithm starts off with the system in a superposition that's symmetrical between all the possibilities, and each step changes all those amplitudes in a way that favours the one we're looking for.
For the avoidance of doubt, I'm not in any way disagreeing with Scott Aaronson here: The naive conception of quantum computation as "just like parallel processing, but the other processors are in other universes" is too naive and leads people to terribly overoptimistic expectations of what quantum computers can do. I just think "quantum computers don't speed up computations by parallelism" is maybe too simple in the other direction.
[EDITED to remove a spurious "not"]
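(For concreteness, a tiny classical state-vector simulation of the Grover iteration described here; the search-space size and marked index are arbitrary. Each step nudges every amplitude in a way that favours the marked item, which is the sense in which the "parallel" amplitudes do useful work without any "try everything and read off the answer" step.)

```python
import numpy as np

N, target = 64, 42                      # arbitrary search space size and marked item
state = np.full(N, 1 / np.sqrt(N))      # uniform superposition over all N items

for _ in range(int(np.pi / 4 * np.sqrt(N))):  # roughly the optimal number of Grover steps
    state[target] *= -1                 # oracle: flip the sign of the marked amplitude
    state = 2 * state.mean() - state    # diffusion: reflect every amplitude about the mean
    print(f"P(measure the marked item) = {state[target] ** 2:.3f}")
```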
Replies from: IlyaShpitser, johnlawrenceaspden, soreff↑ comment by IlyaShpitser · 2014-10-11T15:03:39.834Z · LW(p) · GW(p)
They kinda do.
I agree that "parallelism but in other universes" is a weird phrasing.
What happens with quantum computation is cancellation due to having negative probabilities. The closest classical analogue seems to me to be dynamic programming, not parallel programming -- you have a seemingly large search space that in fact can be made to reduce into a smaller search space by e.g. cleverly caching things. In other words, this is about how the math of the search space works out.
If your parallelism relies on invoking MWI, then it's not "real" parallelism because MWI is observationally indistinguishable from other stories where there aren't parallel worlds.
Replies from: Azathoth123↑ comment by Azathoth123 · 2014-10-12T02:00:34.363Z · LW(p) · GW(p)
What happens with quantum computation is cancellation due to having negative probabilities.
Negative (and complex) amplitudes. The probability is the squared norm of the amplitude and is always positive.
↑ comment by johnlawrenceaspden · 2014-10-04T12:20:17.000Z · LW(p) · GW(p)
I just don't think <-> I just think, or is this one of those American/British differences? Also, nice recursion in the grandparent.
Replies from: gjm, gjm, Luke_A_Somers↑ comment by gjm · 2014-10-04T14:37:41.648Z · LW(p) · GW(p)
No, it's one of those right/wrong differences. I changed my mind about how to structure the sentence -- from "I don't think X is quite right" to "I think X is not quite right" -- and failed to remove a word I should have removed. (I seem to be having trouble with negatives at the moment: while typing the last sentence, my fingers attempted to add "n't" to both "should" and "have"!)
↑ comment by gjm · 2014-10-04T14:39:38.892Z · LW(p) · GW(p)
Wait, American/British? I think we live within 10 miles of one another. Admittedly, I was born in the US, but I haven't lived there since I was about 4.
Replies from: johnlawrenceaspden↑ comment by johnlawrenceaspden · 2014-10-04T19:06:52.204Z · LW(p) · GW(p)
Ahh, the mysterious 'g'. Hi there. We really should have lunch sometime!
Replies from: gjm↑ comment by gjm · 2014-10-04T22:00:18.915Z · LW(p) · GW(p)
Yup, 'tis I. (No, wait, I'm two letters of the alphabet off.)
Yes, we should. At weekday lunchtimes I'm near the Science Park; how about you?
Replies from: johnlawrenceaspden↑ comment by johnlawrenceaspden · 2014-10-08T18:47:19.275Z · LW(p) · GW(p)
Consulting for the engineering department at the moment, but my time's my own, and I'm intrigued enough to put myself out. You choose place and time, and I'll try to be there.
It may even be that we have better ways of communicating than blog comments! I am lesswrong@aspden.com, 07943 155029.
↑ comment by Luke_A_Somers · 2014-10-04T14:18:22.123Z · LW(p) · GW(p)
Inserting a 'not' where it shouldn't be is not an American/British difference.
Replies from: johnlawrenceaspden↑ comment by johnlawrenceaspden · 2014-10-04T14:22:16.204Z · LW(p) · GW(p)
But is it not possible that whether it should or shouldn't be there is a matter of the dialect of the speaker?
Replies from: gjm↑ comment by gjm · 2014-10-04T14:41:21.392Z · LW(p) · GW(p)
In general, of course it is. (I think "couldn't care less" / "could care less" is an example, though my Inner Pedant gets very twitchy at the latter.) But I think it's unusual to have such big differences in idiom, and I suspect they generally arise from something that was originally an outright mistake (as I think "could care less" was).
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2014-10-05T00:39:01.521Z · LW(p) · GW(p)
And in particular, such a twisted usage does not fall neatly across the America/Britain divide.
Especially in this particular case where it was pretty clearly an editing error.
comment by hairyfigment · 2014-10-02T20:58:57.338Z · LW(p) · GW(p)
Still, it was possible that he could close in and thus block the Frenchman's blade.
No. Would he consider such a move if he did not have three ounces of fifteen-percent-alcohol purple passion in his bloodstream? No. Forget it.
Philip Jose Farmer's character, "Richard Francis Burton," The Magic Labyrinth
comment by Stabilizer · 2014-10-03T19:48:47.932Z · LW(p) · GW(p)
The chief trick to making good mistakes is not to hide them -- especially not from yourself. Instead of turning away in denial when you make a mistake, you should become a connoisseur of your own mistakes, turning them over in your mind as if they were works of art, which in a way they are. The fundamental reaction to any mistake ought to be this: "Well, I won't do that again!" Natural selection doesn't actually think this thought; it just wipes out the goofers before they can reproduce; natural selection won't do that again, at least not as often. Animals that can learn -- learn not to make that noise, touch that wire, eat that food -- have something with a similar selective force in their brains. We human beings carry matters to a much more swift and efficient level. We can actually think that thought, reflecting on what we have just done: "Well, I won't do that again!" And when we reflect, we confront directly the problem that must be solved by any mistake-maker: what, exactly, is that? What was it about what I just did that got me into all this trouble? The trick is to take advantage of the particular details of the mess you've made, so that your next attempt will be informed by it and not just another blind stab in the dark.... The natural human reaction to making a mistake is embarrassment and anger (we are never angrier than when we are angry at ourselves), and you have to work hard to overcome these emotional reactions. Try to acquire the weird practice of savoring your mistakes, delighting in uncovering the strange quirks that led you astray. Then once you have sucked out all the goodness to be gained from having made them, you can cheerfully set them behind you, and go on to the next big opportunity. But that is not enough; you should actively seek out opportunities to make grand mistakes, just so you can recover from them.
-Daniel Dennett, Intuition Pumps and Other Tools for Thinking
Replies from: Lumifer, johnlawrenceaspden↑ comment by Lumifer · 2014-10-03T20:05:34.276Z · LW(p) · GW(p)
But that is not enough; you should actively seek out opportunities to make grand mistakes, just so you can recover from them.
Think he's a bit too enthusiastic about that X-D
Making more grand mistakes in addition to my usual number doesn't look appealing to me :-/
Replies from: Stabilizer↑ comment by Stabilizer · 2014-10-03T21:57:45.630Z · LW(p) · GW(p)
I think he's implicitly restricting himself to philosophy. A "grand mistake" in philosophy has little ill effect.
Replies from: Azathoth123, Lumifer, Emile↑ comment by Azathoth123 · 2014-10-04T19:24:59.262Z · LW(p) · GW(p)
A "grand mistake" in philosophy has little ill effects.
Um, they've been known to result in up to a quarter of the world's population living under totalitarian dictatorships.
Replies from: Stabilizer↑ comment by Stabilizer · 2014-10-08T07:43:34.060Z · LW(p) · GW(p)
Fair enough. Good examples: Hegel --> Marx --> Soviet Union/China. Hegel --> Husserl --> Heidegger <---> Nazism.
↑ comment by johnlawrenceaspden · 2014-10-04T12:26:43.295Z · LW(p) · GW(p)
Not disagreeing, but "The natural human reaction to making a mistake is embarrassment and anger (we are never angrier than when we are angry at ourselves)" is weird.
Why is the natural reaction... anger?
Also, is that even true for everyone? I make mistakes all the time and don't feel that, so I'm thinking he means the reaction to publicly taking a strong position and then being made to look like a fool, which I certainly do feel. But maybe not?
comment by Shmi (shminux) · 2014-10-31T21:05:27.017Z · LW(p) · GW(p)
The winner worldview is that you have responsibility for your own life and it is irrelevant who is at fault if the people at fault can't or won't fix the problem. I've noticed over the course of my life that winners ignore questions of blame and fault and look for solutions they can personally influence. Losers blame others for their problems and expect that to produce results.
Scott Adams musing on what that woman in the Manhattan harassment video could do.
This actually clashes with the idea of heroic responsibility, a popular local notion. I guess it depends on what your values are.
Replies from: DeterminateJacobian, somnicule, Nornagest, ChristianKl↑ comment by DeterminateJacobian · 2014-10-31T21:51:10.889Z · LW(p) · GW(p)
Or what your skills are. People who are poor at soliciting the cooperation of others might begin to classify all actions which intend to change others' behavior as "blame" and thus doomed to fail, just because trying to change others' behavior doesn't usually succeed for them.
What could the woman in the harassment video do? Maybe she could start an entire organization dedicated to ending harassment, and then stay in NY as a way to signal she is refusing to let the harassers win. Or if the tradeoff isn't worth it to her personally, leave as Adams suggests. She isn't making it Scott Adams's problem, she's making it the problem of anybody who actually wants it to also be their problem. That's how cooperation works, and people can be good or bad at encouraging cooperation, in completely measurable ways. Assigning irremediable blame, or refusing to encourage change at all, are both losing solutions.
↑ comment by somnicule · 2014-10-31T22:06:42.592Z · LW(p) · GW(p)
I don't exactly see how it clashes with heroic responsibility?
"When you do a fault analysis, there's no point in assigning fault to a part of the system you can't change afterward, it's like stepping off a cliff and blaming gravity."
Replies from: shminux↑ comment by Shmi (shminux) · 2014-10-31T23:11:57.460Z · LW(p) · GW(p)
Because it might seem to you that you cannot change it, but if you have Eliezer's do the impossible attitude, then maybe you can.
Replies from: Azathoth123↑ comment by Azathoth123 · 2014-11-02T00:50:43.483Z · LW(p) · GW(p)
I can't tell if you're misinterpreting him or if he really meant something that stupid. The problem with "doing the impossible" is that it amounts to an injunction to use all available and potentially available resources to address the problem. Of course, it's impossible to do this for every problem.
Replies from: shminux↑ comment by Shmi (shminux) · 2014-11-04T22:04:23.116Z · LW(p) · GW(p)
Of course, it's impossible to do this for every problem.
I don't think anyone implied "every problem". Only the one you think is really worth the trouble. Like FAI for Eliezer (or the AI-box toy example), or the NSA spying for Snowden. The risk, of course, is that the problem might be too hard and you fail, after potentially wasting a lot of resources, including your life.
↑ comment by Nornagest · 2014-10-31T21:53:03.589Z · LW(p) · GW(p)
I think I buy this line of reasoning in general, but I don't think Adams is applying it correctly in this case. If group A is doing something that makes you unhappy because group B is rewarding them for it, then it is no more "winner behavior" to go after group B than group A: in both cases you're trying to get others to fix your problems for you, by adding a negative incentive in one case and by removing a positive incentive in another.
I can make sense of this in a few ways: maybe Adams thinks at some level that B has agency as a group but A doesn't. (This is, clearly, wrong.) Or maybe he thinks that you're just more likely to convince members of B than members of A, which at least isn't obviously wrong but still requires assumptions not in evidence.
↑ comment by ChristianKl · 2014-11-09T14:02:21.402Z · LW(p) · GW(p)
I think taking responsibility for everything, whether or not you caused it, is exactly what heroic responsibility is about.
Apart from that, Scott gets a lot in the article wrong. In particular, Scott argues:
The men in the street video are presumably repeat offenders. And that means they are getting a reward, at least occasionally, from their shouted public compliments. And I assume the reward comes from the occasional women who appreciate the compliments and smile back.
That's a naive view. It's probably wrong.
To the extent that Eliezer argues "do the impossible", he doesn't argue for doing things that literally have a 0% chance of success. TDT discourages doing things with a 0% chance of success. Eliezer doesn't argue for virtue ethics, where it matters that you try regardless of whether you succeed.
Not stopping with a naive view and actually working on the problem is something that Eliezer advocates, and that's useful in cases like this. Even if it leads to questions that are even more politically incorrect than the ones Scott is asking.
comment by elharo · 2014-10-21T20:54:44.689Z · LW(p) · GW(p)
if people use data and inferences they can make with the data without any concern about error bars, about heterogeneity, about noisy data, about the sampling pattern, about all the kinds of things that you have to be serious about if you’re an engineer and a statistician—then you will make lots of predictions, and there’s a good chance that you will occasionally solve some real interesting problems. But you will occasionally have some disastrously bad decisions. And you won’t know the difference a priori. You will just produce these outputs and hope for the best.
--Michael I. Jordan, Pehong Chen Distinguished Professor at the University of California, Berkeley, Machine-Learning Maestro Michael Jordan on the Delusions of Big Data and Other Huge Engineering Efforts
comment by Dan_Moore · 2014-10-14T20:17:40.248Z · LW(p) · GW(p)
My greatest inspiration is a low bank balance.
Ludwig Bemelmans
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-10-24T10:11:03.261Z · LW(p) · GW(p)
A similar thought from Heinlein:
To me, the acme of prose style is exemplified by that simple, graceful clause: "Pay to the order of. . . ."
I have heard both my father and my brother, professional musicians, mention the tremendous difference between professionals and amateurs. There are the differing levels of skill, of course, but the more fundamental difference is the seriousness that a professional brings to the work. There's nothing like having to put food on the table and a roof over your head to give yourself that seriousness and get the work done, no matter what.
comment by aphyer · 2014-10-10T00:04:02.239Z · LW(p) · GW(p)
The humans aren't doing what the math says. The humans must be broken.
↑ comment by [deleted] · 2014-10-12T19:02:57.203Z · LW(p) · GW(p)
Specifically, the human economists.
Replies from: James_Miller↑ comment by James_Miller · 2014-10-15T15:56:34.732Z · LW(p) · GW(p)
But spherical cows of uniform density are so much easier to model.
comment by B_For_Bandana · 2014-10-04T21:57:52.503Z · LW(p) · GW(p)
When you get to a fork in the road, take it.
(I will keep doing this. I have no shame.)
comment by LawrenceC (LawChan) · 2014-10-15T03:10:40.546Z · LW(p) · GW(p)
"... beware of false dichotomies. Though it's fun to reduce a complex issue to a war between two slogans, two camps, or two schools of thought, it is rarely a path to understanding. Few good ideas can be insightfully captured in a single word ending with -ism, and most of our ideas are so crude that we can make more progress by analyzing and refining them than by pitting them against each other in a winner-take-all contest."
- Steven Pinker, on page 345 of The Sense of Style.
↑ comment by 27chaos · 2014-10-15T07:44:33.993Z · LW(p) · GW(p)
Practically everyone is wary of false dichotomies. The trick is recognizing them. This quote doesn't help much with that.
Replies from: Tyrrell_McAllister↑ comment by Tyrrell_McAllister · 2014-10-16T18:53:23.175Z · LW(p) · GW(p)
Practically everyone can be relied upon to go from "That's a false dichotomy" to "Therefore, I should be wary of it."
However, being wary of false dichotomies means thinking, "That's a dichotomy. Therefore, the probability that it is false is sufficient to justify my thinking it through carefully and analytically." That is not something that practically everyone can be relied upon to do in general.
Replies from: 27chaos↑ comment by 27chaos · 2014-10-17T04:23:19.973Z · LW(p) · GW(p)
I don't think the quote significantly increases the probability someone will have that thought. I think practically everyone here already has that habit of wariness. Maybe I'm wrong, typical mind fallacy, but identifying false dichotomies has always been rather automatic for me and I thought that was true for everyone (except when other biases are involved as well).
comment by AshwinV · 2014-10-07T05:31:28.663Z · LW(p) · GW(p)
But philosophers share the general human weakness for explanations of what is incomprehensible in terms suited for what is familiar and well understood, though entirely different.
Originally said by Thomas Nagel (I got it from Hofstadter and Dennett here )
comment by JQuinton · 2014-10-21T18:18:27.492Z · LW(p) · GW(p)
This is a quote from memory from one of my professors in grad school:
Last quarble, the shanklefaxes ulugled the flurxurs. The flurxurs needed ulugled because they were mofoxiliating, which caused amaliaxas in the hurble-flurble. The shanklefaxes domonoxed a wokuflok who ulugles flurxurs, because wokuflok nuxioses less than iliox nuxioses.
- When did the shanklefaxes ulugle the flurxurs?
- Why did the shanklefaxes ulugle the flurxurs?
- Who did they get to ulugle the flurxurs?
- If you were the shanklefaxes, would you have your flurxurs ulugled? Why or why not?
- Would you domonox a wokuflok who ulugles flurxurs instead of an iliox? Why or why not?
Notice how if you only memorize things, you can reasonably answer the first three questions but not the last two. But if you actually understand things, you can answer all five. Instead of memorizing things, you will get a lot further in life if you actually understand the reasoning behind them.
comment by ike · 2014-10-12T18:49:17.438Z · LW(p) · GW(p)
Physicists, in contrast with philosophers, are interested in determining observable consequences of the hypothesis that we are a simulation.
http://arxiv.org/abs/1210.1847 , Constraints on the Universe as a Numerical Simulation
Replies from: army1987↑ comment by A1987dM (army1987) · 2014-10-12T19:11:55.522Z · LW(p) · GW(p)
The LW software thinks the comma is part of the URL. Try escaping it with a backslash.
Also, limits on Lorentz-invariance violations from the ultra-high-energy cosmic-ray spectrum are much weaker if you take into account the possibility that some of them are heavier nuclei rather than protons, as various lines of evidence suggest. There are very few solid conclusions we can draw from the experimental data we have.
(This is what I am working on, BTW!)
comment by Torello · 2014-10-02T01:37:57.973Z · LW(p) · GW(p)
"Information always underrepresents reality."
Jaron Lanier, Who Owns the Future? (page number not provided by e-reader)
Replies from: None↑ comment by [deleted] · 2014-10-02T09:21:33.524Z · LW(p) · GW(p)
What does this mean?
Replies from: Lumifer, devas, khafra↑ comment by devas · 2014-10-02T10:48:21.587Z · LW(p) · GW(p)
The map is smaller than the territory? I think?
Replies from: johnlawrenceaspden↑ comment by johnlawrenceaspden · 2014-10-04T13:31:25.134Z · LW(p) · GW(p)
I bet there are big maps of small territories somewhere.
Replies from: devas↑ comment by devas · 2014-10-05T10:22:53.293Z · LW(p) · GW(p)
Physically? Maybe. Information-wise? I heavily doubt it.
If the map is bigger than the territory, why not go live in the map? :-/
Replies from: johnlawrenceaspden↑ comment by johnlawrenceaspden · 2014-10-08T17:32:43.137Z · LW(p) · GW(p)
Physically's easy enough, but even information-wise, I had a guide to programming the Z80 that wouldn't have fit in the addressable memory of a Z80, let alone the processor. Will that do? If not, we should probably agree definitions before debating.
Replies from: Strange7↑ comment by Strange7 · 2014-10-18T00:10:36.974Z · LW(p) · GW(p)
Would it have fit into less space than the set of possible programs for the Z80?
Replies from: johnlawrenceaspden↑ comment by johnlawrenceaspden · 2014-10-23T14:05:47.115Z · LW(p) · GW(p)
That is a great point! I am grudgingly prepared to concede that sets are smaller than their power sets.
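For concreteness, here is a rough, hedged version of the arithmetic behind that concession; the Z80's 16-bit address space is the only hard figure, and the size of the guide is a made-up estimate:

```latex
% Cantor's theorem: every set is strictly smaller than its power set.
\[
  |S| \;<\; |\mathcal{P}(S)| \;=\; 2^{|S|}
  \qquad \text{(for finite } S\text{; the strict inequality also holds for infinite sets).}
\]
% Back-of-the-envelope for the Z80: 64 KiB of addressable memory is $2^{16}$ bytes, so
\[
  \#\{\text{possible memory images}\} \;=\; 256^{2^{16}} \;=\; 2^{524\,288},
\]
% a number that dwarfs the information content of any printed guide,
% even though the guide itself dwarfs the machine's 64 KiB.
```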
comment by AshwinV · 2014-10-14T04:00:31.348Z · LW(p) · GW(p)
Holmes: "What's the matter? You're not looking quite yourself. This Brixton Road affair has upset you."
Watson: "To tell the truth, it has," I said. "I ought to be more case-hardened after my Afghan experiences. I saw my own comrades hacked to pieces in Maiwand without losing my nerve."
Holmes: "I can understand. There is a mystery about this which stimulates the imagination; where there is no imagination there is no horror ."
- From Conan Doyle's "a study in scarlet" (bold added by me for emphasis)
comment by Gunnar_Zarncke · 2014-10-08T08:51:16.926Z · LW(p) · GW(p)
Chesterton's fence is the principle that reforms should not be made until the reasoning behind the existing state of affairs is understood. The quotation is from Chesterton’s 1929 book The Thing, in the chapter entitled "The Drift from Domesticity":
In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.” 1)
1) "Taking a Fence Down". American Chesterton Society. Retrieved 21 June 2014.
Prompted by this comment; curiously, this appears to be missing from earlier rationality quotes threads despite some references to the fence around here.
Replies from: fubarobfusco, gjm, BenSix, Richard_Kennaway↑ comment by fubarobfusco · 2014-10-08T16:55:43.608Z · LW(p) · GW(p)
I've seen Chesterton's quote used or misused in ways that assume that an extant fence must have some use that is both ① still existent, and ② beneficial; and that it can only be cleared away if that use is overbalanced by some greater purpose.
But some fences were created to serve interests that no longer exist: Hadrian's Wall, for one. The fact that someone centuries ago built a fence to keep the northern barbarians out of Roman Britain does not mean that it presently serves that purpose. Someone who observed Hadrian's Wall without knowledge of the Roman Empire, and thus the wall's original purpose, might correctly conclude that it serves no current military purpose to England.
For that matter, some fences exist to serve invidious purposes. To say "I don't see the use of this" is often a euphemism for "I see the harm this does, and it does not appear to achieve any counterbalancing benefit. Indeed, its purpose appears to have always been to cause harm, and so it should be cleared away expeditiously."
Replies from: Jiro, shminux, VAuroch, PeerGynt, Azathoth123↑ comment by Jiro · 2014-10-08T23:24:15.924Z · LW(p) · GW(p)
One big problem with Chesterton's Fence is that since you have to understand the reason for something before getting rid of it, if it happens not to have had a reason, you'll never be permitted to get rid of it.
Replies from: fubarobfusco, Jack_LaSota, wadavis↑ comment by fubarobfusco · 2014-10-09T01:52:07.838Z · LW(p) · GW(p)
Good point. Some properties of a system are accidental.
"We don't know why this wall is here, but we know that it is made of gray stone. We don't know why its builders selected gray stone. Therefore, we must never allow its color to be changed. When it needs repair we must make sure to use gray stone."
"But gray stone is now rare in our country and must be imported at great expense from Dubiously Allied Country. Can't we use local tan stone that is cheap?"
"Maybe gray stone suppresses zombie hordes from rising from the ground around the wall. We don't know, so we must not change it!"
"Maybe they just used gray stone because it used to be cheap, but the local supplies are now depleted. We should use cheap stone, as the builders did, not gray stone, which was an accidental property and not a deliberate design."
"Are you calling yourself an expert on stone economics and on zombie hordes, too!?"
"No, I'd just like to keep the wall up without spending 80% of our defense budget on importing stone from Dubiously Allied Country. I'm worried they're using all the money we send them to build scary battleships."
"The builders cared not for scary battleships! They cared for gray stone!"
"But it's too expensive!"
"But zombies!"
"Superstition!"
"Irresponsible radicalism!"
"Aaargh ... just because we don't have the builders here to answer every question about their design doesn't mean that we can't draw our own inferences and decide when to change things that don't make sense any more."
"Are you suggesting that the national defense can be designed by human reason alone, without the received wisdom of tradition? That sort of thinking led to the Reign of Terror!"
↑ comment by Jack_LaSota · 2014-10-09T17:19:23.658Z · LW(p) · GW(p)
That, and for certain kinds of fences, if there is an obvious benefit to taking one down, it's better to just take it down, see what breaks, and maybe put it back if removing it wasn't worth it, than to try to figure out what the fence is for without the ability to experiment.
↑ comment by wadavis · 2014-10-21T20:20:48.987Z · LW(p) · GW(p)
Devil's-advocating that some things are without reason, and that this is an exception to the rule, is a fairly weak straw man.
Not having a reason is a simplification that does not hold up: incompetence, apathy, out-of-date thinking, "because grey was the factory default colour palette" (credit to fubarobfusco) are all reasons. It is a mark of expertise in your field to recognize these reasonless reasons.
Seriously, this happens all the time! Why did that guy driving beside me swerve wildly: is he nodding off, texting, or are there children playing around that blind corner? Why did this specification call for an impossible-to-source part: because the drafter is using European software with European part libraries in North America, or because the design has a tight tolerance and the minor differences between parts matter?
Replies from: Jiro↑ comment by Jiro · 2014-10-21T21:28:52.349Z · LW(p) · GW(p)
Not having a reason is a simplification that does not hold up:
What Chesterton actually said is that he wants to know something's use, and if you read the whole quote it's clear from context that he really does mean what one would consider as a use in the ordinary sense. Incompetence and apathy don't count.
"Not having a reason" is a summary; summaries by necessity gloss over details.
↑ comment by Shmi (shminux) · 2014-10-09T17:54:25.588Z · LW(p) · GW(p)
I've seen Chesterton's quote used or misused in ways that assume that an extant fence must have some use that is both ① still existent, and ② beneficial; and that it can only be cleared away if that use is overbalanced by some greater purpose.
Right, this is indeed a misuse. The intended meaning is obviously that you ought to figure out the original reason for the fence and whether it is still valid before making changes. It's a balance between reckless slash-and-burn and lost purposes. This is basic hygiene in, say, software development, where old undocumented code is everywhere.
Replies from: fubarobfusco↑ comment by fubarobfusco · 2014-10-10T01:55:45.157Z · LW(p) · GW(p)
This is basic hygiene in, say, software development, where old undocumented code is everywhere.
Yep. On the other hand, in well-tested software you can make a branch, delete a source file you think might be unused, and see if all the binaries still build and the tests still pass. If they do, you don't need to know the original reason for that source file existing; you've shown that nothing in the current build depends on it.
This is a bit of a Chinese Room example, though — even though you don't know that the deleted file no longer served any purpose, the tests know it.
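As a minimal sketch of the workflow described above (assuming a git repository with a pytest suite; the file name below is hypothetical), the test suite stands in for the missing knowledge of why the "fence" was built:

```python
import subprocess
import sys

def run(*cmd):
    """Run a command and return its exit code."""
    return subprocess.run(cmd, check=False).returncode

def try_removing(path="src/legacy_report.py"):  # hypothetical file name
    # Work on a throwaway branch so the 'fence' can be restored easily.
    run("git", "checkout", "-b", "remove-suspected-dead-code")
    run("git", "rm", path)
    # The test suite, not anyone's memory of the original author's intent,
    # decides whether anything still depends on the file.
    if run(sys.executable, "-m", "pytest", "-q") == 0:
        print(f"Nothing in the tested behaviour depends on {path}.")
    else:
        run("git", "checkout", "HEAD", "--", path)  # put the fence back
        print("Something broke; the fence is still load-bearing.")
```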
Replies from: shminux, Azathoth123↑ comment by Shmi (shminux) · 2014-10-10T02:36:04.147Z · LW(p) · GW(p)
even though you don't know that the deleted file no longer served any purpose, the tests know it.
Yes, if you solve the Chesterton fence of figuring out why certain tests are in the suite to begin with. Certainly an easier task than with the actual code, but still a task. I recall removing failed (and poorly documented) unit and integration tests I myself had put in a couple of years earlier, without quite recalling why I had thought they were valid test cases.
↑ comment by Azathoth123 · 2014-10-10T02:14:33.715Z · LW(p) · GW(p)
On the other hand, in well-tested software you can make a branch, delete a source file you think might be unused, and see if all the binaries still build and the tests still pass.
Unfortunately, this doesn't work outside software. And even in software most of it isn't well tested.
Replies from: Lumifer↑ comment by VAuroch · 2014-10-08T23:11:59.133Z · LW(p) · GW(p)
I agree that the quote is vague, but I think it's pretty clear how he intended it to be parsed: Until you understand why something was put there in the past, you shouldn't remove it, because you don't sufficiently understand the potential consequences.
In the Hadrian's Wall example, while it's true that the naive wall-removing reformer reaches a correct conclusion, they don't have sufficient information to justify confidence in that conclusion. Yes, it's obviously useless for military purposes in the modern day, but if that's true, why hasn't anyone else removed it? Until you understand the answer to that question (and yes, sometimes it's "because they are stupid"), it would be unwise to remove the wall. And indeed, here, the answer is "it's preserved for its historical value", and so it should be kept.
↑ comment by PeerGynt · 2014-10-08T23:47:01.194Z · LW(p) · GW(p)
But some fences were created to serve interests that no longer exist: Hadrian's Wall, for one. The fact that someone centuries ago built a fence to keep the northern barbarians out of Roman Britain does not mean that it presently serves that purpose. Someone who observed Hadrian's Wall without knowledge of the Roman Empire, and thus the wall's original purpose, might correctly conclude that it serves no current military purpose to England.
At the risk of generalizing from fictional evidence: this line of reasoning falls apart when it turns out that the true reason for the wall is to keep Ice Zombies out of your kingdom. Chesterton would surely have seen the need to be damn sure that the true purpose is to keep the wildlings out before agreeing to reduce the defense at the wall.
↑ comment by Azathoth123 · 2014-10-08T22:49:21.478Z · LW(p) · GW(p)
To say "I don't see the use of this" is often a euphemism for "I see the harm this does, and it does not appear to achieve any counterbalancing benefit. Indeed, its purpose appears to have always been to cause harm, and so it should be cleared away expeditiously."
Um, people generally don't build fences to gratuitously cause harm.
Replies from: Jiro↑ comment by Jiro · 2014-10-08T23:18:57.310Z · LW(p) · GW(p)
Um, people generally don't build fences to gratuitously cause harm.
That's either trivial, or false.
It's trivial if you define "gratuitously cause harm" such that wanting someone else to be harmed always benefits oneself either directly or by satisfying a preference, and that counts as non-gratuitous.
It's false if you go by most modern Westerners' standard of harm.
There was no reason to limit Jews to ghettos in the Middle Ages except to cause harm (in sense 2).
Replies from: Vaniver, Nornagest↑ comment by Vaniver · 2014-10-09T15:35:31.962Z · LW(p) · GW(p)
There was no reason to limit Jews to ghettos in the Middle Ages except to cause harm (in sense 2).
Er, this looks like a great example of not looking things up. Having everyone in a market dominant minority live in a walled part of town is great when the uneducated rabble decides it's time to kill them all and take their things, because you can just shut the gates and man the walls. Consider the Jewish ghettoes in Morocco:
Usually, the Jewish quarter was situated near the royal palace or the residence of the governor, in order to protect its inhabitants from recurring riots.
Replies from: Jiro
↑ comment by Jiro · 2014-10-10T01:14:15.216Z · LW(p) · GW(p)
When you tell people to look things up, be sure you first looked it up correctly yourself. That link says that ghettoes were used to protect Jews in the manner you describe. It does not say that that is why ghettoes were created.
Replies from: Jiro↑ comment by Jiro · 2014-10-10T14:09:10.500Z · LW(p) · GW(p)
Since I lost karma for that, I'd better elaborate. Your specific quoted line shows that protection was the reason for the ghetto's placement, given that they were going to have one. It does not say that protection was the reason for having a ghetto.
Your own link says that "Jewish ghettoes in Europe existed because Jews were viewed as alien due to their non-Christian beliefs in a Christian environment". The only mention that is anything like what you claim is halfway down the page, has no reference, does not name the location of the ghetto, and neither 1) says whether Jews could live only there nor 2) if so, gives a reason for why they were prevented from living anywhere else.
Replies from: Vaniver↑ comment by Vaniver · 2014-10-10T17:56:42.587Z · LW(p) · GW(p)
That link says that ghettoes were used to protect Jews in the manner you describe. It does not say that that is why ghettoes were created.
It seems to me that we should separate the claim that the actual historical motivation of creating ghettoes was to cause harm to Jews, and the claim that there was no reason to make them besides causing harm to Jews. If there is one reason that Jews benefit from living separately from Christians or Muslims, then we can't make the second argument.
But I don't think we can make the first argument, because we can't generalize across all Jewish quarters. In some cities, the rulers had to establish an exclusive zone for Jews in order to attract the Jews to move in, which suggests to me that this is a thing that Jews actively wanted. It makes sense that they would: notice that a function of many Jewish religious practices is to exclude outsiders and make it more likely for Jews to marry other Jews. Given the fact that Jews were on average wealthier than the local population and wealth played a part in how many of your grandchildren would survive to reproductive age, that's not just raw ingroup preference. (Indeed, Jews moving from a city where a Jew-hating ruler had set up a ghetto to keep them separate might ask a Jew-loving ruler to set them up a ghetto, because they noticed all the good things that a ghetto got them and thought they were worth the costs.)
As for whether or not people voluntarily choose to segregate themselves, consider, say, Chinatowns in the US. Many might have been caused by soft (or hard) restrictions on where Asians could live, but I imagine that most residents stay in them now because they prefer living around people with the same culture, having access to a Chinese-language newspaper, and so on.
Replies from: Jiro↑ comment by Jiro · 2014-10-13T06:21:35.399Z · LW(p) · GW(p)
Notice what I said: to limit Jews to ghettoes. Voluntary segregation and creating Jewish areas to attract Jews does not limit Jews to ghettoes. In general, creating ghettoes to benefit Jews is not a reason to limit them to ghettoes. Furthermore, since I was using ghettoes as a counterexample, even if I had not phrased it that way voluntary segregation still wouldn't count, because in order to have a counterexample it only need be true that some ghettoes were created to harm Jews, even if others were not.
Replies from: army1987↑ comment by A1987dM (army1987) · 2014-10-13T06:49:18.416Z · LW(p) · GW(p)
have a counterexample
Azathoth123 said that people generally don't build fences to gratuitously cause harm, not that they never ever do.
Replies from: Jiro↑ comment by Jiro · 2014-10-13T07:56:45.242Z · LW(p) · GW(p)
The word "generally" in there is another of those things which makes a statement true and trivial at the same time. For one thing, it depends on how you count the fences (When you have a fence about not being a gay male and another about not being a lesbian, does that count as one or two fences?)
A more reasonable interpretation is to take "generally" as a qualifier for how wide the support is for the fence rather than for how common such fences are among the population of all fences--that is, there aren't fences with wide support, the majority of whose supporters wish to cause harm. "Mandatory ghettoes" are indeed a counterexample to the statement when read that way.
↑ comment by Nornagest · 2014-10-09T00:37:55.738Z · LW(p) · GW(p)
There was no reason to limit Jews to ghettos in the Middle Ages except to cause harm (in sense 2).
The medieval allegations against Jews were so persistent and so profoundly nasty that they constitute a genre of their own; we still use the phrase "blood libel". It seems plausible that some of the people responsible for the ghetto laws believed them.
They were entirely wrong, of course, but by the same token it may well turn out that Chesterton's fence was put there to keep out chupacabras. That still counts as knowing the reason for it.
Replies from: Jiro↑ comment by Jiro · 2014-10-09T01:57:03.355Z · LW(p) · GW(p)
That falls under case 1. It is always possible (given sufficient knowledge) to answer "why did X do Y?". The answer can then be called a reason, so in a trivial sense, every action is done for a reason.
Normally, "did they do it for a reason" means asking if they did it for a reason that is not just based on hatred or cognitive bias. Were blacks forced to use segregated drinking fountains for a "reason" within the meaning of Chesterton's fence?
Replies from: Nornagest↑ comment by Nornagest · 2014-10-09T02:15:53.055Z · LW(p) · GW(p)
That falls under case 1.
No, I don't think it does. We can consider that particular cases of what we now see as harm may have been inspired by bias or ignorance or mistaken premises without thereby concluding that every case must have similar inspirations. Sometimes people really are just spiteful or sadistic. This just isn't one of those times.
It seems clear to me, though, that Chesterton doesn't require the fence to have originally been built for a good reason. Pure malice doesn't strike me as a likely reason unless it's been built up as part of an ideology (and that usually takes more than just malice), but cognitive bias does; how many times have you heard someone say "it seemed like a good idea at the time"?
↑ comment by gjm · 2014-10-10T00:48:43.929Z · LW(p) · GW(p)
Has been posted before, more than once.
↑ comment by BenSix · 2014-10-10T00:43:18.103Z · LW(p) · GW(p)
It strikes me that one might simply presume the worst of whoever put up the fence. It was a farmer, for example, with a malicious desire to keep hill-walkers from enjoying themselves. I would extend the principle of Chesterton’s fence, then, to Chesterton’s farm: one should take care to assess the possible uses that it might have served for the whole institution around it as well as the motives of the man.
↑ comment by Richard_Kennaway · 2014-10-08T10:33:51.433Z · LW(p) · GW(p)
It has appeared before, twice. Maybe it should have a Wiki article here.
Replies from: Gunnar_Zarncke↑ comment by Gunnar_Zarncke · 2014-10-08T20:46:10.966Z · LW(p) · GW(p)
Appears every two years... when the old quotes are too far down in the search results, I guess.
Maybe it should have a Wiki article here.
comment by James_Miller · 2014-10-02T01:59:57.259Z · LW(p) · GW(p)
Germany’s plans in the event of a two front war [WW I] were the results of years of study on the part of great soldiers, the German General Staff. That those plans failed was not due to any unsoundness on the part of the plans, but rather due to the fact that the plans could not be carried out by the field armies.
An official Army War College publication, 1923
While reverse stupidity isn't intelligence, learning how others rationalize failure can help us recognize our own mistakes.
Edited to reflect hydkyll's comment.
Replies from: hydkyll, shminux, ChristianKl↑ comment by hydkyll · 2014-10-03T13:30:59.364Z · LW(p) · GW(p)
How do you know it's a German Army War College publication? Reasons for my doubt:
"Ellis Bata" doesn't sound at all like a German name.
There was no War College in Germany in 1923. There were some remains of the Prussian Military Academy, but the Treaty of Versailles forbade work being done there. The academy wasn't reactivated until 1935.
The academy in Prussia isn't usually called "Army War College". However, there are such academies in Japan, India and the US.
↑ comment by James_Miller · 2014-10-03T22:42:26.013Z · LW(p) · GW(p)
The link is from Strategy Page. I have listened to a lot of their podcasts and greatly respect them.
Replies from: gjm↑ comment by gjm · 2014-10-04T14:49:05.686Z · LW(p) · GW(p)
But the link doesn't say it was from a German Army War College publication. It just says "In an official Army War College publication". All hydkyll's reasons for thinking it likely to be from another country seem strong to me.
Replies from: James_Miller↑ comment by James_Miller · 2014-10-04T17:07:13.313Z · LW(p) · GW(p)
You are right, I added "German" for clarity because I assumed it was true given the context then forgot I had done this. Sorry.
↑ comment by Shmi (shminux) · 2014-10-02T17:33:02.140Z · LW(p) · GW(p)
This is a common failure mode, where the risk analysis is ignored completely. Falling in love with a perfect plan happens all the time in industry. Premortem analysis was not a thing back then, and is exceedingly rare still.
↑ comment by ChristianKl · 2014-10-02T11:26:48.978Z · LW(p) · GW(p)
The context in which the sentence stands is that, around that time, there was a belief that the German army had counted on being supported by other German institutions, and that those institutions failed the army rather than supporting it.
This is commonly known as the stab-in-the-back myth. "Myth", because the winners of WWII wrote our history books. There's nothing inherently irrational about that sentiment, even though it might have been wrong.
It's not about blaming the troops. If something seems so stupid that it doesn't make sense to you, it might be that the problem is on your own end.
Replies from: NancyLebovitz, DanArmak, chaosmage↑ comment by NancyLebovitz · 2014-10-02T13:42:41.001Z · LW(p) · GW(p)
I read the quote to mean that it's silly to claim that a plan is perfect when it's actually unworkable.
Replies from: James_Miller, taelor↑ comment by James_Miller · 2014-10-02T14:18:30.181Z · LW(p) · GW(p)
This is my interpretation, similar to a teacher saying he gave a great lecture that his students were not smart enough to understand.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-10-02T15:16:34.398Z · LW(p) · GW(p)
Given German thought at the time I find that unlikely.
The author could have written: "We lost the war because Jews, Social Democrats and Communists stabbed us in the back, and not because we didn't have a good plan to fight two sides at once." He isn't that direct, but it's still the most reasonable reading for someone who writes that sentence in 1923 at a military academy in Germany.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2014-10-02T18:47:22.722Z · LW(p) · GW(p)
I don't think I said what I meant, which is that the quote is a good example of irrational thinking.
Replies from: Lumifer↑ comment by Lumifer · 2014-10-02T19:17:57.437Z · LW(p) · GW(p)
ChristianKl's point is that this quote is a good example of coded language (aka a dogwhistle), and while it looks irrational on the surface, it's likely that it means "That those plans failed was not due to any unsoundness on the part of the plans, but rather due to the fact that we were betrayed".
Replies from: johnlawrenceaspden↑ comment by johnlawrenceaspden · 2014-10-04T13:25:44.320Z · LW(p) · GW(p)
Or it could be read ironically. It would be hard for anyone to disagree with it without looking bad, allowing the writer to say what he really thought (as in Atheism Conquered).
↑ comment by taelor · 2014-10-02T19:19:19.172Z · LW(p) · GW(p)
Of note, Alfred von Schlieffen, the architect of the original deployment plan for war against France, was on record as recommending a negotiated peace in the event that the German Army fail to quickly draw the French into a decisive battle. Obviously, this recommendation was not followed. Also of note, Schlieffen's plan was explicitly for a one-front war; the bit with the Russians was hastily tacked on by Schlieffen's successors at the General Staff.
↑ comment by DanArmak · 2014-10-02T20:52:28.074Z · LW(p) · GW(p)
No plans were made for a war even one year long (although highly placed individuals had their doubts and are now widely quoted about it). No German (or other) plans which existed at the start of WW1 were relevant to the way the war ended many years later. Conversely, whatever accusations were made about betrayal in the later years of the war were clearly irrelevant to the way those plans played out in 1914 when all Germans were united behind the war effort, including Socialists.
↑ comment by chaosmage · 2014-10-02T14:00:36.017Z · LW(p) · GW(p)
While you're right, this all happened after Bismarck and the pre-WWI German government had put a lot of effort into avoiding a two-front war because they did not share the General Staff's optimism about being able to handle it. So this constitutes failing to admit losing a very high stakes bet, and does seem inherently irrational.
Replies from: James_Miller↑ comment by James_Miller · 2014-10-03T00:26:03.535Z · LW(p) · GW(p)
My impression is that the German military was never optimistic concerning winning vs England, France, and Russia. Those that claimed WWI was deliberately initiated by Germany, however, had to falsely claim that the German military was optimistic.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2014-10-03T17:41:13.236Z · LW(p) · GW(p)
Is it plausible that the German politicians ignored the German military?
Replies from: James_Miller↑ comment by James_Miller · 2014-10-03T22:53:36.569Z · LW(p) · GW(p)
It's theoretically plausible, but from my understanding of WWI once the Russians mobilized the Germans justifiably believed that they either had to fight a two front war or allow the Russians to get into a position that would have made it extremely easy for Russia+France to conquer Germany.
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2014-10-04T14:31:04.622Z · LW(p) · GW(p)
Right. The 'Blank Check' was the major German diplomatic screwup. Once the Austro-Hungarian Empire issued its ultimatum, they were utterly stuck.
Replies from: James_Miller↑ comment by James_Miller · 2014-10-04T16:15:12.998Z · LW(p) · GW(p)
Agreed, although further German diplomatic errors contributed to England going against them. What they should have done is offer to let England take possession of the German fleet in return for England not fighting Germany and protecting Germany's trade routes.
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2014-10-05T00:37:16.050Z · LW(p) · GW(p)
Ummmmm. That seems rather drastic, and would go over like something that doesn't go over.
Replies from: Protagoras↑ comment by Protagoras · 2014-10-05T03:02:20.577Z · LW(p) · GW(p)
Indeed. A more plausible alternative strategy for Germany would be to forget the invading Belgium plan, fight defensively on the western front, and concentrate their efforts against Russia at the beginning. Britain didn't enter the war until the violation of Belgian neutrality. Admittedly, over time French diplomats might have found some other way to get Britain into the war, but Britain was at least initially unenthusiastic about getting involved, so I think Miller is on the right track in thinking Germany's best hope was to look for ways to keep Britain out indefinitely.
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2014-10-05T06:42:53.854Z · LW(p) · GW(p)
concentrate their efforts against Russia at the beginning.
Eh, with perfect hindsight, maybe. The thing about Russia is, it has often been possible to inflict vast defeats on its armies in the field; but how do you knock it out of a war? Sure, in the Great War it did happen eventually - but the Germans weren't planning on multiple years of war that would stretch societies past their breaking point. (For that matter, in 1917 Germany was itself feeling the strain; it's called the "Turnip Winter" for a reason.) There were vast slaughters and defeats on the Eastern Front, true; but the German armies were never anywhere near Moscow - not even after the draconian peace signed at Brest-Litovsk. The German staff presumably didn't think there was any chance of getting a reasonably quick decision in Russia.
Do note, when a different German leader made the opposite assumption, "it is only a question of kicking in the door, and the whole rotten structure will come tumbling down"... that didn't go so well either; and he didn't even have a Western front to speak of. It seems to me that Germany's "problems" in 1914 just didn't have a military solution; I put problems in scare quotes because they did have the excellent peaceful solution of keeping your mouth shut and growing the economy. It's not as though France was going to start anything.
Replies from: James_Miller↑ comment by James_Miller · 2014-10-05T18:17:22.350Z · LW(p) · GW(p)
It's not as though France was going to start anything.
Not by itself, but France was very willing to support Russian aggression against the central powers.
comment by Tyrrell_McAllister · 2014-10-01T23:06:18.477Z · LW(p) · GW(p)
The characteristic feature of all ethics is to consider human life as a game that can be won or lost and to teach man the means of winning.
Simone de Beauvoir, The Ethics of Ambiguity, Part I (trans. by Bernard Frechtman).
Cf. Rationality is Systematized Winning and Rationality and Winning.
comment by James_Miller · 2014-10-15T15:53:43.922Z · LW(p) · GW(p)
What if the polls prove to have no bias? Our model shows Republicans as about 75 percent likely to win a Senate majority. This may seem confusing: Doesn’t the official version of FiveThirtyEight’s model have Republicans as about 60 percent favorites instead? Yes, but some of the 40 percent chance it gives Democrats reflects the possibility that the polls will have a Republican bias. If the polls were guaranteed to be unbiased, that would make Republicans more certain of winning.
Replies from: AshwinV
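A toy illustration of the distinction the quoted passage is drawing, with made-up numbers rather than FiveThirtyEight's actual model: the headline forecast averages over possible poll bias, while the 75 percent figure conditions on the polls being unbiased.

```python
# Illustrative scenario weights and conditional win probabilities (assumed, not real data).
scenarios = {
    # bias scenario: (P(scenario), P(R Senate majority | scenario))
    "polls biased toward R": (0.25, 0.30),
    "polls unbiased":        (0.50, 0.75),
    "polls biased toward D": (0.25, 0.80),
}

marginal = sum(p * win for p, win in scenarios.values())
conditional = scenarios["polls unbiased"][1]
print(f"marginal P(R majority)        = {marginal:.2f}")    # ~0.65 with these numbers
print(f"P(R majority | unbiased poll) = {conditional:.2f}")  # 0.75
```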
comment by Aiyen · 2014-10-05T21:02:02.550Z · LW(p) · GW(p)
"If we take everything into account — not only what the ancients knew, but all of what we know today that they didn't know — then I think that we must frankly admit that we do not know. But, in admitting this, we have probably found the open channel."
Richard Feynman, "The Value of Science," public address at the National Academy of Sciences (Autumn 1955); published in What Do You Care What Other People Think (1988); republished in The Pleasure of Finding Things Out: The Best Short Works of Richard P. Feynman (1999) edited by Jeffrey Robbins.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-10-06T08:29:20.844Z · LW(p) · GW(p)
I found the "open channel" metaphor obscure from just the quote, and found some context. The open channel is a contrast to the blind alley of seizing on a single belief that may be wrong.
I noticed that later in the passage, he says:
It is our responsibility to leave the men of the future with a free hand. In the impetuous youth of humanity, we can make grave errors that can stunt our growth for a long time. This we will do if we, so young and ignorant, say we have the answers now, if we suppress all discussion, all criticism, saying, 'This is it, boys! Man is saved!' Thus we can doom man for a long time to the chains of authority, confined to the limits of our present imagination. It has been done so many times before.
This doesn't sit well with dreams of making a superintelligent FAI that will be the last invention we ever need make, after which we will have attained the perfect life for everyone always.
Replies from: Vaniver, Aiyen↑ comment by Vaniver · 2014-10-06T17:57:59.879Z · LW(p) · GW(p)
This doesn't sit well with dreams of making a superintelligent FAI that will be the last invention we ever need make, after which we will have attained the perfect life for everyone always.
Indeed, but it does agree with the argument for the importance of not getting AI wrong in a way that does chain the future.
↑ comment by Aiyen · 2014-10-06T20:40:27.839Z · LW(p) · GW(p)
It sits well with FAI, but poorly with assuming that FAI will instantly or automatically make everything perfect. The warning is against assuming a particular theory must be true, or a particular action must be optimal. Presumably this is good advice for the AI as well, at least as it is "growing up" (recursively self-improving).
comment by Dan_Moore · 2014-10-14T20:16:35.286Z · LW(p) · GW(p)
Thankfully, they have ways of verifying historical facts so this [getting facts wrong] doesn't happen too much. One of them is Bayes' Theorem, which uses mathematical formulas to determine the probability that an event actually occurred. Ironically, the method is even useful in the case of Bayes' Theorem itself. While most people attribute it to Thomas Bayes (1701 - 1761), there are a significant number who claim it was discovered independently of Bayes - and some time before him - by a Nicholas Saunderson. This gives researchers the unique opportunity to use Bayes' Theorem to determine who came up with Bayes' Theorem. I love science.
John Cadley, Funny You Should Say That - Toastmaster magazine
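For what it's worth, the joke has a straightforward mechanical form. A toy calculation of the attribution question (priors and likelihoods invented for illustration, not real historical scholarship):

```python
# Hypotheses: Thomas Bayes vs. Nicholas Saunderson discovered the theorem first.
prior = {"Bayes": 0.7, "Saunderson": 0.3}          # assumed priors
likelihood = {"Bayes": 0.6, "Saunderson": 0.2}     # assumed P(surviving evidence | hypothesis)

evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}
print(posterior)  # {'Bayes': 0.875, 'Saunderson': 0.125} with these made-up numbers
```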
comment by 27chaos · 2014-10-05T20:05:34.326Z · LW(p) · GW(p)
Slytherin, the hat had almost put him in, and his similarity to Slytherin's heir Riddle himself had commented on. But he was beginning to think this wasn't because he had "un-Gryffindor" qualities that fit only in Slytherin, but because the two houses - normally pictured as opposites - were in some fundamental ways quite similar.
Ravenclaws in battle, he had no doubt, would coolly plan the sacrifice of distant strangers to achieve an important objective, though that cold logic could collapse in the face of sacrificing family instead. Hufflepuffs would sacrifice no one, though it means they sacrifice an objective in its place.
Only Gryffindors and Slytherins were good at sacrificing those they loved.
But with one friend who had lost weeks to the hospital wing and who could so easily have lost her life instead, with another mourning..., with himself going into battles he barely survived, and making decisions he should not have to make, he dreaded what they might be called upon to sacrifice next.
And he decided: he would do much, to see that it did not happen.
From Myst Shadow's excellent fanfiction, Forging the Sword.
Replies from: elharo↑ comment by elharo · 2014-10-05T22:07:15.292Z · LW(p) · GW(p)
I understand the sentiment and why it's quoted. In fanboy mode though, I think Gryffindor and Ravenclaw are reversed here. I.e. a Gryffindor might sacrifice themself, but would not sacrifice a friend or loved one. They would insist that there must be a better way, and strive to find it. In fiction (as opposed to real life) they might even be right.
The Ravenclaw is the one who does the math, and sacrifices the one to save the many, even if the one is dear to them. More realistically, the Ravenclaw is the effective altruist who sees all human life as equally valuable, and will spend their money where it can do the most good, even if that's in a far away place and their money helps only people they will never meet. A Ravenclaw says the green children being killed by our blue soldiers are just as deserving of life as our own blue children; and a Ravenclaw will say this even when he or she personally feels far more attached to blue children. The Ravenclaw is the one who does not reject the obvious implications of clear logic, just because they are unpopular at rallies to support the brave blue soldiers.
comment by Azathoth123 · 2014-10-04T19:49:03.941Z · LW(p) · GW(p)
Nobody panics when things go "according to plan." Even if the plan is horrifying! If, tomorrow, I tell the press that, like, a gang banger will get shot, or a truckload of soldiers will be blown up, nobody panics, because it's all "part of the plan". But when I say that one little old mayor will die, well then everyone loses their minds!
-- Joker, The Dark Knight
[T]here are several references to previous flights; the acceptance and success of these flights are taken as evidence of safety. But erosion and blowby are not what the design expected. They are warnings that something is wrong. The equipment is not operating as expected, and therefore there is a danger that it can operate with even wider deviations in the unexpected and not thoroughly understood way. The fact that this danger did not lead to catastrophe before is no guarantee that it will not the next time, unless it is completely understood. (..) The origin and consequences of the erosion and blowby were not understood. Erosion and blowby did not occur equally on all flights or in all joints: sometimes there was more, sometimes less. Why not sometime, when whatever conditions determined it were right, wouldn't there be still more, leading to catastrophe?
Richard Feynman, Appendix F: Personal Observations on the Reliability of the Shuttle
comment by dspeyer · 2014-10-02T15:48:49.437Z · LW(p) · GW(p)
One of the things about the online debate over e-piracy that particularly galled me was the blithe assumption by some of my opponents that the human race is a pack of slavering would-be thieves held (barely) in check by the fear of prison sentences.
Oh, hogwash.
Sure, sure - if presented with a real "Devil's bargain," most people will at least be tempted. Eternal life. . . a million dollars found lying in the woods. . .
Heh. Many fine stories have been written on the subject! But how many people, in the real world, are going to be tempted to steal a few bucks?
-- Introducing the Baen Free Library, Eric Flint
(Which I can no longer find at Baen, but copies are scattered across the internet, including here)
Replies from: Salemicus, Larks↑ comment by Salemicus · 2014-10-03T09:38:09.048Z · LW(p) · GW(p)
How many people, in the real world, are going to be tempted to steal a few bucks?
Quite a lot, in my experience. I've seen so many well-paid people fired for fiddling their expenses over trivial amounts. Eric Flint, as befits a fiction author, makes a rhetorically compelling case though!
Replies from: ChristianKl↑ comment by ChristianKl · 2014-10-03T09:41:23.452Z · LW(p) · GW(p)
Even more people take papers or pens home from their workplace and don't get punished for it.
Replies from: gjm↑ comment by gjm · 2014-10-03T12:52:17.305Z · LW(p) · GW(p)
Quite right, too.
Being able to take paper and pens home from the workplace in order to work there is clearly useful and beneficial to the business. It's plainly not worth a business's time to track such things punctiliously unless its employees are engaging in large-scale pilfering (e.g., selling packs of printer paper), because the losses are so small. It's plainly not worth an employee's time to track them either, for the same reason. (And similarly not worth an employee's time worrying about whether s/he has brought papers or pens into work from home and left them there.)
The optimal policy is clearly for no one to worry about these things except in cases of large-scale pilfering.
(In large businesses it may be worth having a formal rule that just says "no taking things home from the office" and then ignoring small violations, because that makes it feasible to fight back in cases of large-scale pilfering without needing a load of lawyering over what counts as large-scale. Even then, the purpose of that rule should be to prevent serious violations and no one should feel at all guilty about not keeping track of what paper and pens are whose. I suspect the actual local optimum in this vicinity is to have such a rule and announce explicitly that no one will be looking for, or caring about, small benign violations. But that might turn out to spoil things legally in the rare cases where it matters.)
Lest I be thought self-serving, I will remark that I'm pretty sure my own net flux of Stuff is very sizeably into, not out of, work.
Replies from: William_Quixote, VAuroch↑ comment by William_Quixote · 2014-10-03T13:34:48.189Z · LW(p) · GW(p)
This post is right on the money. Transaction costs are real and often wind up being deceptively higher than you anticipate.
↑ comment by VAuroch · 2014-10-06T04:13:55.913Z · LW(p) · GW(p)
I suspect the actual local optimum in this vicinity is to have such a rule and announce explicitly that no one will be looking for, or caring about, small benign violations. But that might turn out to spoil things legally in the rare cases where it matters.
Including legal concerns, the local optimum is probably officially stating that response will be proportional to seriousness of the 'theft', with a stated possible maximum. This essentially dog-whistles that small items are free to take, without giving an explicit pass.
A better optimum might be what some tech company (I thought Twitter but can't find my source) did when it changed its policy on expense accounts for travel/food/etc. to 'use this toward the best interests of the company', with significant positive results. But some of the incentives there (in-house travel-agent arrangements are grotesquely inefficient) are missing here.
Replies from: gjm↑ comment by Larks · 2014-10-04T02:31:08.623Z · LW(p) · GW(p)
How is this a rationality quote? I can see people thinking this is a good argument, especially if they politically agree with the author, but it doesn't seem to be about rationality, or to demonstrate an unusually great deal of rationality.
Replies from: sketerpot, dspeyer↑ comment by sketerpot · 2014-10-04T02:48:19.054Z · LW(p) · GW(p)
It would definitely be a rationality quote if it went on to quote the part where Eric Flint decided to test his hypothesis by putting some of his books online, for free, and watching his sales numbers.
Replies from: DanielLC↑ comment by DanielLC · 2014-10-04T17:20:11.532Z · LW(p) · GW(p)
Does he say what the results were anywhere?
Replies from: dspeyer↑ comment by dspeyer · 2014-10-06T21:08:59.809Z · LW(p) · GW(p)
Huge success. Sales jumped up in ways that are hard to explain as anything other than the free library's effect.
↑ comment by dspeyer · 2014-10-06T21:14:17.171Z · LW(p) · GW(p)
It expresses two ideas:
- Reduction to incentives is such a useful hammer that it's tempting to think of the world as homo economicus nails. Like all simplified models, that can be useful, but it can also be dangerously wrong.
- It isn't very much information to say that people have a price. The real information lies in what that price is. It may be true to say "people are dishonest", but if you want to win, you need to specify which people and how dishonest.
comment by ChristianKl · 2014-11-09T13:31:08.975Z · LW(p) · GW(p)
If you kick a ball, about the most interesting way you can analyze the result is in terms of the mechanical laws of force and motion. The coefficients of inertia, gravity, and friction are sufficient to determine its reaction to your kick and the ball's final resting place, even if you can 'bend it like Beckham'. But if you kick a large dog, such a mechanical analysis of vectors and resultant forces may not prove as salient as the reaction of the dog as a whole. Analyzing individual muscles biomechanically likewise yields an incomplete picture of human movement experience.
Thomas W. Myers in Anatomy Trains - Page 3
comment by johnlawrenceaspden · 2014-10-04T12:06:33.283Z · LW(p) · GW(p)
"What you can do, or dream you can do, begin it! / Boldness has genius, power and magic in it."
-- John Anster in a "very free translation" of Faust from 1835. (http://www.goethesociety.org/pages/quotescom.html)
comment by Salemicus · 2014-10-03T16:59:44.744Z · LW(p) · GW(p)
Time is precious, but truth is more precious than time.
Replies from: johnlawrenceaspden
↑ comment by johnlawrenceaspden · 2014-10-04T13:29:54.077Z · LW(p) · GW(p)
In what units?
Replies from: kpreid↑ comment by kpreid · 2014-10-22T02:54:53.519Z · LW(p) · GW(p)
Choice of units does not change relative magnitudes.
Replies from: johnlawrenceaspden↑ comment by johnlawrenceaspden · 2014-10-23T14:01:45.958Z · LW(p) · GW(p)
Quite.
comment by Salemicus · 2014-10-20T17:15:54.677Z · LW(p) · GW(p)
The fact that censorship is progressivism’s default position regarding so many things is evidence of progressives’ pessimism about the ability of their agenda to advance under a regime of robust discussion.
George Will, writing in the Washington Post.
The quote illustrates rationality with a particular example from a political subject, which we all know can be mind-killing. For the avoidance of doubt, I would therefore note that the lesson in rationality from the quote applies equally to anyone, regardless of their politics, who is keen to censor discussions.
Replies from: Manfred, ChristianKl↑ comment by Manfred · 2014-10-20T19:27:42.638Z · LW(p) · GW(p)
I disagree with the object level of this quote. Censorship can achieve multiple goals, and a lack of censorship does not necessarily imply "a regime of robust discussion."
Examples of the first would be using the censorship itself as the action (e.g. a despot censoring religious minorities doesn't just limit discussion, it's an active method of subjugation), or protecting people from messages with annoying content or form (e.g. regulations on advertising).
The second is nearly a human universal, but is especially clear in propaganda situations - if we're at war, and someone is spreading slanderous enemy propaganda, and I destroy their materials and arrest them, this is censorship. But it also increases the robustness of discussion, because they were trying to inject falsehoods into the discussion. Or for another example - sometimes you have to ban trolls from your message board.
I also dislike the implications of this quote for any discussion where it shows up. Sometimes ad hominem arguments are right. But they're almost never productive, especially when cast in such general terms.
Replies from: AndHisHorse↑ comment by AndHisHorse · 2014-10-20T23:16:26.337Z · LW(p) · GW(p)
I wouldn't say that it's an ad hominem quote. I disagree with the premise - that censorship is a "default position regarding so many things" within progressivism - but I think that the link between censorship as a default position and a fear of the survivability-under-discussion of one's own ideas is a rationally visible one. Unlike a typical example of an ad hominem attack, the undesirable behavior (fiat elimination of competing ideas as a default response) is related to the conclusion (that the individual is afraid of the effects of competing ideas). It's oversimplified, but one can say only so much in a short quip.
Replies from: Manfred↑ comment by Manfred · 2014-10-21T00:33:21.927Z · LW(p) · GW(p)
Would the term "genetic argument" be better, do you think? Fewer emotional associations, certainly :P
Anyhow, what I meant to indicate is arguments of the form "Person or group X's argument is wrong because X has trait Y." Example: "Rossi's claims of fusion are wrong because he's been shown to be a fraud before." fits this category. Rather than examining any specific argument, we are taking it "to the man."
And I agree that these arguments can absolutely be valid. But if there is any kind of emotionally-charged disagreement, then not only is making this sort of "rhetorical move" not going to help you persuade anybody (it can be fine as a way to preach to the choir of course), but if someone presents an argument like this to you, you should give it much less credence than if you were discussing a trivial matter. I think "fallacy" can also mean a knife that people often cut themselves with.
↑ comment by ChristianKl · 2014-10-20T23:31:37.270Z · LW(p) · GW(p)
The fact that I use knives and forks to eat my meal isn't evidence that I'm pessimistic about my ability to eat my meal without knives and forks. It's just evidence that I consider those tools useful.
comment by Fluttershy · 2014-10-05T04:57:35.619Z · LW(p) · GW(p)
"Dammit, Roselyn, you've done enough. If you keep putting it off, you could end up in a desperate place one day!" Caprice nuzzled Roselyn's leg. "You always act as if you are trying to make up for something. If you'd just take the serum, you'd find that Celestia forgives humans. She knows humans can't help what they are." Caprice looked deeply into Roselyn's eyes, and Roselyn felt that somehow Caprice was speaking from personal experience. "Just do it, Ros. Run and grab a cup and then get on the boat. It's alright. You have my permission."
-Jennifer Reitz, 27 Ounces, Final Chapter.
Context: Taking ponification serum increases the expected value of one's lifespan by 300 years, though Roselyn is averse to taking the serum, because she feels that existing in human form serves as penitence for wrongs she has committed in the past. There is a tenuous connection between Roselyn's act of procrastinating on taking the ponification serum, and the practice of cryocrastinating in real life.
Edit: I'm sorry that nopony liked the above quote; my intent in posting it was to cheer for the sentiment that living for a long time is a good thing. I, um, guess that I did a bad job, sorry. I will leave the text of my original post unedited, so that everypony can see what I originally wrote.
comment by ChristianKl · 2014-10-16T21:53:42.858Z · LW(p) · GW(p)
Truth is the invention of a liar
Heinz von Foerster (he founded the Biological Computer Lab in 1958)