Rationality Quotes May 2012
post by OpenThreadGuy · 2012-05-01T23:37:37.317Z · LW · GW · Legacy · 694 comments
Here's the new thread for posting quotes, with the usual rules:
- Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself
- Do not quote comments/posts on LW/OB
- No more than 5 quotes per person per monthly thread, please.
Comments sorted by top scores.
comment by Grognor · 2012-05-02T03:42:19.258Z · LW(p) · GW(p)
Tags like "stupid," "bad at __", "sloppy," and so on, are ways of saying "You're performing badly and I don't know why." Once you move it to "you're performing badly because you have the wrong fingerings," or "you're performing badly because you don't understand what a limit is," it's no longer a vague personal failing but a causal necessity. Anyone who never understood limits will flunk calculus. It's not you, it's the bug.
-celandine13 (Hat-tip to Frank Adamek. In addition, the linked article is so good that I had trouble picking something to put in rationality quotes; in other words, I recommend it.)
Replies from: fubarobfusco, MBlume, None, jooyous, kdorian, tgb
↑ comment by fubarobfusco · 2012-05-05T21:04:29.161Z · LW(p) · GW(p)
Another quote from the same piece, just before that para:
Once you start to think of mistakes as deterministic rather than random, as caused by "bugs" (incorrect understanding or incorrect procedures) rather than random inaccuracy, a curious thing happens.
You stop thinking of people as "stupid."
I really, really like this. Thanks for posting it!
To elucidate the "bug model" a bit, consider "bugs" not in a single piece of software, but in a system. The following is drawn from my professional experience as a sysadmin for large-scale web applications, but I've tried to make it clear:
Suppose that you have a web server; or better yet, a cluster of servers. It's providing some application to users — maybe a wiki, a forum, or a game. Most of the time when a query comes in from a user's browser, the server gives a good response. However, sometimes it gives a bad response — maybe it's unusually slow, or it times out, or it gives an error or an incomplete page instead of what the user was looking for.
It turns out that if you want to fix these sorts of problems, considering them merely to be "flakiness" and stopping there is not enough. You have to actually find out where the errors are coming from. "Flaky web server" is an aggregate property, not a simple one; specifically, it is the sum of all the different sources of error, slowness, and other badness — the disk contention; the database queries against un-indexed tables; the slowly failing NIC; the excess load from the web spider that's copying the main page ten times a second looking for updates; the design choice of retrying failed transactions repeatedly, thus causing overload to make itself worse.
There is some fact of the matter about which error sources are causing more failures than others, too. If 1% of failed queries are caused by a failing NIC, but 90% are caused by transactions timing out due to slow database queries to an overloaded MySQL instance, then swapping the NIC out is not going to help much. And two flaky websites may be flaky for completely unrelated reasons.
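The relative shares of the error sources can be made concrete in a few lines. As a sketch (the failure categories and counts below are invented for illustration), tallying failed requests by diagnosed root cause shows why "flaky" decomposes into fixable parts:

```python
from collections import Counter

# Hypothetical tally of failed requests, already classified by root cause
# (categories and counts invented for illustration).
failures = Counter({
    "db_query_timeout": 900,  # un-indexed tables on an overloaded MySQL
    "retry_storm": 60,        # failed transactions retried, worsening overload
    "spider_overload": 30,    # a web spider hammering the main page
    "failing_nic": 10,        # the slowly failing network card
})

total = sum(failures.values())
for cause, count in failures.most_common():
    print(f"{cause}: {count / total:.0%} of failures")

# Swapping out the NIC addresses only 1% of the failures; the slow
# database queries account for 90%, so that is where fixing should start.
```

The aggregate number ("1000 failures") compares two servers; only the breakdown tells you what to change.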
Talking about how flaky or reliable a web server is lets you compare two web servers side-by-side and decide which one is preferable. But by itself it doesn't let you fix anything. You can't just point at the better web server and tell the worse one, "Why can't you be more like your sister?" — or rather, you can, but it doesn't work. The differences between the two do matter, but you have to know which differences matter in order to actually change things.
To bring the analogy back to human cognitive behavior: yes, you can probably measure which of two people is "more rational" than the other, or even "more intelligent". But if someone wants to become more rational, they can't do it by just trying to imitate an exemplary rational person — they have to actually diagnose what kinds of not-rational they are being, and find ways to correct them. There is no royal road to rationality; you have to actually struggle with (or work around) the specific bugs you have.
Replies from: Desrtopa, rocurley
↑ comment by Desrtopa · 2012-05-23T18:04:41.797Z · LW(p) · GW(p)
Once you start to think of mistakes as deterministic rather than random, as caused by "bugs" (incorrect understanding or incorrect procedures) rather than random inaccuracy, a curious thing happens.
You stop thinking of people as "stupid."
I agree with the general thrust of the essay (that broad, fuzzy labels like "bad at" are more useful when reduced to specific bug descriptions), but I'll note that being aware of the specific bugs that cause people to make the mistakes they're making does not stop me from thinking of people as stupid. If a person's bugs are numerous, obtrusive, and difficult to correct, I'm going to end up thinking of them as stupid even if I can describe every bug.
↑ comment by [deleted] · 2012-05-05T04:11:11.865Z · LW(p) · GW(p)
I already upvoted this but want to emphasize that the article is really good.
Replies from: MarkusRamikin
↑ comment by MarkusRamikin · 2012-05-05T07:18:52.931Z · LW(p) · GW(p)
My favorite sentence in it: "Are there no stupid people left?"
↑ comment by jooyous · 2013-01-27T00:26:02.540Z · LW(p) · GW(p)
I've been trying to change my impulse to think "this person is an idiot!" into "this person is a noob," because the term still kinda has that slightly useful predictive meaning that suggests incompetence, but it also contains the idea that they have the potential to get better, rather than being inherently incompetent.
↑ comment by tgb · 2012-05-05T14:57:04.589Z · LW(p) · GW(p)
Great article. One thing:
If the attacker, whenever he pulls a red ball out of the urn, puts it back and keeps pulling until he gets a blue ball, the Bayesian "rational mind" will conclude that the urn is entirely full of blue balls. ... [But it's] not approaching the most important job of teachers, which is to figure out why you're getting things wrong -- what conceptual misunderstanding, or what bad study habit, is behind your problems.
I don't know much about Knewton, but it seems like it could address this - at least in some cases - and possibly better than teachers. Knewton and programs like it can keep track of success rates at the individual problem level, rather than the test or semester level. Such data could be used to identify the 'bugs' the author speaks of. All Knewton needs is knowledge of common 'bugs' and what problems they make students get wrong.
This article also recalls to mind http://lesswrong.com/lw/6ww/when_programs_have_to_work_lessons_from_nasa/, specifically the part where problems are considered to be the fault of the system, not of the people involved and are treated by changing the system, not by criticizing the people.
comment by gRR · 2012-05-01T12:10:20.689Z · LW(p) · GW(p)
Once upon a time, there was a man who was riding in a horse drawn carriage and traveling to go take care of some affairs; and in the carriage there was also a very big suitcase. He told the driver to of the carriage to drive non-stop and the horse ran extremely fast.
Along the road, there was an old man who saw them and asked, “Sir, you seem anxious, where do you need to go?”
The man in the carriage then replied in a loud voice, “I need to go to the state of Chu.” The old man heard this, smiled, and said, “You are going the wrong way. The state of Chu is in the south; how come you are going to the north?”
“That’s alright,” the man in the carriage then said, “Can you not see? My horse runs very fast.”
“Your horse is great, but your path is incorrect.”
“It’s no problem, my carriage is new, it was made just last month.”
“Your carriage is brand new, but this is not the road one takes to get to Chu.”
“Old Uncle, you don’t know,” said the man in the carriage, pointing to the suitcase in the back, “In that suitcase there’s a lot of money. No matter how long the road is, I am not afraid.”
“You have lots of money, but do not forget: the direction in which you are going is wrong. As I see it, you should go back the way you came.”
The man in the carriage heard this and said, irritated, “I have already been traveling for ten days; how can you tell me to go back the way I came?” He then pointed at the carriage driver and said, “Take a look, he is very young, and he drives very well. You needn’t worry. Goodbye!”
And then he told the driver to drive forward, and the horse ran even faster.
--Chinese Tale
Replies from: RomeoStevens, NancyLebovitz, Thomas
↑ comment by RomeoStevens · 2012-05-01T22:42:37.632Z · LW(p) · GW(p)
I always use the metaphor of the fast car to distinguish between intelligence and rationality.
↑ comment by NancyLebovitz · 2012-05-01T21:15:56.505Z · LW(p) · GW(p)
That's a very handy assortment of fallacies. Where did you find it?
Replies from: gRR
↑ comment by gRR · 2012-05-01T22:48:16.830Z · LW(p) · GW(p)
I first saw the story in "School in Carmarthen", which I would absolutely recommend to everyone, except it's in Russian. I thought there should probably be an English translation of the Chinese tale, so I googled it up by keywords.
The tale is apparently the origin story behind a common Chinese idiom that literally translates as “south shaft, north rut”, and which means acting in a way that defeats one's purpose.
↑ comment by Thomas · 2012-05-03T08:59:02.064Z · LW(p) · GW(p)
They had too much time to talk, if one of them was moving that fast. I can't help it; this technicality bothers me.
Replies from: thomblake, Richard_Kennaway
↑ comment by thomblake · 2012-05-03T16:23:48.595Z · LW(p) · GW(p)
It was not said how the old man was travelling, and I doubt the horse was at a literal run. A carriage can go as fast as about 30 miles an hour on a modern road, but even in those conditions you should expect to break your carriage. On ancient roads, depending on condition, the speed limit for going "very fast" in a carriage could easily have been as low as about 10 miles per hour. If the old man was riding on an animal, or walking very fast, then he could have kept up for some time.
We at least know that the carriage wasn't moving at its top speed because at the end of the story the horse sped up.
↑ comment by Richard_Kennaway · 2012-05-03T12:34:36.642Z · LW(p) · GW(p)
The carriage stopped while the two conversed. Or am I misunderstanding your objection?
Replies from: Thomas
↑ comment by Thomas · 2012-05-03T13:39:53.128Z · LW(p) · GW(p)
He told the driver to of the carriage to drive non-stop and the horse ran extremely fast. Along the road, there was an old man who saw them and asked, “Sir, you seem anxious, where do you need to go?”
Non-stop and extremely fast, the story says. Well, something must have been lost in the translation.
Replies from: Richard_Kennaway, thomblake
↑ comment by Richard_Kennaway · 2012-05-03T13:50:08.800Z · LW(p) · GW(p)
Lost somewhere, I suppose. It seems clear to me that the carriage stopped. Just as it would not have carried on literally non-stop for ten days, 24 hours a day. These details are not stated; they do not need to be. And at the end, the man tells the driver to drive on. If this is an imperfection in the story, it is nothing more than a hyperbolic use of "non-stop", as trifling as the extraneous "to" in the passage you quoted, which does not seem to have held you up.
↑ comment by thomblake · 2012-05-03T16:16:12.579Z · LW(p) · GW(p)
Even in conventional English, "Non-stop" doesn't necessarily mean without stopping at all. The express train from New Haven to Grand Central, for example, is called express because it doesn't stop between Connecticut and New York City, though there are several stops in Connecticut and one stop in Harlem.
"Non-stop" in context could just mean that they were not stopping in any towns they passed.
comment by [deleted] · 2012-05-01T13:06:48.634Z · LW(p) · GW(p)
For example, in many ways nonsense is a more effective organizing tool than the truth. Anyone can believe in the truth. To believe in nonsense is an unforgeable demonstration of loyalty. It serves as a political uniform. And if you have a uniform, you have an army.
--Mencius Moldbug, on belief as attire and conspicuous wrongness.
Replies from: Waldheri, NancyLebovitz, Eugine_Nier, chaosmosis, nykos, Multiheaded
↑ comment by Waldheri · 2012-05-03T17:57:46.609Z · LW(p) · GW(p)
This reminds me of the following passage from We Need to Talk About Kevin by Lionel Shriver:
But keeping secrets is a discipline. I never used to think of myself as a good liar, but after having had some practice I had adopted the prevaricator's credo that one doesn't so much fabricate a lie as marry it. A successful lie cannot be brought into this world and capriciously abandoned; like any committed relationship it must be maintained, and with far more devotion than the truth, which carries on being carelessly true without any help. By contrast, my lie needed me as much as I needed it, and so demanded the constancy of wedlock: Till death do us part.
↑ comment by NancyLebovitz · 2012-05-02T04:54:25.610Z · LW(p) · GW(p)
Possible additional factor: The truth is frequently boring-- it helps to add some absurdity just to get people's attention. Once you've got people's attention, proof of loyalty can come into play.
↑ comment by Eugine_Nier · 2012-05-02T04:25:04.355Z · LW(p) · GW(p)
Also relevant.
↑ comment by chaosmosis · 2012-05-01T21:28:40.717Z · LW(p) · GW(p)
This reminds me of Baudrillard, I might come back in a few days with a Baudrillard rationality quote.
↑ comment by nykos · 2012-05-03T12:21:41.571Z · LW(p) · GW(p)
More quotes by Mencius Moldbug:
When they say things like "in cognitive science, Bayesian reasoner is the technically precise codeword that we use to mean rational mind," they really do mean it. Move over, Aristotle!
Of course, in Catholicism, Catholic is the technically precise codeword that they use to mean rational mind. I am not a Catholic or even a Christian, but frankly, I think that if I had to vote for a dictator of the world and the only information I had was whether the candidate was an orthodox Bayesian or an orthodox Catholic, I'd go with the latter.
The only problem is that this little formula is not a complete, drop-in replacement for your brain. If a reservationist is skeptical of anything on God's green earth, it's people who want to replace his (or her) brain with a formula.
To make this more concrete, let's look at how fragile Bayesian inference is in the presence of an attacker who's filtering our event stream. By throwing off P(B), any undetected pattern of correlation can completely foul the whole system. If the attacker, whenever he pulls a red ball out of the urn, puts it back and keeps pulling until he gets a blue ball, the Bayesian "rational mind" will conclude that the urn is entirely full of blue balls. And Bayesian inference certainly does not offer any suggestion that you should look at who's pulling balls out of the urn and see what he has up his sleeves. Once again, the problem is not that Bayesianism is untrue. The problem is that the human brain has a very limited capacity for analytic reasoning to begin with.
They are all from the article A Reservationist Epistemology
Replies from: tgb, Multiheaded
↑ comment by tgb · 2012-05-03T13:00:10.939Z · LW(p) · GW(p)
If the attacker, whenever he pulls a red ball out of the urn, puts it back and keeps pulling until he gets a blue ball, the Bayesian "rational mind" will conclude that the urn is entirely full of blue balls.
Surely the actual Bayesian rational mind's conclusion is that the attacker will (probably) always show a blue ball, nothing to do with the urn at all.
Replies from: Viliam_Bur
↑ comment by Viliam_Bur · 2012-05-04T08:19:12.071Z · LW(p) · GW(p)
Solomonoff prior gives nonzero probability to the attacker deceiving us. But humans are not very good at operating with such probabilities precisely.
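The point can be made concrete with a toy computation (the hypothesis space and priors below are invented for illustration): once "an attacker filters out red balls" is in the hypothesis space with nonzero prior, a long run of blue draws stops pinning down the urn's contents.

```python
# Toy setup: the urn is either all blue or half red / half blue, and the
# draws are either honest or filtered by an attacker who discards reds.
# Uniform prior over the four joint hypotheses.
hypotheses = {
    # (urn, process): P(observe blue on one draw)
    ("all_blue", "honest"):   1.0,
    ("half_red", "honest"):   0.5,
    ("all_blue", "filtered"): 1.0,  # attacker only ever shows blue
    ("half_red", "filtered"): 1.0,
}
prior = {h: 0.25 for h in hypotheses}

n_blue_draws = 20
posterior = {h: prior[h] * hypotheses[h] ** n_blue_draws for h in hypotheses}
z = sum(posterior.values())
posterior = {h: p / z for h, p in posterior.items()}

p_all_blue = sum(p for (urn, _), p in posterior.items() if urn == "all_blue")
print(f"P(urn is all blue | 20 blue draws) = {p_all_blue:.2f}")
# Stays near 2/3 rather than converging to 1: the blue stream mostly
# rules out "half red AND honest", not "half red".
```

The naive Bayesian Moldbug describes is one who assigned the "filtered" hypotheses probability zero to begin with.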
↑ comment by Multiheaded · 2012-05-07T05:39:52.782Z · LW(p) · GW(p)
And Bayesian inference certainly does not offer any suggestion that you should look at who's pulling balls out of the urn and see what he has up his sleeves.
I just facepalmed the hardest I've ever done while reading Unqualified Reservations. That is, not very hard - Mencius is nothing if not a charming and polite author - but still. Maybe he really ought to read at least one Sequence!
Replies from: roystgnr, arundelo, private_messaging
↑ comment by roystgnr · 2012-05-17T04:07:43.269Z · LW(p) · GW(p)
Could we start that reading with the classic Bayes' Theorem example? Suppose 1% of women have breast cancer, 80% of mammograms on a cancerous woman will detect it, 9.6% on an uncancerous woman will be false positives. Suppose woman A gets a mammogram which indicates cancer. What are the odds she has cancer?
p(A|X) = p(X|A)p(A) / (p(X|A)p(A) + p(X|~A)p(~A)) = (0.8)(0.01) / ((0.8)(0.01) + (0.096)(0.99)) ≈ 7.8%. Hooray?
Now suppose women B, C, D, E, F... Z, AA, AB, AC, AD, etc., the entire patient list getting screened today, all test positive for cancer. Is the probability that woman A has cancer still 7.8%? Bayes' rule, with the priors above, still says "yes"! You need more complicated prior probabilities (e.g. what are the odds that the test equipment is malfunctioning?) before your evidence can tell you what's actually likely to be happening. But those more complicated, more accurate priors would have (very slightly) changed our original p(A|X) as well!
It's not that Bayesian updating is wrong. It's just that Bayes' theorem never allows you to have a non-zero posterior probability coming from a zero prior, and to make any practical problem tractable everybody ends up implicitly assuming huge swaths of zero prior probability.
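The arithmetic, and the effect of one richer prior, can be checked directly. In this sketch the 1-in-10,000 broken-machine prior and the 50-patient batch are invented numbers, used only to show the shape of the argument:

```python
p_cancer = 0.01
p_pos_given_cancer = 0.80
p_pos_given_healthy = 0.096

# One positive test, standard Bayes:
p_pos = p_pos_given_cancer * p_cancer + p_pos_given_healthy * (1 - p_cancer)
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos
print(f"P(cancer | positive) = {p_cancer_given_pos:.3f}")  # ~0.078

# Now the whole day's list of 50 patients tests positive. Compare the
# "independent results" model with a tiny prior that the machine is
# stuck and always reads positive (invented prior, for illustration).
n = 50
p_broken = 1e-4
p_all_pos_if_ok = p_pos ** n      # about 0.1^50, astronomically small
p_all_pos_if_broken = 1.0
p_broken_given_data = (p_broken * p_all_pos_if_broken /
                       (p_broken * p_all_pos_if_broken +
                        (1 - p_broken) * p_all_pos_if_ok))
print(f"P(machine broken | 50 positives) = {p_broken_given_data:.6f}")
```

With the broken-machine hypothesis at zero prior, no number of suspicious positives could ever raise its posterior above zero; with any nonzero prior, fifty positives make it a near-certainty.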
Replies from: DanielLC
↑ comment by DanielLC · 2013-05-07T04:15:29.748Z · LW(p) · GW(p)
It's not assuming zero probability. It's assuming independence. Under the original model, it's possible for all the women to get positives, but only 1% to actually have breast cancer. It's just that a better prior would give a much higher probability.
Replies from: roystgnr
↑ comment by roystgnr · 2013-05-07T13:00:21.881Z · LW(p) · GW(p)
Is there any practical difference between "assuming independent results" and "assuming zero probability for all models which do not generate independent results"? If not then I think we've just been exposed to people using different terminology.
Replies from: Richard_Kennaway, DanielLC
↑ comment by Richard_Kennaway · 2013-05-07T16:22:01.105Z · LW(p) · GW(p)
Is there any practical difference between "assuming independent results" and "assuming zero probability for all models which do not generate independent results"?
No.
If not then I think we've just been exposed to people using different terminology.
I think it's more than terminology. And if Mencius can be dismissed as someone who does not really get Bayesian inference, one can surely not say the same of Cosma Shalizi, who has made the same argument somewhere on his blog. (It was a few years ago and I can't easily find a link. It might have been in a technical report or a published paper instead.) Suppose a Bayesian is trying to estimate the mean of a normal distribution from incoming data. He has a prior distribution of the mean, and each new observation updates that prior. But what if the data are not drawn from a normal distribution, but from the sum of two such distributions with well separated peaks? The Bayesian (he says) can never discover that. Instead, his estimate of the position of the single peak that he is committed to will wander up and down between the two real peaks, like the Flying Dutchman cursed never to find a port, while the posterior probability of seeing the data that he has seen plummets (on the log-odds scale) towards minus infinity. But he cannot avoid this: no evidence can let him update towards anything his prior gives zero probability to.
What (he says) can save the Bayesian from this fate? Model-checking. Look at the data and see if they are actually consistent with any model in the class you are trying to fit. If not, think of a better model and fit that.
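Shalizi's scenario is easy to simulate. In this sketch (the peak locations and the particular check are invented for illustration), a single-Gaussian Bayesian fit of two-peaked data lands between the peaks, and even the crudest model check exposes the misfit:

```python
import random
import statistics

random.seed(0)
# Data actually drawn from two well-separated peaks, at -5 and +5.
data = ([random.gauss(-5, 1) for _ in range(500)]
        + [random.gauss(5, 1) for _ in range(500)])

# Bayesian fit of a single Gaussian with known sigma=1 and a flat prior:
# the posterior mean is just the sample mean, which sits between the
# peaks, in a region where almost no data lies.
mu_hat = statistics.fmean(data)
print(f"fitted mean: {mu_hat:.2f}")

# Crude posterior predictive check: N(mu_hat, 1) says ~68% of points
# should fall within 1 of the mean. The observed fraction is near zero,
# flagging that no model in the fitted class can be right.
frac_near = sum(abs(x - mu_hat) < 1 for x in data) / len(data)
print(f"fraction within 1 sigma of fitted mean: {frac_near:.2f}")
```

The update itself never raises an alarm; only the comparison between the fitted model's predictions and the data does, which is the sense in which the check is "non-Bayesian".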
Andrew Gelman says the same; there's a chapter of his book devoted to model checking. And here's a paper by both of them on Bayesian inference and philosophy of science, in which they explicitly describe model-checking as "non-Bayesian checking of Bayesian models". My impression (not being a statistician) is that their view is currently the standard one.
I believe the hard-line Bayesian response to that would be that model checking should itself be a Bayesian process. (I'm distancing myself from this claim, because as a non-statistician, I don't need to have any position on this. I just want to see the position stated here.) The single-peaked prior in Shalizi's story was merely a conditional one: supposing the true distribution to be in that family, the Bayesian estimate does indeed behave in that way. But all we have to do to save the Bayesian from a fate worse than frequentism is to widen the picture. That prior was merely a subset, worked with for computational convenience, but in the true prior, that prior only accounted for some fraction p<1 of the probability mass, the remaining 1-p being assigned to "something else". Then when the data fail to conform to any single Gaussian, the "something else" alternative will eventually overshadow the Gaussian model, and will need to be expanded into more detail.
"But," the soft Bayesians might say, "how do you expand that 'something else' into new models by Bayesian means? You would need a universal prior, a prior whose support includes every possible hypothesis. Where do you get one of those? Solomonoff? Ha! And if what you actually do when your model doesn't fit looks the same as what we do, why pretend it's Bayesian inference?"
I suppose this would be Eliezer's answer to that last question.
I am not persuaded that the harder Bayesians have any more concrete answer. Solomonoff induction is uncomputable and seems to unnaturally favour short hypotheses involving Busy-Beaver-sized numbers. And any computable approximation to it looks to me like brute-forcing an NP-hard problem.
Replies from: Matt_Simpson, gwern, Mass_Driver, khafra, Vaniver
↑ comment by Matt_Simpson · 2013-05-09T16:39:23.986Z · LW(p) · GW(p)
In response to:
I believe the hard-line Bayesian response to that would be that model checking should itself be a Bayesian process.
and
"But," the soft Bayesians might say, "how do you expand that 'something else' into new models by Bayesian means? You would need a universal prior, a prior whose support includes every possible hypothesis. Where do you get one of those? Solomonoff? Ha! And if what you actually do when your model doesn't fit looks the same as what we do, why pretend it's Bayesian inference?"
I think a hard line needs to be drawn between statistics and epistemology. Statistics is merely a method of approximating epistemology - though a very useful one. The best statistical method in a given situation is the one that best approximates correct epistemology. (I'm not saying this is the only use for statistics, but I can't seem to make sense of it otherwise)
Now suppose Bayesian epistemology is correct - i.e. let's say Cox's theorem + Solomonoff prior. The correct answer to any induction problem is to do the true Bayesian update implied by this epistemology, but that's not computable. Statistics gives us some common ways to get around this problem. Here are a couple:
1) Bayesian statistics approach: restrict the class of possible models and put a reasonable prior over that class, then do the Bayesian update. This has exactly the same problem that Mencius and Cosma pointed out.
2) Frequentist statistics approach: restrict the class of possible models and come up with a consistent estimate of which model in that class is correct. This has all the problems that Bayesians constantly criticize frequentists for, but it typically allows for a much wider class of possible models in some sense (crucially, you often don't have to assume distributional forms)
3) Something hybrid: e.g., Bayesian statistics with model checking. Empirical Bayes (where the prior is estimated from the data). Etc.
Now superficially, 1) looks the most like the true Bayesian update - you don't look at the data twice, and you're actually performing a Bayesian update. But you don't get points for looking like the true Bayesian update, you get points for giving the same answer as the true Bayesian update. If you do 1), there's always some chance that the class of models you've chosen is too restrictive for some reason. Theoretically you could continue to do 1) by just expanding the class of possible models and putting a prior over that class, but at some point that becomes computationally infeasible. Model checking is a computationally feasible way of approximating this process. And, a priori, I see no reason to think that some frequentist method won't give the best computationally feasible approximation in some situation.
So, basically, a "hardline Bayesian" should do model checking and sometimes even frequentist statistics. (Similarly, a "hardline frequentist" in the epistemological sense should sometimes do Bayesian statistics. And, in fact, they do this all the time in econometrics.)
↑ comment by gwern · 2013-05-07T17:38:51.034Z · LW(p) · GW(p)
And any computable approximation to it looks to me like brute-forcing an NP-hard problem.
I find this a curious thing to say. Isn't this an argument against every possible remotely optimal computable form of induction or decision-making? Of course a good computable approximation may wind up spending lots of resources solving a problem if that problem is important enough; this is not a black mark against it. Problems in the real world can be hard, so dealing with them may not be easy!
"Omega flies up to you and hands you a box containing the Secrets of Immortality; the box is opened by the solution to an NP problem inscribed on it." Is the optimal solution really to not even try the problem - because then you're trying "brute-forcing an NP-hard problem"! - even if it turns out to be one of the majority of easily-solved problems? "You start a business and discover one of your problems is NP-hard. You immediately declare bankruptcy because your optimal induction optimally infers that the problem cannot be solved and this most optimally limits your losses."
And why NP-hard, exactly? You know there are a ton of harder complexity classes in the complexity zoo, right?
The right answer is simply to point out that the worst case of the optimal algorithm is going to be the worst case of all possible problems presented, and this is exactly what we would expect since there is no magic fairy dust which will collapse all problems to constant-time solutions.
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2013-05-07T18:20:48.901Z · LW(p) · GW(p)
I find this a curious thing to say. Isn't this an argument against every possible remotely optimal computable form of induction or decision-making?
There might well be a theorem formalising that statement. There might also be one formalising the statement that every remotely optimal form of induction or decision-making is uncomputable. If that's the way it is, well, that's the way it is.
"Omega flies up to you
This is an argument of the form "Suppose X were true -- then X would be true! So couldn't X be true?"
"You start a business and discover one of your problems is NP-hard. You immediately declare bankruptcy because your optimal induction optimally infers that the problem cannot be solved and this most optimally limits your losses."
You try to find a method that solves enough examples of the NP-hard problem well enough to sell the solutions, such that your more bounded ambition puts you back in the realm of P. This is done all the time -- freight scheduling software, for example. Or airline ticket price searching. Part of designing optimising compilers is not attempting analyses that take insanely long.
And why NP-hard, exactly? You know there are a ton of harder complexity classes in the complexity zoo, right?
Harder classes are subsets of NP-hard, and everything in NP-hard is hard enough to make the point. Of course, there is the whole uncomputability zoo above all that, but computing the uncomputable is even more of a wild goose chase. "Omega flies up to you and hands you a box containing the Secrets of Immortality; for every digit of Chaitin's Omega you correctly type in, you get an extra year, and it stops working after the first wrong answer".
Replies from: gwern
↑ comment by gwern · 2013-05-07T18:32:46.710Z · LW(p) · GW(p)
This is an argument of the form "Suppose X were true -- then X would be true! So couldn't X be true?"
No, this is pointing out that if you provide an optimal outcome barricaded by a particular obstacle, then that optimal outcome will trivially be at least as hard as that obstacle.
You try to find a method that solves enough examples of the NP-hard problem well enough to sell the solutions, such that your more bounded ambition puts you back in the realm of P. This is done all the time -- freight scheduling software, for example. Or airline ticket price searching. Part of designing optimising compilers is not attempting analyses that take insanely long.
This is exactly the point made for computable approximations to AIXI. Thank you for agreeing.
Harder classes are subsets of NP-hard, and everything in NP-hard is hard enough to make the point.
Are you sure you want to make that claim? That all harder classes are subsets of NP-hard?
but computing the uncomputable is even more of a wild goose chase. "Omega flies up to you and hands you a box containing the Secrets of Immortality; for every digit of Chaitin's Omega you correctly type in, you get an extra year, and it stops working after the first wrong answer".
Fantastic! I claim my extra 43 years of life.
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2013-05-08T07:55:22.267Z · LW(p) · GW(p)
Harder classes are subsets of NP-hard, and everything in NP-hard is hard enough to make the point.
Are you sure you want to make that claim? That all harder classes are subsets of NP-hard?
No, carelessness on my part. Doesn't affect my original point, that schemes for approximating Solomonoff or AIXI look like at least exponential brute force search.
You try to find a method that solves enough examples of the NP-hard problem well enough to sell the solutions, such that your more bounded ambition puts you back in the realm of P. This is done all the time -- freight scheduling software, for example. Or airline ticket price searching. Part of designing optimising compilers is not attempting analyses that take insanely long.
This is exactly the point made for computable approximations to AIXI.
Since AIXI is, by construction, the best possible intelligent agent, all work on AGI can, in a rather useless sense, be described as an approximation to AIXI. To the extent that such an attempt works (i.e. gets substantially further than past attempts at AGI), it will be because of new ideas not discovered by brute force search, not because it approximates AIXI.
Fantastic! I claim my extra 43 years of life.
43 years is a poor sort of immortality.
Replies from: gwern
↑ comment by gwern · 2013-05-08T19:48:13.112Z · LW(p) · GW(p)
schemes for approximating Solomonoff or AIXI look like at least exponential brute force search.
Well, yeah. Again - why would you expect anything else? Given that there exist problems which require that or worse for solution? How can a universal problem solver do any better?
Since AIXI is, by construction, the best possible intelligent agent, all work on AGI can, in a rather useless sense, be described as an approximation to AIXI.
Yes.
To the extent that such an attempt works (i.e. gets substantially further than past attempts at AGI), it will be because of new ideas not discovered by brute force search, not because it approximates AIXI.
No. Given how strangely and differently AIXI works, it can easily stimulate new ideas.
43 years is a poor sort of immortality.
It's more than I had before.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2013-05-09T07:44:03.503Z · LW(p) · GW(p)
No. Given how strangely and differently AIXI works, it can easily stimulate new ideas.
The spin-off argument. Here's a huge compendium of spinoffs of previous approaches to AGI. All very useful, but not AGI. I'm not expecting better from AIXI.
Replies from: gwern↑ comment by gwern · 2013-05-09T15:45:25.629Z · LW(p) · GW(p)
Hm, so let's see; you started off mocking the impossibility and infeasibility of AIXI and any computable version:
I am not persuaded that the harder Bayesians have any more concrete answer. Solomonoff induction is uncomputable and seems to unnaturally favour short hypotheses involving Busy-Beaver-sized numbers. And any computable approximation to it looks to me like brute-forcing an NP-hard problem.
Then you admitted that actually every working solution can be seen as a form of SI/AIXI:
There might well be a theorem formalising that statement. There might also be one formalising the statement that every remotely optimal form of induction or decision-making is uncomputable. If that's the way it is, well, that's the way it is... Since AIXI is, by construction, the best possible intelligent agent, all work on AGI can, in a rather useless sense, be described as an approximation to AIXI
And now you're down to arguing that it'll be "very useful, but not AGI".
Well, I guess I can settle for that.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2013-05-10T08:33:24.207Z · LW(p) · GW(p)
I stand by the first quote. Every working solution can, in a useless sense, be seen as a form of SI/AIXI: the sense in which a hot-air balloon can be seen as an approach to landing on the Moon.
And now you're down to arguing that it'll be "very useful, but not AGI".
At the very most. Whether AIXI-like algorithms get into the next edition of Russell and Norvig, having proved of practical value, well, history will decide that, and I'm not interested in predicting it. I will predict that it won't prove to be a viable approach to AGI.
Replies from: gwern↑ comment by Mass_Driver · 2013-05-10T09:15:53.252Z · LW(p) · GW(p)
Isn't there a very wide middle ground between (1) assigning 100% of your mental probability to a single model, like a normal curve and (2) assigning your mental probability proportionately across every conceivable model ala Solomonoff?
I mean the whole approach here sounds more philosophical than practical. If you have any kind of constraint on your computing power, and you are trying to identify a model that most fully and simply explains a set of observed data, then it seems like the obvious way to use your computing power is to put about a quarter of your computing cycles on testing your preferred model, another quarter on testing mild variations on that model, another quarter on all different common distribution curves out of the back of your freshman statistics textbook, and the final quarter on brute-force fitting the data as best you can given that your priors about what kind of model to use for this data seem to be inaccurate.
I can't imagine any human being who is smart enough to run a statistical modeling exercise yet foolish enough to cycle between two peaks forever without ever questioning the assumption of a single peak, nor any human being foolish enough to test every imaginable hypothesis, even including hypotheses that are infinitely more complicated than the data they seek to explain. Why would we program computers (or design algorithms) to be stupider than we are? If you actually want to solve a problem, you try to get the computer to at least model your best cognitive features, if not improve on them. Am I missing something here?
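Mass_Driver's allocation scheme amounts to Bayesian model averaging over a small, fixed pool of candidates, the middle ground between one model and all of them. A toy sketch; the models, prior, and data are all invented for illustration:

```python
import math

def normal_logpdf(x, mu, sigma):
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

# A "preferred model", a mild variation, and a textbook alternative.
models = {
    "N(0,1)":   lambda x: normal_logpdf(x, 0.0, 1.0),
    "N(0.5,1)": lambda x: normal_logpdf(x, 0.5, 1.0),
    "N(0,2)":   lambda x: normal_logpdf(x, 0.0, 2.0),
}
log_post = {name: math.log(1.0 / len(models)) for name in models}  # uniform prior

for x in [0.1, -0.3, 0.4, 2.1, 0.0]:        # observed data (invented)
    for name, loglik in models.items():
        log_post[name] += loglik(x)

# Normalize in log space to avoid underflow.
z = max(log_post.values())
total = sum(math.exp(v - z) for v in log_post.values())
posterior = {name: math.exp(v - z) / total for name, v in log_post.items()}
```

No model ever gets probability exactly zero, so a surprising run of data shifts mass between candidates instead of trapping the reasoner at one peak forever.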
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2013-05-10T10:41:08.103Z · LW(p) · GW(p)
Isn't there a very wide middle ground between (1) assigning 100% of your mental probability to a single model, like a normal curve and (2) assigning your mental probability proportionately across every conceivable model ala Solomonoff?
Yes, the question is what that middle ground looks like -- how you actually come up with new models. Gelman and Shalizi say it's a non-Bayesian process depending on human judgement. The behaviour that you rightly say is absurd, of the Bayesian Flying Dutchman, is indeed Shalizi's reductio ad absurdum of universal Bayesianism. I'm not sure what gwern has just been arguing, but it looks like doing whatever gets results through the week while going to the church of Solomonoff on Sundays.
An algorithmic method of finding new hypotheses that works better than people is equivalent to AGI, so this is not an issue I expect to see solved any time soon.
Replies from: Vaniver↑ comment by Vaniver · 2013-05-10T14:06:45.983Z · LW(p) · GW(p)
An algorithmic method of finding new hypotheses that works better than people is equivalent to AGI, so this is not an issue I expect to see solved any time soon.
Eh. What seems AGI-ish to me is making models interact fruitfully across domains; algorithmic models to find new hypotheses for a particular set of data are not that tough and already exist (and are 'better than people' in the sense that they require far less computational effort and are far more precise at distinguishing between models).
Replies from: Richard_Kennaway, Juno_Watt↑ comment by Richard_Kennaway · 2013-05-10T14:39:18.226Z · LW(p) · GW(p)
What seems AGI-ish to me is making models interact fruitfully across domains
Yes, I had in mind a universal algorithmic method, rather than a niche application.
Replies from: Vaniver↑ comment by Vaniver · 2013-05-10T18:28:29.661Z · LW(p) · GW(p)
The hypothesis-discovery methods are universal; you just need to feed them data. My view is that the hard part is picking what data to feed them, and what to do with the models they discover.
Edit: I should specify, the models discovered grow in complexity based on the data provided, and so it's very difficult to go meta (i.e. run hypothesis discovery on the hypotheses you've discovered), because the amount of data you need grows very rapidly.
↑ comment by Juno_Watt · 2013-05-10T14:12:28.756Z · LW(p) · GW(p)
Hmmm. Are we going to see a Nobel awarded to an AI any time soon?
Replies from: Vaniver↑ comment by Vaniver · 2013-05-10T18:27:07.105Z · LW(p) · GW(p)
I don't think any robot scientists would be eligible for Nobel prizes; Nobel's will specifies persons. We've had robot scientists for almost a decade now, but they tend to excel in routine and easily automatized areas. I don't think they will make Nobel-level contributions anytime soon, and by the time they do, the intelligence explosion will be underway.
↑ comment by khafra · 2013-05-10T14:58:13.887Z · LW(p) · GW(p)
I am not persuaded that the harder Bayesians have any more concrete answer. Solomonoff induction is uncomputable and seems to unnaturally favour short hypotheses involving Busy-Beaver-sized numbers. And any computable approximation to it looks to me like brute-forcing an NP-hard problem.
What if, as a computational approximation of the universal prior, we use genetic algorithms to generate a collection of agents, each using different heuristics to generate hypotheses? I mean, there's probably better approximations than that; but we have strong evidence that this one works and is computable.
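A degenerate version of this proposal fits in a few lines: a genetic algorithm evolving bit-string "agents" against a fixed fitness function. Everything here (target, rates, population size) is an invented toy; evolving actual hypothesis-generating heuristics is the same loop with a vastly harder fitness evaluation:

```python
import random

random.seed(0)
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # stand-in for "good heuristics"

def fitness(agent):
    # Score an agent by how many positions match the target.
    return sum(a == t for a, t in zip(agent, TARGET))

def evolve(pop_size=20, generations=50, mutation_rate=0.05):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]              # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(TARGET))   # one-point crossover
            child = [bit ^ (random.random() < mutation_rate)  # point mutation
                     for bit in a[:cut] + b[cut:]]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

Because the top half of each generation survives intact, the best fitness never decreases; the open question is whether anything like this scales past toy search spaces.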
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2013-05-10T15:30:21.706Z · LW(p) · GW(p)
What if, as a computational approximation of the universal prior, we use genetic algorithms to generate a collection of agents, each using different heuristics to generate hypotheses?
Whatever approach to AGI anyone has, let them go ahead and try it, and see if it works. Ok, that would be rash advice if I thought it would work (because of UFAI), but if it has any chance of working, the only way to find out is to try it.
Replies from: khafra↑ comment by khafra · 2013-05-10T16:16:40.374Z · LW(p) · GW(p)
I'm not saying I'm willing to code that up; I'm just saying that a genetic algorithm (such as Evolution) creating agents which use heuristics to generate hypotheses (such as humans) can work at least as well as anything we've got so far.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2013-05-10T16:25:19.425Z · LW(p) · GW(p)
I'm just saying that a genetic algorithm (such as Evolution) creating agents which use heuristics to generate hypotheses (such as humans) can work at least as well as anything we've got so far.
If you have a few billion years to wait.
Replies from: Kawoomba↑ comment by Vaniver · 2013-05-09T16:49:35.372Z · LW(p) · GW(p)
I am not persuaded that the harder Bayesians have any more concrete answer. Solomonoff induction is uncomputable and seems to unnaturally favour short hypotheses involving Busy-Beaver-sized numbers. And any computable approximation to it looks to me like brute-forcing an NP-hard problem.
Eh. I like the approach of "begin with a simple system hypothesis, and when your residuals aren't distributed the way you want them to be, construct a more complicated hypothesis based on where the simple hypothesis failed." It's tractable (this is the elevator-talk version of one of the techniques my lab uses for modeling manufacturing systems), and seems like a decent approximation of Solomonoff induction on the space of system models.
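A minimal sketch of that loop, with synthetic data and an arbitrary correlation threshold standing in for a proper residual test:

```python
import random
import statistics

random.seed(1)
xs = [i / 10 for i in range(100)]
ys = [1.0 + 0.7 * x + random.gauss(0, 0.1) for x in xs]   # synthetic data

def correlation(u, v):
    mu, mv = statistics.mean(u), statistics.mean(v)
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return cov / (su * sv)

# Hypothesis 0: a constant. The residuals still carry the trend,
# i.e. they are not distributed the way we want them to be.
resid0 = [y - statistics.mean(ys) for y in ys]
assert abs(correlation(xs, resid0)) > 0.5    # structure left over: refine

# Hypothesis 1: a least-squares line, a more complicated hypothesis
# built exactly where the simple one failed.
slope = correlation(xs, ys) * statistics.stdev(ys) / statistics.stdev(xs)
intercept = statistics.mean(ys) - slope * statistics.mean(xs)
resid1 = [y - (intercept + slope * x) for x, y in zip(xs, ys)]
assert abs(correlation(xs, resid1)) < 0.1    # noise-like residuals: stop here
```

Each escalation step is cheap, and the data, not the prior over all programs, decide when to stop complicating the model.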
↑ comment by DanielLC · 2013-05-07T18:49:23.129Z · LW(p) · GW(p)
It's basically different terminology. His point is valid.
A model isn't something you assign probability to. It's something you use to come up with a set of prior probabilities. The model he used assumed independence. It didn't actually assign zero probability to any result. It doesn't assign a probability, zero or otherwise, to the machine being broken, because that's not something that's considered. It also doesn't assign a probability to whether or not it's raining.
↑ comment by private_messaging · 2014-02-03T16:08:03.133Z · LW(p) · GW(p)
I think his point is that you are still entirely unable to even enumerate, let alone process, all the relevant hypotheses, nor does the formula inform you of those, nor does it inform you how to deal with cyclic updates (or even that those are a complicated case), etc.
It's particularly bad when it comes to what rationalists describe as "expected utility calculations". The ideal expected utility is a sum of the differential effect of the actions being compared, over all hypotheses, multiplied with their probabilities... a single component of the sum provides very little or no information about the value of the sum, especially when picked by someone with a financial interest as strong as "if i don't convince those people I can't pay my rent". Then, the actions themselves have an impact on the future decision making, which makes the expected value sum grow and branch out like some crazy googol-headed fractal hydra. Mostly when someone's talking much about Bayes they have some simple and invalid expected value calculation that they want you to perform and act upon, so that you'll be worse off in the end and they'll be better off in the end.
↑ comment by Multiheaded · 2012-05-02T05:21:35.886Z · LW(p) · GW(p)
And yet he wants a pragmatically motivated society.
Replies from: None↑ comment by [deleted] · 2012-05-02T06:34:09.630Z · LW(p) · GW(p)
A man can dream, can't he? Note he isn't advocating nonsense as an organizing tool; much of his wackier thought is precisely around trying to make an organizing tool work as well as nonsense does. Unfortunately I don't think he has succeeded, since in my opinion neocameralism is unlikely to be implemented and likely to blow up if someone did implement it.
Replies from: Multiheaded, nykos↑ comment by Multiheaded · 2012-05-02T07:52:01.320Z · LW(p) · GW(p)
I agree, except that some of my own wacky thought (well, it's hardly original, of course) basically says that nonsense isn't a "bad" at all - not for anyone whom we might reasonably call human. For example, as has been pointed out here, people have in-built hypocritical mechanisms to cope with various kinds of "faith", but if you truly consider that you're doing something "rational" and commonsensically correct, you're left driving at an enormous speed without brakes, and the likely damage might be great enough that no-one should ever aspire to "rational" thinking.
Also:
On a wall in South London some Communist or Blackshirt had chalked “Cheese, not Churchill”. What a silly slogan. It sums up the psychological ignorance of these people who even now have not grasped that whereas some people would die for Churchill, nobody will die for cheese.
- Orwell's diary, 20th March, 1941
↑ comment by nykos · 2012-05-03T11:48:47.349Z · LW(p) · GW(p)
Even though his prescription may be lacking (here is some criticism of neocameralism: http://unruled.blogspot.com/2008/06/about-fnargocracy.html ), his description and diagnosis of everything wrong with the world is largely correct. Any possible political solution must begin from Moldbug's diagnosis of all the bad things that come with having Universalism as the most dominant ideology/religion the world has ever experienced.
One example of a bad consequence of Universalism is the delay of the Singularity. If you, for example, want to find out why Jews are more intelligent on average than Blacks, the system will NOT support your work and will even ostracize you for being racist, even though that knowledge might one day prove invaluable to understanding intelligence and building an intelligent machine (and also helping the people who are less fortunate at the genetic lottery). The followers of a religion that holds the Equality of Man as primary tenet will be suppressing any scientific inquiry into what makes us different from one another. Universalism is the reason why common-sense proposals like those of Greg Cochran ( http://westhunt.wordpress.com/2012/03/09/get-smart/ ) will never be official policy. While we don't have the knowledge to create machines of higher intelligence than us, we do know how to create a smarter next generation of human beings. Scientific progress, economic growth and civilization in general are proportional to the number of intelligent people and inversely proportional to the number of not-so-smart people. We need more smart people (at least until we can build smarter machines), so that we all may benefit from the products of their minds.
Replies from: Normal_Anomaly, SusanBrennan, ArisKatsaris, NancyLebovitz, NancyLebovitz, None, Multiheaded, Multiheaded↑ comment by Normal_Anomaly · 2012-05-05T20:57:02.689Z · LW(p) · GW(p)
Universalism is the reason why common-sense proposals like those of Greg Cochran will never be official policy.
From the Greg Cochran link:
A government with consistent and lasting policies could select for intelligence and achieve striking results in a few centuries, maybe less. But no state ever has, and no existing government seems interested.
It's worth pointing out that at least part of the opposition to government-run eugenics programs is rational distrust that the government will not corrupt the process. If a country started a program of tax breaks for high-IQ people having children, and perhaps higher taxes for low-IQ people having children, a corrupt government would twist its policies and official IQ-testing agency to actually reward, for example, reproduction among people who vote for [whatever the majority party currently is]. It's a similar rationale to the one against literacy tests for voting: sure, maybe illiterate people can't be informed voters, but trusting the government to decide who's too illiterate to vote leads to perverse incentives.
Replies from: Viliam_Bur, DanArmak, Multiheaded↑ comment by Viliam_Bur · 2012-05-08T09:25:05.033Z · LW(p) · GW(p)
a corrupt government would twist its policies and official IQ-testing agency to actually reward, for example, reproduction among people who vote for [whatever the majority party currently is].
Absolutely. It would start with: "Everyone (accepted as an expert by our party) agrees that the classical IQ tests developed by psychometric methods are too simple and they don't cover the whole spectrum of human intelligence. Luckily, here is a new improved test developed by our best experts that includes the less mathematical aspects of intelligence, such as having a correct attitude towards insert political topic. Recognizing the superiority of this test to the classical tests already gives you five points!"
↑ comment by DanArmak · 2012-05-06T00:34:08.907Z · LW(p) · GW(p)
Also, governments are notoriously bad at making broad and costly social policies that will only give a return on investment "in a few centuries or less". We're not talking just beyond the next elections, the party, the politicians, even the whole state may not even exist by then.
↑ comment by Multiheaded · 2012-05-07T04:46:03.521Z · LW(p) · GW(p)
a corrupt government would twist its policies
"Will".
↑ comment by SusanBrennan · 2012-05-03T12:58:19.894Z · LW(p) · GW(p)
Scientific progress, economic growth and civilization in general are proportional to the number of intelligent people and inversely proportional to the number of not-so-smart people.
That seems a little bit simplistic. How many problems have been caused by smart people attempting to implement plans which seem theoretically sound, but fail catastrophically in practice? The not-so-smart people are not inclined to come up with such plans in the first place. In my view, the people inclined to cause the greatest problems are the smart ones who are certain that they are right, particularly when they have the ability to convince other smart people that they are right, even when the empirical evidence does not seem to support their claims.
While people may not agree with me on this, I find the theory of "rational addiction" within contemporary economics to carry many of the hallmarks of this way of thinking. It is mathematically justified using impressively complex models and selective post-hoc definitions of terms and makes a number of empirically unfalsifiable claims. You would have to be fairly intelligent to be persuaded by the mathematical models in the first place, but that doesn't make it right.
basically, my point is: it is better to have to deal with not-so-smart irrational people than it is to deal with intelligent and persuasive people who are not very rational. The problems caused by the former are lesser in scale.
Replies from: Viliam_Bur, Multiheaded↑ comment by Viliam_Bur · 2012-05-04T08:52:14.738Z · LW(p) · GW(p)
The theory of "rational addiction" seems like an example that for any (consistent) behavior you can find such utility function that this behavior maximizes it. But it does not mean that this is really a human utility function.
it is better to have to deal with not-so-smart irrational people than it is to deal with intelligent and persuasive people who are not very rational
For an intelligent and persuasive person it may be a rational (as in: maximizing their utility, such as status or money) choice to produce fashionable nonsense.
Replies from: SusanBrennan↑ comment by SusanBrennan · 2012-05-04T09:50:47.355Z · LW(p) · GW(p)
For an intelligent and persuasive person it may be a rational (as in: maximizing their utility, such as status or money) choice to produce fashionable nonsense.
True. I guess it's just that the consequences of such actions can often lead to a large amount of negative utility according to my own utility function, which I like to think of as more universalist than egoist. But people who are selfish, rational and intelligent can, of course, cause severe problems (according to the utility functions of others at least). This, I gather, is fairly well understood. That's probably why those characteristics describe the greater proportion of Hollywood villains.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2012-05-04T12:32:24.016Z · LW(p) · GW(p)
Hollywood villains are gifted people who pathologically neglect their self-deception. With enough self-deception, everyone can be a hero of their own story. I would guess most authors of fashionable nonsense kind of believe what they say. This is why opposing them would be too complicated for a Hollywood script.
↑ comment by Multiheaded · 2012-05-07T04:50:21.198Z · LW(p) · GW(p)
In my view, the people inclined to cause the greatest problems are the smart ones who are certain that they are right, particularly when they have the ability to convince other smart people that they are right, even when the empirical evidence does not seem to support their claims.
basically, my point is: it is better to have to deal with not-so-smart irrational people than it is to deal with intelligent and persuasive people who are not very rational
Yes! I'm glad that someone is with me on this.
↑ comment by ArisKatsaris · 2012-05-05T11:25:42.516Z · LW(p) · GW(p)
And yet, it's the "Universalist" system that allows Jews to not get exterminated. I think the cognitive and epistemological flaws of "Universalism" kinda make some people ignore the fact that it's the system that also allows the physical existence of heretics more than any other system in existence ever yet has.
Was (non-Universalist) Nazi Germany more open to accepting Jew-produced science than the "Universalist" West was? Or is the current non-Universalist Arab world more open to such? Were the previous feudal systems better at accepting atheists or Jewish people? Which non-universalist (and non-Jewish) system was actually better than "Universalism" at recognizing Jewish contributions or intelligence, that you would choose to criticize Universalism for being otherwise? Or better at not killing heretics?
Let's keep it simple -- which non-Universalist nation has ever been willing to allow as much relative influence to Jewish people as Universalist systems have?
As for Moldbug's diagnosis, I'm unimpressed with his predictive abilities: he predicted Syria would be safe from revolt, right, because it was cozying up to Iran rather than to America? He has an interesting model of the world but, much like Marxism, I'm not sure Moldbuggery has much predictive capacity.
Replies from: None, Multiheaded, None, Eugine_Nier↑ comment by [deleted] · 2012-05-10T13:45:38.208Z · LW(p) · GW(p)
"Universalism" kinda makes some people ignore the fact that it's the system that also allows the physical existence of heretics more than any other system in existence ever yet has.
I agree. In my mind this is its great redeeming feature and the main reason I think I still endorse universalism despite entertaining much of the criticism of it. At the end of the day I still want to live in a Western Social Democracy, just maybe one that has a libertarian (and I know this may sound odd coming from me) real multicultural bent with regards to some issues.
And yet, it's the "Universalist" system that allows Jews to not get exterminated.
The same is true of the Roman and Byzantine empires. The Caliphate too. Also true of Communist regimes. Many absolute monarchies, now that I think about it. Also I'm pretty sure the traditional Indian caste system could keep Jews safe as well.
If Amy Chua is right, democracy (a holy word of universalism) may in the long run put market-dominant minorities like the Jews more at risk than some alternatives. Introducing democracy and other universalist memes in the Middle East has likely doomed the Christian minorities there, for example.
Let's keep it simple -- which non-Universalist nation has ever been willing to allow as much relative influence to Jewish people as Universalist systems have?
I'm not quite sure why particularly the Jewish people matter so very much to you in this example. I'm sure you aren't searching for the trivial answer (which would be "in any ancient and medieval Jewish state or nation").
If you are using Jews here as an emblem of invoking the horrors of Nazism, can't we at least throw a bone to Gypsy and Polish victims? And since we did that can we now judge Communism by the same standard? Moldbug would say that Communism is just a country getting sick with a particularly bad case of universalism.
The thing is, Universalism as it exists now doesn't seem to be stable. The reason one sees all this clever (and I mean clever in the bad, overly complicating, overly contrarian sense of the word) arguing against "universalism" online in the late 2000s is that the comfortable, heretic-tolerating universalism of the second half of the 20th century seems to be slowly changing into something else. They have nowhere else to go but online. The economic benefits and comforts for most of its citizens are being dismantled; the space of acceptable opinion seems to be shrinking. As technology that enables the surveillance of citizens and the enforcement of social norms by peers advances, there doesn't seem to be any force really counteracting it. If you transgress, if you are a heretic in the 21st century, you will remain one for your entire life, as your name is one Google search away from your sin. As mobs organized via social media or apps become more and more a reality, a political reality, how long will such people remain physically safe? How do you explain to the people beating you that you recanted your heresy years ago? Recall how pogroms were usually the affair of angry low-class peasants. You don't need the Stasi to eliminate people. The mob can work as well. You don't need a concentration camp when you have the machete. And while modern tech makes the state more powerful, since surveillance is easier, it also makes the mob more powerful. Remaining under the not just legal but de facto protection of the state becomes more and more vital. The room for dissent thus shrinks even if stated ideals and norms remain as they were before.
And I don't think they will remain such. While most people carrying universalist memes are wildly optimistic about its "information wants to be free", liberty-enhancing aspect, the fact remains that this new technology seems to have also massively increased the viability and reach of Anarcho-Tyranny.
The personal psychological costs of living up to universalist ideals and internalizing them seem to be rising as well. To illustrate what I mean by this, consider the practical sexual ethics of, say, Elizabethan England and Victorian England. On the surface and in their stated norms they don't differ much, yet the latter arguably uses up far more resources and places a greater cognitive burden of socialization on its members to enforce them.
Now consider the various universalist standards of personal behaviour that are normative in 2012 and in 1972. They aren't that different in stated ideals, but the practical costs have arguably risen.
Replies from: ArisKatsaris, Multiheaded↑ comment by ArisKatsaris · 2012-05-10T14:36:24.833Z · LW(p) · GW(p)
I'm not quite sure why particularly the Jewish people matter so very much to you in this example.
nykos was the one who used the example of Jewish superior intelligence not getting acknowledged as such by Universalism. My point was that there have been hardly any non-Universalist systems that could even tolerate Jewish equal participation, let alone acknowledge Ashkenazi superiority.
Replies from: None↑ comment by Multiheaded · 2012-05-10T14:01:43.828Z · LW(p) · GW(p)
The economic benefits and comforts for most of its citizens are being dismantled, the space of acceptable opinion seems to be shrinking.
I see no proof of that. What economic benefits and comforts? Sure, real wages in Western countries have stopped growing around the 1970s, but e.g. where welfare programs are being cut following the current crisis, it's certainly not the liberals but economically conservative governments championing the cuts.
Now consider the various universalist standards of personal behaviour that are normative in 2012 and in 1972. They aren't that different in stated ideals, but the practical costs have arguably risen.
I don't understand. Do you mean prestigious norms like "never avoid poor neighbourhoods for your personal safety, because it's supposedly un-egalitarian", or what? What other norms like that exist that are harmful in daily life?
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-05-11T04:05:00.068Z · LW(p) · GW(p)
but e.g. where welfare programs are being cut following the current crisis, it's certainly not the liberals but economically conservative governments championing the cuts.
What's happening is, to paraphrase Thatcher, that governments are running out of other people's money. Yes, conservative parties are more willing to acknowledge this fact, but liberal parties don't have any viable alternatives, and it was their economic policies that led to this state of affairs.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-05-11T08:46:57.307Z · LW(p) · GW(p)
Hmm? And in places where fiscally conservative parties were at the helm before the crisis? What about them?
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-05-12T06:43:12.561Z · LW(p) · GW(p)
The places that are being hardest hit have been ruled by left wing parties for most of the time since at least the 1970s. Also in these places the right wing parties aren't all that right wing.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-05-12T07:47:44.355Z · LW(p) · GW(p)
The places that are being hardest hit have been ruled by left wing parties for most of the time since at least the 1970s.
Are the Scandinavian nations among the ones hit hardest? Or, say, Poland?
↑ comment by Multiheaded · 2012-05-07T05:08:21.762Z · LW(p) · GW(p)
Let's keep it simple -- which non-Universalist nation has ever been willing to allow as much relative influence to Jewish people as Universalist systems have?
You've got to make it more general, that's where it gets interesting! Speaking frankly, from the selfish viewpoint of a typical Western person, the Universalist system has been better than any other system at everything for more than a century, especially at the quality and complexity of life for the average citizen. Of course, Moldbug's adherents would argue that there's no dependency between these two unique, never-before-seen facts of civilization - universalist ideology and an explosive growth in human development for the bottom 90% of society. They'd say that both are symptoms of more rapid and thoroughly supported technological progress than elsewhere.
Let's concede that (although there are reasons to challenge it - see e.g. Weber's The Protestant Ethic and the Spirit of Capitalism, an early argument that religion morphing into a secular quasi-theocracy is what gave the West its edge). Okay, so if both things are the results of our civilization's unique historical path... then, from an utilitarian POV, the cost of universalism is still easily worth paying! We know of no society that advanced to an industrial and then post-industrial state without universailsm, so it would be in practice impossible to alter any feature of technical and social change to exclude the dominance of universalist ideology but keep the pace of "good" progress. Then, even assuming that universalist ideology is single-handedly responsible for the entirety of the 20th century's wars and mass murder (and other evils), it is still preferable to the accumulated daily misery of the traditional pre-industrial civilization - especially so for everyone who voted "Torture" on "Torture vs Specks"! (I didn't, but here I feel differently, because it's "Horrible torture and murder" vs "Lots and lots of average torture".)
Replies from: None↑ comment by [deleted] · 2012-05-10T12:53:06.540Z · LW(p) · GW(p)
Moldbug isn't arguing that we should get rid of some technology and its comforts in order to also get rid of universalism, and he certainly does recognize both as major aspects of modernity. No, he is saying that technological progress now enables us precisely to get rid of the parasitic aspect of modernity, "universalism". One can make a case that, since it inflames some biases, universalism is slowing down technological progress and the benefits it brings. Peter Thiel is arguably concerned with precisely this influence when he talks of a technological slowdown. Universalism not only carries opportunity costs, it has historically often broken out in Luddite strains. Consider for example something like the FDA. Recall what criticisms of that institution are often heard on LW; aren't these same criticisms, when consistently applied, basically hostile to the Cathedral?
Whether MM is right or wrong, what you present seems like a bit of a false dilemma. You certainly are right that we haven't seen societies advance to a post-industrial or industrial state without at least some influence of universalism, but it is hard to deny that we do observe varying degrees of such penetration. Moldbug's idea is that even if we can't use technology to get rid of the memeplex in question, by social manoeuvring we can still perhaps find a better trade-off by not taking "universalism" so seriously. The vast majority of people, the 90% you invoke, may be significantly better off in a world where every city is Singapore than a world where every city is London.
It is no mystery which of these two is more in line with universalist ideals.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-05-10T13:55:09.535Z · LW(p) · GW(p)
And could you please name those ideals once again? Because it's very confusing.
Replies from: None↑ comment by [deleted] · 2012-05-10T15:53:28.345Z · LW(p) · GW(p)
In the case of Singapore vs. London (implicitly including the governing structure of Britain since London isn't a city state)? A few I can think of straight away:
Democratic decision making. Therapeutic rather than punitive law enforcement. Lenient punishment of crime. Absence of censorship.
Naturally, none of these are fully realized in London either. Britain doesn't have real free speech, yet it has much more of it than Singapore. Britain has (in my opinion) silly and draconian anti-drug laws, but it doesn't execute people for smuggling drugs. London doesn't have corporal or capital punishment. The parties in Britain are mostly the same class of people, yet at least Cerberus (Lib/Lab/Con) has three heads, and you get to vote for the one that promises to gnaw at you the least; Singapore is democratic in form only, and that form is a very transparent cover. Only one party has a chance of victory, and it has been that way and will remain that way for some time.
Yet despite all these infractions against stated Western ideals, life isn't measurably massively worse in Singapore than in London. And Singapore seems to work better as a multi-ethnic society than London. The world is globalizing; de facto multiculturalism is the destined future of every city from Vladivostok to Santiago, so the Davos men tell us. No place like Norway or Japan in our future, but elections where we will see ethnic blocs and identity politics. I don't know about you, but I prefer Lee Kuan Yew to that mess of tribal politics. Which city would deal better with a riot? Actually, which city is more likely to have a riot? Recall what Lee said, in his autobiography and interviews, that he learned from the 1960s riots. Did it work? It sure looks like it did. Also recall where Singapore started from, and where neighbouring Malaysia, from which it diverged, is today. Which is the better model to pull the global south out of poverty? Which is the better model to have the world's peoples live side by side? Which place will likely be safer, more liveable and more prosperous in 20 years' time?
It seems in my eyes that Singapore is clearly winning in such a comparison. Yet clearly it does so precisely by ignoring several universalist ideals. Strangely they didn't seem to have needed to give up iPods and other marvels of modern technology to do it either.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-05-10T16:13:45.653Z · LW(p) · GW(p)
Yet despite all these infractions against stated Western ideals, life isn't measurably massively worse in Singapore than in London.
Taboo "worse"!
If by life not being "worse" you mean the annual income or the quality of healthcare or the amount of street crime, maybe it's so. If one values e.g. being able to contribute to a news website without fear of fines or imprisonment (see e.g. Gibson's famous essay where he mentions that releasing information about Singapore's GDP could be punished with death), or not fearing for the life of a friend whom you smoke marijuana with, or being able to think that the government is at least a little bit afraid of you (this not necessarily being real, just a pleasant delusion to entertain, like so many others we can't live without)... in short, if one values the less concrete and material things that speak to our more complex instincts, it's not nearly so one-sided.
That's why I dislike utilitarianism; it says without qualification that a life always weighs the same, whatever psychological climate it is lived in (the differences are obvious as soon as you step off a plane, I think - see Gibson's essay again), and that a death always weighs the same, whether you're killed randomly by criminals (as in the West) or unjustly and with malice by the government (as in Singapore), et cetera, et cetera... It's, in the end, not very compatible with the things that liberals OR classical conservatives love and hate. Mere safety and prosperity are not the only things a society can strive for.
Replies from: None, Eugine_Nier↑ comment by [deleted] · 2012-05-10T16:15:11.160Z · LW(p) · GW(p)
If by life not being "worse" you mean the annual income or the quality of healthcare or the amount of street crime, maybe it's so.
Yes. But these are incredibly important things to hundreds of millions of people alive today drowning in violence, disease and famine. What do spoiled first world preferences count against such multitudes?
And you know what, I think 70% of people alive today in the West wouldn't in practice much miss a single thing you mention, though they might currently say or think they would.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-05-10T16:27:22.885Z · LW(p) · GW(p)
There's a threshold where violence, disease and hunger stop being disastrous in our opinion (compare e.g. post-Soviet Eastern Europe to Africa), and that threshold, as we can see, doesn't require brutal authoritarianism to maintain, or even to achieve. Poland transitioned to a liberal democracy directly after the USSR fell, although its economy was in shambles (and it had little experience of liberalism and democracy before WW2); Turkey's leadership became softer after Ataturk achieved his primary goals of modernization; etc, etc. There's a difference between a country being a horrible hellhole and merely lagging behind in material characteristics; the latter is an acceptable cost for attempting liberal policies, to me. I accept that the former might require harsh measures to overcome, but I'd rather see those measures taken by an internally liberal colonial power (like the British Empire) than by a local regime.
Replies from: None↑ comment by [deleted] · 2012-05-10T16:29:28.783Z · LW(p) · GW(p)
There's a difference between a country being a horrible hellhole and merely lagging behind in material characteristics; the second is an acceptable cost for attempting liberal policies to me.
The actual real people living there - suppose you could ask them: which do you think they would choose? And don't forget those are mere stated preferences, not revealed ones.
If you planted Singapore on their borders wouldn't they try to move there?
Replies from: Multiheaded↑ comment by Multiheaded · 2012-05-10T16:35:06.128Z · LW(p) · GW(p)
Sure, Singapore is much better than Africa; I never said otherwise! However, if given the choice, the more intelligent Africans would probably be more attracted to a Western country, where their less tangible needs (like the need for warm fuzzies) would also be fulfilled. Not many Singaporeans probably would be, but that's because Singaporean society does at least as much brainwashing as the Western one!
Replies from: None, Eugine_Nier↑ comment by [deleted] · 2012-05-10T18:31:49.621Z · LW(p) · GW(p)
I don't understand why you think "warm fuzzies" are in greater supply in London than in Singapore. They are both nice places to live, or can be, even in their intangibles. London-brainwashing is one way to inoculate yourself against Singapore-brainwashing, but perhaps there is another way?
Replies from: Multiheaded↑ comment by Multiheaded · 2012-05-10T19:48:20.413Z · LW(p) · GW(p)
Have you been to Singapore for any amount of time? I haven't (my dad has, for a day or so, when he worked on a Soviet science vessel), but I trust Gibson and can sympathize with his viewpoint. At the very least I observe that it does NOT export culture or spread memes. These are not the signs of a vibrant and sophisticated community!
Replies from: None↑ comment by [deleted] · 2012-05-11T00:03:33.786Z · LW(p) · GW(p)
At the very least I observe that it does NOT export culture or spread memes.
What could you mean by this that isn't trivially false?
I haven't read the Gibson article (but I will). I know that "disneyland" and "the death penalty" are both institutions that are despised by a certain cohort, but they are not universally despised, and their admirers are not all warmfuzzophobic psychos. Artist-and-writer types don't flock to Singapore, but they don't flock to Peoria, Illinois either, do they?
Replies from: Multiheaded↑ comment by Multiheaded · 2012-05-11T08:58:20.321Z · LW(p) · GW(p)
Artist-and-writer types don't flock to Singapore, but they don't flock to Peoria Illinois either do they?
Downvoted without hesitation.
If you have the unvoiced belief that cultural products (especially high-quality ones) and memes are created by some specific breed of "artist-and-writer types" (wearing scarves and being smug all the time, no doubt!), then I'd recommend purging it, seeing as it suggests a really narrow view of the world. A country can have a thriving culture not because artistic people "flock" there, but because they are born there, given an appropriate education and allowed to interact with their own roots and community!
By your logic, "artist-and-writer types" shouldn't just not flock to, but actively flee the USSR/post-Soviet Russia. And indeed many artists did, but enough remained that most people on LW who are into literature or film can probably name a Russian author or Russian movie released in the last half-century. Same goes for India, Japan, even China and many other poor and/or un-Westernized places!
Replies from: Eugine_Nier, None↑ comment by Eugine_Nier · 2012-05-12T06:58:40.603Z · LW(p) · GW(p)
And indeed many artists did, but enough remained that most people on LW who are into literature or film can probably name a Russian author or Russian movie released in the last half-century. Same goes for India, Japan, even China and many other poor and/or un-Westernized places!
Notice how this more or less refutes the argument you tried to make in the grandparent.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-05-12T07:50:39.268Z · LW(p) · GW(p)
I'm not making the argument that liberal democracy directly correlates to increasing the cultural value produced. Why else would I defend Iran in that particular regard? No, no, the object of my scorn is technocracy (at least, human technocracy) and I'm even willing to tolerate some barbarism rather than have it spread over the world.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-05-12T21:33:22.250Z · LW(p) · GW(p)
What definition of technocracy are you using that excludes the USSR and India before its economic liberalization?
↑ comment by [deleted] · 2012-05-11T14:01:34.707Z · LW(p) · GW(p)
You seem to have read some hostility towards artists and writers into my comment, probably because of "types" and "flock"? These are just writing tics, I intended nothing pejorative.
I hold no such belief, and I'm glad you don't either. I only want to emphasize my opinion that Singapore does have a thriving culture, even if it does not have a thriving literary or film industry. But since you admit you don't know a lot about it I'm curious why you have so much scorn for the place? A city can have something to recommend itself even if it hasn't produced a good author or a good movie.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-05-11T14:21:41.438Z · LW(p) · GW(p)
In short, well, yeah: I hold more "formal" and "portable" culture, such as literature, music or video, to have relatively more value than the other types of "culture", such as local customs and crafting practices - which I assume you meant by "thriving culture" here. All are necessary for harmonious development, but I'd say that e.g. a colorful basket-weaving tradition in one neighborhood, experienced and participated in by the locals, is not quite as important and good to have as an insightful and entertaining story, or a beautiful melody - the latter can still have relevance continents or centuries apart.
Some African tribe can also have a thriving culture like that, but others can't experience it without being born there, it can be unsustainable in the face of technical progress, it can interfere with other things important for overall quality of life (trusting a shaman about medicine can be bad for your health), etc. Overall, you probably get what I'm talking about.
Sure, that's biased and un-PC in a way, but that's the way that I see the world.
(I don't have any scorn for Singapore as a nation and a culture, I just don't care much for a model of society imposed upon it by the national elites in the 20th century that, unlike broadly similar societies in e.g. Japan or even China, doesn't seem to produce those things I value. Even if its GDP per capita is now 50% or so higher than somewhere else. Heck, even Iran - a theocracy that's not well-off and behaves rather irrationally - has been producing acclaimed literature and films, despite censorship.)
Replies from: None↑ comment by [deleted] · 2012-05-11T23:54:29.667Z · LW(p) · GW(p)
It seems to me that if you are talking about artistic achievements that have stood the test of centuries, then you are talking almost exclusively about the West, which I agree is utterly dominant in cultural exports. What I have in mind when I say "Singapore culture is thriving" is that it's a city filled with lovely people going about their business. You could appreciate Singapore culture because you find Muslim businessmen or guest-worker IT types agreeable -- maybe you like their jokes. You could hate Singapore culture if you instead found Muslim businessmen to be vacant and awful. But couldn't we allow that the intelligent African who kicked the discussion off might have either taste? Then we should find out what his tastes are before recommending that he choose London over Singapore.
I read "Disneyland with the death penalty." Gibson's not a very good travel-writer, there's hardly any indication in the article that he spoke to anyone while he was there.
broadly similar societies in e.g. Japan or even China, doesn't seem to produce those things I value
You're not being fair. Singaporeans would have surely produced something to your tastes, if there were a billion of them and their country were two thousand years old.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2012-05-12T00:34:41.094Z · LW(p) · GW(p)
I would like seeing comments on Gibson's article from Singaporeans, including ex-pat Singaporeans.
↑ comment by Eugine_Nier · 2012-05-11T04:11:50.214Z · LW(p) · GW(p)
Konkvistador's point is that third world countries attempting to imitate western countries haven't had much success.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-05-11T08:44:52.718Z · LW(p) · GW(p)
When Turkey was modernizing, it sure as heck was looking towards Europe for examples; it just didn't implement democratic mechanisms straight away and restricted religious freedom. And if you look at Taiwan, Japan, Ghana, etc... sure, they might be ruled by oligarchic clans in practice, but other than that [1] they have many more similarities than differences with today's Western countries! Of course a straight-up copy-paste of institutions and such is bound to fail, but a transition with those institutions etc. in mind as the preferred end state seems to work.
[1] Of course, Western countries are ruled by what began as oligarchic clans too, but they got advanced enough that there's a difference. And, for good or ill, they are meritocratic.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-05-12T06:49:15.030Z · LW(p) · GW(p)
I'm not familiar with Ghana, but both Japan and Taiwan had effectively one-party systems while modernizing.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-05-12T07:55:32.123Z · LW(p) · GW(p)
I don't care all that much about political democracy; what I meant is that Japan, India or, looking at the relative national conditions, even Turkey did NOT require some particular ruthlessness to modernize.
edit: derp
Replies from: SusanBrennan, Eugine_Nier↑ comment by SusanBrennan · 2012-05-12T21:36:13.702Z · LW(p) · GW(p)
even Turkey did NOT require some particular ruthlessness to modernize.
Could you explain the meaning of this sentence please. I'm not sure I have grasped it correctly. To me it sounds like that you are saying that there was no ruthlessness involved in Atatürk's modernizing reforms. I assume that's not the case, right?
Replies from: Multiheaded↑ comment by Multiheaded · 2012-05-13T09:34:06.595Z · LW(p) · GW(p)
Compared to China or Industrial Revolution-age Britain? Hell no, Ataturk pretty much had silk gloves on. At least, that's what Wikipedia tells me. He didn't purge political opponents except for one incident where they were about to assassinate him, he maintained a Western facade over his political maneuvering (taking pages from European liberal nationalism of the previous century), etc, etc.
Replies from: Randaly, Eugine_Nier↑ comment by Randaly · 2012-05-13T10:05:42.265Z · LW(p) · GW(p)
To extent that this is a discussion of quality of life and attractiveness of a country, as opposed to what is strictly speaking necessary for development, it's worth remembering the Armenian genocide.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-05-13T10:44:02.926Z · LW(p) · GW(p)
There's no evidence that Ataturk was more complicit in that than, say, many respected public servants in 50s-60s Germany were complicit in the Holocaust. Nations just go insane sometimes, and taboos break down, and all that. It takes a hero to resist.
Replies from: Randaly↑ comment by Randaly · 2012-05-13T11:37:37.758Z · LW(p) · GW(p)
I feel pretty confident that Niall Ferguson, in his The War of the World, claims that Ataturk directly oversaw at least one massacre; I don't have my copy on hand, however. Also, the Armenian National Institute claims that Ataturk was "the consummator of the Armenian Genocide."
Also, Israel Charney (the founder of the International Association of Genocide Scholars) says:
It is believed that in Turkey between 1913 and 1922, under the successive regimes of the Young Turks and of Mustafa Kemal (Ataturk), more than 3.5 million Armenian, Assyrian and Greek Christians were massacred in a state-organized and state-sponsored campaign of destruction and genocide, aiming at wiping out from the emerging Turkish Republic its native Christian populations.
↑ comment by Eugine_Nier · 2012-05-13T18:39:10.980Z · LW(p) · GW(p)
Compared to China or Industrial Revolution-age Britain? Hell no, Ataturk pretty much had silk gloves on.
Really, Ataturk was less harsh than Industrial Revolution-age Britain? I find this highly unlikely (unless you're talking about their colonial practices, in which case the Armenian genocide is relevant). I think the reason you're overestimating the relative harshness of Britain is that Britain had more freedom of speech than other industrializing nations, and thus its harshness (such as it was) is better documented.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-05-14T06:44:46.183Z · LW(p) · GW(p)
http://en.wikipedia.org/wiki/Enclosure
http://en.wikipedia.org/wiki/Riot_Act
http://en.wikipedia.org/wiki/Peterloo_Massacre
http://en.wikipedia.org/wiki/Great_Famine_%28Ireland%29
http://en.wikipedia.org/wiki/Industrial_Revolution#Child_labour
http://en.wikipedia.org/wiki/Opposition_to_the_Poor_Law
http://www.victorianweb.org/history/workers1.html
http://www.victorianweb.org/history/workers2.html
(That's just after a fifteen-minute search. By the way, haven't you read Dickens? He gives quite a vivid contemporary account of social relations, although dramatized.)
Replies from: Eugine_Nier, None↑ comment by Eugine_Nier · 2012-05-18T03:31:35.442Z · LW(p) · GW(p)
http://en.wikipedia.org/wiki/Enclosure
http://en.wikipedia.org/wiki/Riot_Act
Are you claiming that similar and worse things didn't happen in Turkey?
Let me get this straight: you're trying to argue that Britain was harsh because some people expressed opposition to a law you like?
By the way, haven't you read Dickens?
Yes, that's what I meant by Britain's harshness (such as it was) being better documented thanks to its freedom of speech.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-05-18T03:57:04.901Z · LW(p) · GW(p)
Are you claiming that similar and worse things didn't happen in Turkey?
With the exception of the Armenian genocide (which is comparable in vileness to many things, including the actions of that wonder of private enterprise, the East India Company) - yes. Not during the late 19th and 20th centuries, I mean. Turkish landlords might've been feudal lords, but they didn't outright steal the entirety of their tenants' livelihood from under them.
Let me get this straight: you're trying to argue that Britain was harsh because some people expressed opposition to a law you like?
The other way around! Many respected people hated and denounced it so much, it famously prompted Dickens to write Oliver Twist.
↑ comment by [deleted] · 2012-05-14T12:31:24.764Z · LW(p) · GW(p)
"The blogosphere overflows with Google Pundits; those who pooh-pooh, with a few search queries, an argument that runs counter to their own ideological assumptions, usually regarding a subject with which they possess only a passing familiarity." It always gets my goat when the other guy does it.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-05-14T13:11:03.931Z · LW(p) · GW(p)
I knew perfectly well about all of those except the Great Famine before searching, thank you very much! (I used to think there was only one Irish famine.) That's why I felt confident in saying that 20th century Turkey was not as bad! "Fifteen-minute search" referred to a search for articles to show in support of my argument, not an emergency acquisition of knowledge for myself.
↑ comment by Eugine_Nier · 2012-05-12T21:27:59.279Z · LW(p) · GW(p)
Taboo 'ruthlessness'. For example Japan was certainly ruthless while modernizing by any reasonable definition.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-05-13T09:30:30.923Z · LW(p) · GW(p)
It didn't fully come into the "Universalist" sphere, ideologically and culturally, until its defeat in WW2, and the most aggressive and violent of its actions were committed in a struggle for expansion against Western dominance.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-05-13T18:33:28.030Z · LW(p) · GW(p)
Konkvistador's argument would be that it wouldn't have been able to modernize nearly as effectively if it had come into the "Universalist" sphere before industrializing.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-05-14T06:13:37.532Z · LW(p) · GW(p)
Maybe, I don't know. On the other hand, maybe it would've avoided conquest and genocide if it had come into that sphere before industrializing.
Or maybe my premise above is wrong and its opening in the Meiji era did in fact count as contact with "Universalism" - note that America and Britain's influence had been considerable there, and Moldbug certainly says that post-Civil War U.S. and post-Chartist Britain (well, he says post-1689, but the Chartist movement definitely was a victory for democracy[1]) were dominated by hardcore Protestant "Universalism".
1- Although its effects were delayed by some 20 years.
↑ comment by Eugine_Nier · 2012-05-11T05:23:23.661Z · LW(p) · GW(p)
whether you're killed randomly by criminals (as in the West) or unjustly and with malice by the government (as in Singapore)
You seem to have an overly romantic view of criminals if you think they never kill with malice.
Heck, when the government doesn't keep them in check, criminal gangs operate like mini-governments that are much worse in terms of warm fuzzies than even Singapore.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-05-11T08:38:23.625Z · LW(p) · GW(p)
In the West they operate more or less like wild animals.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-05-12T06:53:42.506Z · LW(p) · GW(p)
↑ comment by [deleted] · 2012-05-10T13:02:10.523Z · LW(p) · GW(p)
As for Moldbug's diagnosis, I'm unimpressed with his predictive abilities: he predicted Syria would be safe from revolt, right, because it was cozying up to Iran rather than to America? He has an interesting model of the world but, much like Marxism, I'm not sure Moldbuggery has much predictive capacity.
Actually, Moldbug's diagnosis does provide decent predictive power: in the West, at least, Whig history shall continue. The left shall continue to win nearly all battles over what the stated values and norms of our society should be (at least outside the economic realm).
Naturally, Whig history makes the same prediction of itself, but the model it uses to explain itself seems built more for a moral universe than the one we inhabit. Not only that, I find the stated narrative of Whig history has some rather glaring flaws. MM's theories win in my mind simply because they seem an explanation of comparable or lower complexity in which I so far haven't found comparably problematic flaws.
↑ comment by Eugine_Nier · 2012-05-11T03:55:01.947Z · LW(p) · GW(p)
As for Moldbug's diagnosis, I'm unimpressed with his predictive abilities: he predicted Syria would be safe from revolt, right, because it was cozying up to Iran rather than to America?
Yes, and notice that unlike Mubarak and Gaddafi, who both (at least partially) cozied up to America, Assad is still in charge of Syria.
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2012-05-11T10:29:25.861Z · LW(p) · GW(p)
Yes, and notice that unlike Mubarak and Gaddafi who both (at least partially) cozyed up to America, Assad is still in charge of Syria.
The prediction Moldbug made was "no civil war in Syria"; not that there would be a civil war but Assad would manage to endure it.
Indeed, in the post I link to, Mencius Moldbug seemed to be predicting that Qaddafi would endure the civil war too, as Moldbug made said post at a point in time when the war was turning in Qaddafi's favour, and Moldbug wrongly predicted that the West would not intervene to perform airstrikes.
So what exactly did he predict correctly?
↑ comment by NancyLebovitz · 2012-05-07T20:00:43.294Z · LW(p) · GW(p)
One example of a bad consequence of Universalism is the delay of the Singularity.
Not proven. It seems to me that people wildly overdo even the prejudices they have evidence for, so we don't know how much is lost due to excessive prejudice compared to how much is lost due to insufficient prejudice.
↑ comment by NancyLebovitz · 2012-05-03T15:18:07.399Z · LW(p) · GW(p)
My impression is that we aren't terribly good yet at understanding how traits which involve many genes play out, whether political correctness is involved or not.
Replies from: None, Deskchair↑ comment by [deleted] · 2012-05-03T19:32:24.280Z · LW(p) · GW(p)
Very true. I think most HBD proponents are somewhat overconfident of their conclusions (though most of them seem more likely than not). But what I think he was getting at is that we would have great difficulty acknowledging it if it were so, and that any scientist who wanted to study this is in a very rough spot.
Unlike, say, promotion of the concept of human-caused climate change, which has the support of at least the educated classes, it may be impossible for our society to assimilate such information. It seems more likely that they would rather discredit genetics as a whole, or perhaps psychometrics, or claim the scientists are faking this information out of nefarious motives. This suggests there exists a set of scientific knowledge that our society is unwilling or incapable of assimilating and using in the manner one would expect from a sane civilization.
We don't know what we don't know, we do know we simply refuse to know some things. How strong might our refusal be for some elements of the set? What if we end up killing our civilization because of such a failure? Or just waste lives?
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2012-05-04T05:19:16.639Z · LW(p) · GW(p)
I don't know if you could get away with studying the sort of thing you're describing if you framed it as "people who are good at IQ tests" or "people who have notable achievements", rather than aiming directly at ethnic/racial differences. After all, the genes and environment are expressed in individuals.
It's conceivable but unlikely that the human race is at risk because that one question isn't addressed.
Replies from: None↑ comment by [deleted] · 2012-05-04T06:06:56.018Z · LW(p) · GW(p)
It's conceivable but unlikely that the human race is at risk because that one question isn't addressed.
I think I didn't do a good job of writing the previous post. I was trying to say that regardless what the truth is on that one question (and I am uncertain on it, more so than a few months ago), it demonstrates there are questions we as a society can't deal with.
I wasn't saying that not understanding the genetic basis of intelligence is a civilization killer (I didn't mention species extinction, though that is possible as well) - though that in itself is plausible if the various people warning about dysgenics are correct - but that future such questions may be.
I argued that since reality is entangled and our ideology has no consistent relationship with reality we will keep hitting on more and more questions of this kind (ones that our society can't assimilate) and that knowing the answer to some such questions may turn out to be important for future survival.
A good hypothetical example is a very good theory on the sociology of groups or ethics that makes usable testable predictions, perhaps providing a new perspective on politics, religion and ideology or challenging our interpretation of history. It would be directly relevant to FAI yet it would make some predictions that people will refuse to believe because of tribal affiliation or because it is emotionally too straining.
Replies from: NancyLebovitz, Bugmaster↑ comment by NancyLebovitz · 2012-05-05T03:04:46.617Z · LW(p) · GW(p)
Sorry-- species extinction was my hallucination.
Dysgenics is an interesting question-- what do we need to be adapting to?
↑ comment by Bugmaster · 2012-05-04T07:35:42.164Z · LW(p) · GW(p)
I argued that since reality is entangled and our ideology has no consistent relationship with reality...
I think this statement is too strong. Our ideology doesn't have a 100% consistent relationship with reality, true, but that's not the same as 0%.
A good hypothetical example is a very good theory on the sociology of groups or ethics that makes usable testable predictions, perhaps providing a new perspective on politics, religion and ideology...
What, sort of like Hari Seldon's psychohistory? Regardless of whether our society can absorb it or not, is such a thing even possible? It may well be that group behavior is ultimately so chaotic that predicting it with that level of fidelity will always be computationally prohibitive (unless someone builds an Oracle AI, that is). I'm not claiming that this is the case (since I'm not a sociologist), but I do think you're setting the bar rather high.
↑ comment by [deleted] · 2012-05-03T12:31:05.236Z · LW(p) · GW(p)
I agree, and have for some time; I didn't mean to imply otherwise. This especially is, I think, terribly important:
Even though his prescription may be lacking (here is some criticism of neocameralism []), his description and diagnosis of everything wrong with the world is largely correct. Any possible political solution must begin from Moldbug's diagnosis of all the bad things that come with having Universalism as the most dominant ideology/religion the world has ever experienced.
But currently there is nothing remotely approaching an actionable political plan, so I advocated doing what little good one can despite Cryptocalvinism's iron grasp on the minds of a large fraction of mankind. As Moldbug says, Universalism has no consistent relation to reality. A truly horrifying description of reality if it is accurate, since existential risk reduction will eventually become entangled with some ideologically charged issue or taboo.
I wish I could be hopeful but my best estimate is that humanity is facing a no win scenario here.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-05-07T11:47:21.503Z · LW(p) · GW(p)
all the bad things that come with having Universalism as the most dominant ideology/religion the world has ever experienced
Another thing I'd like to ask you! What are those bad things in your estimate? Or, rather, what areas are we talking about? Are you mainly concerned with censorship, academic dishonesty, bad prediction-making, other theory-related flaws? Or do you find some concrete policy really awful for those epistemic reasons, like state welfare programs, ideological pressure on illiberal regimes or immigration from poor countries? (I chose those examples because I'm in favor of all three, with caveats.)
I know you're against universal suffrage, but that's more or less meta-level; is there something you really loathe that directly concerns daily life, its quality, comfort and freedoms? Of course, I know about the policy preferences Mencius himself draws from his doctrine, but his beliefs are... idiosyncratic: e.g. I don't think you'd agree with him that selling oneself and one's future children into slavery should be at all acceptable or tolerated.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2012-05-07T12:24:12.741Z · LW(p) · GW(p)
Of course, I know about the policy preferences Mencius himself draws from his doctrine
That's more than I've managed to get from my reading of him. I get no picture from his writings about what he wants life to be like -- "daily life, its quality, comfort and freedoms " -- under his preferred regime, only about what he doesn't want life to be like under the current regimes.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-05-07T13:02:51.005Z · LW(p) · GW(p)
True, it's in bits and pieces; but see e.g. the Patchwork series and try some other posts at random.
Basically, a good example of his preferences is the "total power, no influence or propaganda" model of Patchwork; in his own words, the Sovereign's government wouldn't censor dissenters because it has nothing to fear from them. Sure, I strongly doubt it would work that way, even with a perfectly rational sovereign (the blog post linked to above provides some decent criticism of that from an anarchist POV). But we nonetheless can conclude that MM would like a comfortable, rich society with liberal mores (although he does all the conservative elderly grumbling about the supposed irresponsibility and flighty behavior of Westerners today [1]) where he wouldn't ever have to worry about tribal power games or such - enforced with an iron fist, for selfish reasons of productivity and public image, and totally un-hypocritical about that.
He's okay with some redistribution of wealth (the sovereign giving money to private charities it finds worthy, which, being driven mainly by altruism, automatically care for everyone better than a disinterested bureaucracy - again, I'm a little skeptical).
Another thing he likes to say is that the capacity for violence within society should be supremely concentrated and overwhelming, and then the rational government supposedly wouldn't have to actually use it.
And then there are the totally contrarian things like his tolerance for indentured servitude on ideological grounds (look up his posts on "pronomianism"), which, along with his less disagreeable opinions, could well stem from his non-neurotypical (I take Konkvistador's word, and my impressions) wiring.
[1] When he repeats some trite age-old bullshit about "declining personal morality" - while cheering for no-holds-barred ruthless utilitarianism - that's when I tolerate him least.
↑ comment by Multiheaded · 2012-05-07T11:29:19.031Z · LW(p) · GW(p)
The followers of a religion that holds the Equality of Man as a primary tenet will be suppressing any scientific inquiry into what makes us different from one another.
There's an important question here: WHY do you think people dislike that so much that they're willing to subvert entire fields of knowledge to censor those inquiries? Please ponder that carefully and answer without any mind-killed screeds, ok?
(I'm not accusing you in advance, it's just that I've read about enough such hostile denunciations from the "Internet right" who literally say that "Universalists/The Left/whoever" simply Hate Truth and like to screw with decent society. Oh, and the "Men's Rights" crowd often suggests that those who fear inequality like that just exhibit pathetic weak woman-like thinking that mirrors their despicable lack of masculinity in other areas. And Cthulhu help you if you are actually a woman who thinks like that! Damn, I can't stand those dickheads.)
Of course, I'd like others here to also provide their perspective on probable reasons for such behavior! Don't pull any punches; if it just overwhelmingly looks like people with my beliefs are underdeveloped mentally and somewhat insane, I'll swallow that - but avoid pettiness, please.
↑ comment by Multiheaded · 2012-05-07T05:53:15.224Z · LW(p) · GW(p)
Universalism is the reason why common-sense proposals like those of Greg Cochran ( http://westhunt.wordpress.com/2012/03/09/get-smart/ ) will never be official policy.
After reading that sentence, I expected some rather radical eugenics advocacy. Then I followed that link and saw that all those suggestions (except maybe for cloning, but we can hardly know about that in advance) are really "nice" and inoffensive. Seriously, I think that if even I, who's pretty damn orthodox and brainwashed - a dyed-in-the-wool leftist, as it is - haven't felt a twinge, then you must be overestimating how superstitious and barbaric an educated Universalist is in regards to that problem.
comment by [deleted] · 2012-05-01T08:27:25.803Z · LW(p) · GW(p)
If there is something really cool and you can't understand why somebody hasn't done it before, it's because you haven't done it yourself.
-- Lion Kimbro, "The Anarchist's Principle"
Replies from: olalonde, olalonde↑ comment by olalonde · 2012-05-04T00:48:32.784Z · LW(p) · GW(p)
Forgive my stupidity, but I'm not sure I get this one. Should I read it as "[...] it's probably for the same reasons you haven't done it yourself."?
Replies from: dlthomas↑ comment by dlthomas · 2012-05-04T01:32:40.460Z · LW(p) · GW(p)
I think it just means "you should do it", which is only sometimes the appropriate response.
Replies from: Endovior↑ comment by Endovior · 2012-05-27T23:11:57.765Z · LW(p) · GW(p)
Both sound quite appropriate; it seems likely that in the process of attempting to do some crazy awesome thing, you will run into the exact reasons why nobody has done it before; either you'll find out why it wasn't actually a good idea, or you'll do something awesome.
Replies from: PrometheanFaun↑ comment by PrometheanFaun · 2012-05-28T06:46:22.371Z · LW(p) · GW(p)
But there must be better ways to find out the reasons not to do it. Just doing it instead is a tremendous waste of time.
Talking to the sorts of people who would or should have tried already might be one avenue.
Replies from: Endovior↑ comment by Endovior · 2012-05-30T07:37:52.827Z · LW(p) · GW(p)
That's obviously true, yeah. But if it's cool enough that you'd consider doing it, and you actually, as the quote implies, cannot understand why nobody has attempted it despite having done initial research, then you may be better off preparing to try it yourself rather than doing more research to try and find someone else who didn't quite do it before. Not all avenues of research are fruitful, and it might actually be better to go ahead and try than to expend a bunch of effort trying to dig up someone else's failure.
comment by chaosmosis · 2012-05-04T19:00:21.878Z · LW(p) · GW(p)
Being - forgive me - rather cleverer than most men, my mistakes tend to be correspondingly huger.
Albus Dumbledore
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-05-05T06:51:34.961Z · LW(p) · GW(p)
Sometimes I check the original and am surprised by how little I actually diverged from Rowling's Dumbledore.
Replies from: Document, MatthewBaker↑ comment by MatthewBaker · 2012-05-07T01:00:30.703Z · LW(p) · GW(p)
PHOENIX'S FATE was something I don't think Rowling's Dumbledore could have done, but up until Dumbledore lost the idiot ball in recent chapters, I fully agree with you :)
comment by John_Maxwell (John_Maxwell_IV) · 2012-05-07T18:09:23.619Z · LW(p) · GW(p)
How a game theorist buys a car (on the phone with the dealer):
"Hello, my name is Bruce Bueno de Mesquita. I plan to buy the following car [list the exact model and features] today at five P.M. I am calling all of the dealerships within a fifty-mile radius of my home and I am telling each of them what I am telling you. I will come in and buy the car today at five P.M. from the dealer who gives me the lowest price. I need to have the all-in price, including taxes, dealer prep [I ask them not to prep the car and not charge me for it, since dealer prep is little more than giving you a washed car with plastic covers and paper floormats removed, usually for hundreds of dollars], everything, because I will make out the check to your dealership before I come and will not have another check with me."
From The Predictioneer's Game, page 7.
Other car-buying tips from Bueno de Mesquita, in case you're about to buy a car:
- Figure out exactly what car you want to buy by searching online before making any contact with dealerships.
- Don't be afraid to purchase a car from a distant dealership--the manufacturer provides the warranty, not the dealer.
- Be sure to tell each dealer you will be sharing the price they quote you with subsequent dealers.
- Don't take shit from dealers who tell you "you can't buy a car over the phone" or do anything other than give you their number. If a dealer is stonewalling, make it quite clear that you're willing to get what you want elsewhere.
- Arrive at the lowest-price dealer just before 5:00 PM to close the deal. In the unlikely event that the dealer changes their terms, go for the next best price.
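The procedure above amounts to a one-round sealed-bid reverse auction: every dealer submits one all-in quote under the same terms, and the buyer commits to the single lowest bid. A minimal sketch in Python (the dealer names and prices are made up for illustration; this is not from the book):

```python
# Sketch of the Bueno de Mesquita procedure: collect one all-in quote
# per dealer, then commit to the lowest. Names and figures are hypothetical.

def pick_dealer(quotes):
    """quotes: dict mapping dealer name -> all-in price (taxes, fees, everything).
    Returns (dealer, price) for the lowest all-in quote."""
    if not quotes:
        raise ValueError("no dealers quoted a price")
    dealer = min(quotes, key=quotes.get)  # dealer whose quote is smallest
    return dealer, quotes[dealer]

# Hypothetical quotes gathered by phone before 5 P.M.
quotes = {"Downtown Motors": 24_750, "Valley Auto": 24_300, "Hilltop Cars": 25_100}
print(pick_dealer(quotes))  # -> ('Valley Auto', 24300)
```

The design point is that the dealers bid simultaneously and blind to each other's numbers, so each one's best strategy is to quote close to their true floor rather than anchor high and haggle down.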
↑ comment by Vladimir_M · 2012-05-09T02:43:56.925Z · LW(p) · GW(p)
From my limited experience with buying cars, as well as from theoretical considerations, this won't work because you lack the pre-commitment to buy at the price offered. Once they give you a favorable price, you can try to push it even further downwards, possibly by continuing to play the dealerships against each other. So they'll be afraid to offer anything really favorable. (The market for new cars is a confusopoly based on concealing the information about the dealers' exact profit margins for particular car models, which is surprisingly well-guarded insider knowledge. So once you know that a certain price is still profitable for them, it can only be a downward ratchet.)
The problem can be solved by making the process double-blind, i.e. by sending the message anonymously through a credible middleman, who communicates back anonymous offers from all dealers. (The identities of each party are revealed to the other only if the offer is accepted and an advance paid.) Interestingly, in Canada, someone has actually tried to commercialize this idea and opened a website that offers the service for $50 or so (unhaggle.com); I don't know if something similar exists in the U.S. or other countries. (They don't do any sort of bargaining, brokering, deal-hunting, etc. on your behalf -- just the service of double-anonymous communication, along with signaling that your interest is serious because you've paid their fee.) From my limited observations, it works pretty well.
↑ comment by gwern · 2012-05-07T20:08:21.680Z · LW(p) · GW(p)
I take it he does not discuss whether he actually ever did that.
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-05-08T01:13:43.339Z · LW(p) · GW(p)
I have personally purchased Toyotas, Hondas, and a Volkswagen this way. Some of my students at NYU have taken up this method and bought cars this way too... They and I have always beat the price quoted on the Internet with this method.
He further claims to have once saved $1,200 over the price quoted on the Internet for a car he negotiated for his daughter, who was 3000 miles away at the time.
Apparently being a game theory expert does not prevent one from being a badass negotiator.
Why did you guess otherwise?
Replies from: gwern, Vladimir_M↑ comment by gwern · 2012-05-08T01:23:32.197Z · LW(p) · GW(p)
Typically people describing clever complex schemes involving interacting with many other people do not actually do them. Mesquita has previously tripped some flags for me (publishing few of his predictions), so I had no reason to give him special benefit of the doubt.
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-05-08T04:21:35.624Z · LW(p) · GW(p)
Maybe many of his predictions are classified because they are for the government?
Replies from: gwern↑ comment by gwern · 2012-05-08T13:50:34.719Z · LW(p) · GW(p)
"I'd love to tell you, but then I'd have to kill you..."
Replies from: FiftyTwo↑ comment by FiftyTwo · 2012-07-25T23:04:49.209Z · LW(p) · GW(p)
Theoretically could you make an approximation of his accuracy by looking at fluctuations in death rates among relevant demographics?
Replies from: DaFranker↑ comment by DaFranker · 2012-07-26T00:50:07.250Z · LW(p) · GW(p)
Even theoretically, you would then need to have perfect information about every single other factor influencing relevant-demographics death rates, assuming you somehow magically know the exact relevant demographics. If there is even one other factor that is uncertain, you end up having to increase your approximation's margin of error proportionally to the uncertainty, and each missing data point multiplies the margin further. Eventually, it's much smarter to realize that you don't have a clue.
Now, take into account that you don't even know all of the factors, and that it's pretty much impossible to prove that you know all of the factors even if by some unknown feat you managed to figure them all out... the problem quickly becomes far beyond what can be calculated with our puny mathematics, let alone by a human. Of course, if you still just want an approximation after doing all of that, it may become possible to obtain an accurate one, but I'm not even sure of that.
Thanks for the food for thought, though.
↑ comment by Vladimir_M · 2012-05-09T03:08:35.941Z · LW(p) · GW(p)
He further claims to have once saved $1,200 over the price quoted on the Internet for a car he negotiated for his daughter, who was 3000 miles away at the time.
What does he mean by "price quoted on the Internet"? If it's the manufacturer's suggested retail price, then depending on the car model and various other factors, saving $1,200 over this price sounds unremarkable at best, and a badly losing proposition at worst. If it was the first price quoted by the dealer, it could be even worse -- at least where I live, dealers will often start with some ridiculous quote that's even higher than the MSRP.
↑ comment by Shmi (shminux) · 2012-07-15T23:16:57.322Z · LW(p) · GW(p)
Having bought/leased a few new and used cars over the years, I immediately think of a number of issues with this, mainly because it trips their "we don't do it this way, so we would rather not deal with you at all" defense. This severely reduces the number of dealers willing to engage. It's probably still OK in a big city, but not where there are only 2 or 3 dealerships of each kind around. There are other issues, as well:
- Bypassing the salesperson and getting to talk to the manager directly is not easy, as it upsets their internal balance of fairness. The difference is several hundred dollars.
- The exact model may not be available unless it's common, and the wait time might be more than you are prepared to handle. Though dealers do share inventory and exchange cars, they are less likely to bother if they know that the other place will get the same request.
- They are not likely to give you the best deal possible, because they are not invested in the sale (use sunk cost to your advantage).
- They are not likely to believe that you will do as you say, because why should they? There is nothing for you to lose by changing your mind. In fact, once you have all the offers, you ought to first consider what to do next, not blindly follow through on the promise.
- This approach, while seemingly neutral, comes across as hostile, because it's so impersonal. This has extra cost in human interactions.
- "Searching online" is no substitute for kicking the tires for most people. The last two cars I leased I found on dealers' lots after driving around (way after I researched the hell out of it online), and they were not the ones I thought I would get.
- And the last one: were this so easy, the various online car-selling outfits, like Autobytel, would do so much better.
So, while this strategy is possibly better than the default of driving around the lots and talking to the salespeople, it is far from the best way to buy a car.
comment by Jayson_Virissimo · 2012-05-01T08:08:55.364Z · LW(p) · GW(p)
If one does not know to which port one is sailing, no wind is favorable.
Replies from: bentarm
↑ comment by bentarm · 2012-05-01T20:27:35.903Z · LW(p) · GW(p)
In this case, isn't it equally true that no wind is unfavourable?
Replies from: Eliezer_Yudkowsky, Richard_Kennaway, cody-bryce↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-05-01T23:38:26.948Z · LW(p) · GW(p)
"The Way is easy for those who have no utility function." -- Marcello Herreshoff
Replies from: Vladimir_Nesov, Dorikka↑ comment by Vladimir_Nesov · 2012-05-20T00:11:08.329Z · LW(p) · GW(p)
Not sure; this came up in a few previous conversations. If an agent is almost certain that it's completely indifferent to everything, the most important thing it could do is to pursue the possibility that it's not indifferent to something, that is, to work primarily on figuring out its preference on the off chance that its current estimate might turn out to be wrong. So it still takes over the universe and builds complicated machines (assuming it has enough heuristics to carry out this line of reasoning).
Say, "Maybe 1957 is prime after all, and hardware used previously to conclude that it's not was corrupted," which is followed by a sequence of experiments that test the properties of preceding experiments in more and more detail, and then those experiments are investigated in turn, and so on and so forth, to the end of time.
↑ comment by Dorikka · 2012-05-02T02:04:51.448Z · LW(p) · GW(p)
If someone didn't value any world-states more than any others, I'm not sure that a Way would actually exist for them, as they could do nothing to increase the expected utility of future world-states. Thus, it doesn't seem to really make sense to speak of such a Way being easy or hard for them.
Am I missing something?
Replies from: olalonde↑ comment by olalonde · 2012-05-02T02:19:34.509Z · LW(p) · GW(p)
I think you're overanalyzing here; the quote is meant to be absurd.
Replies from: chaosmosis↑ comment by chaosmosis · 2012-05-03T01:40:13.167Z · LW(p) · GW(p)
Whaaa?
Someone explain please. It didn't seem absurd when I read it.
Replies from: magfrump↑ comment by magfrump · 2012-05-03T03:55:50.658Z · LW(p) · GW(p)
If you don't want anything, it's very easy to get what you want.
However, everyone reading this post is a human, and therefore is almost certain to want many things: to breathe, to eat, to sleep in a comfortable place, to have companionship, the list goes on.
I interpreted it similarly to part of this article:
you may choose to [do whatever you want], but only if you don't mind dying.
Replies from: chaosmosis
↑ comment by chaosmosis · 2012-05-03T04:36:24.313Z · LW(p) · GW(p)
Since you said the quote itself was absurd, I thought you were saying the post was an internally flawed strawman meant for the purpose of satire, but you meant something else by that word.
Replies from: olalonde↑ comment by olalonde · 2012-05-03T20:37:27.505Z · LW(p) · GW(p)
I'm the one who said that. Just to make it clear, I do agree with your first comment: taken literally, the quote doesn't make sense. Do you get it better if I say: "It is easy to achieve your goals if you have no goals"? I concede absurd was possibly a bit too strong here.
Replies from: chaosmosis↑ comment by chaosmosis · 2012-05-03T21:00:52.676Z · LW(p) · GW(p)
Okay, that makes more sense, yeah I see what you mean and agree.
↑ comment by Richard_Kennaway · 2013-05-20T14:59:53.220Z · LW(p) · GW(p)
That depends on whether your goal is to travel or to arrive.
↑ comment by cody-bryce · 2013-05-20T14:54:24.926Z · LW(p) · GW(p)
I am reminded of an exchange between Alice and the Cheshire cat
`Would you tell me, please, which way I ought to go from here?'
`That depends a good deal on where you want to get to,' said the Cat.
`I don't much care where--' said Alice.
`Then it doesn't matter which way you go,' said the Cat.
–Lewis Carroll
Replies from: Jiro↑ comment by Jiro · 2013-05-20T19:57:54.164Z · LW(p) · GW(p)
Of course, this requires that the Cat either is being difficult, or doesn't understand the word "much".
Which applies to the first quote too: if your destination is not limited to a single possible port, but it is limited to something narrower than "anywhere at all", then bad winds can in fact exist. (Applying this insight to the metaphorical content of that statement is an exercise for the reader.)
Replies from: cody-bryce↑ comment by cody-bryce · 2013-05-20T20:42:06.439Z · LW(p) · GW(p)
I don't see how this criticism applies to the original quote.
(And yes, the Cheshire Cat's entire schtick is being difficult.)
Replies from: DSimon, Jiro↑ comment by DSimon · 2013-05-21T20:45:51.613Z · LW(p) · GW(p)
Even if you don't know which port you're going to, a wind that blows you to some port is more favorable than a wind that blows you out towards the middle of the ocean.
Replies from: cody-bryce↑ comment by cody-bryce · 2013-05-22T13:53:39.903Z · LW(p) · GW(p)
That's only true if you prefer ports reached sooner or ports on this side of the ocean.
↑ comment by Jiro · 2013-05-21T20:13:28.218Z · LW(p) · GW(p)
It is possible that you don't know which port you're sailing to because you have ruled out some possible destinations, but there is still more than one possible destination remaining. If so, it's certainly possible that a wind could push you away from all the good destinations and towards the bad destinations. (It is also possible that a wind could push you towards one of the destinations on the fringe, which pushes you farther from your destination based on a weighted average of distances to the possible destinations, even though it is possible that the wind is helping you.)
(Consider how the metaphor works with sailing=search for truth, port=ultimate truth, and bad wind=irrationality. It becomes a way to justify irrationality.)
The difference between "no knowledge about your destination whatsoever" and "not knowing your destination" is the difference between "I don't care where I'm going" and "I don't much care where I'm going" in the Cheshire Cat's version.
comment by rocurley · 2012-05-03T23:42:32.187Z · LW(p) · GW(p)
Inspired by maia's post:
“When life gives you lemons, don’t make lemonade. Make life take the lemons back! Get mad! I don’t want your damn lemons, what the hell am I supposed to do with these? Demand to see life’s manager! Make life rue the day it thought it could give Cave Johnson lemons! Do you know who I am? I’m the man who’s gonna burn your house down! With the lemons! I’m gonna get my engineers to invent a combustible lemon that burns your house down!”
---Cave Johnson, Portal 2
Replies from: Nisan, SpaceFrank, None, None, DSimon, dlthomas, Baruta07, Baruta07, Baruta07↑ comment by SpaceFrank · 2012-05-04T20:03:04.482Z · LW(p) · GW(p)
When life gives you lemons, order miracle berries.
↑ comment by [deleted] · 2012-05-04T18:29:57.644Z · LW(p) · GW(p)
When life gives you lemons, lemon cannon.
comment by Grognor · 2012-05-01T07:13:26.114Z · LW(p) · GW(p)
Let me never fall into the vulgar mistake of dreaming that I am persecuted whenever I am contradicted.
-Ralph Waldo Emerson, probably not apocryphal (at first, this comment said "possibly apocryphal since I can't find it anywhere except collections of quotes")
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2012-05-01T10:41:05.558Z · LW(p) · GW(p)
It's in WikiQuotes.
Replies from: Grognor↑ comment by Grognor · 2012-05-01T10:51:01.146Z · LW(p) · GW(p)
Which is a collection of quotes!
One that anyone can edit!(!)
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2012-05-01T12:16:11.803Z · LW(p) · GW(p)
But it gives a source!
One that anyone can check!
Replies from: Grognor↑ comment by Grognor · 2012-05-01T12:28:36.646Z · LW(p) · GW(p)
(Just going to note that I wholly disapprove of this line of conversation.)
You have either reached a page that is unavailable for viewing or reached your viewing limit for this book.
It is not as though I did not try to find a source, damnit. Though on closer inspection I see it highlights some invisible text, so that counts as good evidence it's real.
Replies from: Danfly
comment by Ghatanathoah · 2012-05-02T16:42:39.279Z · LW(p) · GW(p)
"It is indeed true that he [Hume] claims that 'reason is, and ought only to be the slave of the passions.' But a slave, it should not be forgotten, does virtually all the work."
comment by Mark_Eichenlaub · 2012-05-02T05:34:28.616Z · LW(p) · GW(p)
Asked today if the Titanic II could sink, Mr Palmer told reporters: "Of course it will sink if you put a hole in it."
http://www.smh.com.au/business/clive-palmer-plans-to-build-titanic-ii-20120430-1xtrc.html
comment by Richard_Kennaway · 2012-05-09T07:48:11.299Z · LW(p) · GW(p)
Saying "what kind of an idiot doesn't know about the Yellowstone supervolcano" is so much more boring than telling someone about the Yellowstone supervolcano for the first time.
Replies from: Desrtopa
comment by William_Kasper · 2012-05-06T20:10:15.697Z · LW(p) · GW(p)
[Political "gaffe" stories] are completely information-free news events, and they absolutely dominate political news coverage and analysis. It's like asking your doctor if the X-rays show a tumor, and all he'll talk about is how stupid the radiologist's haircut looks. . . . ["Blast"] stories are. . . just as content-free as the "gaffe" stories. But they are popular for the same reason: There's a petty, tribal satisfaction in seeing a member of our team really put the other team in their place. And there's a rush of outrage adrenaline when the other team says something mean about us. So, instead of covering pending legislation or the impact it could have on your life, the news media covers the dick-measuring contest.
-David Wong, 5 Ways to Spot a B.S. Political Story in Under 10 Seconds
Replies from: gwillen, albeola↑ comment by albeola · 2012-05-06T21:07:11.862Z · LW(p) · GW(p)
instead of covering pending legislation or the impact it could have on your life
If "impact on your life" is the relevant criterion, then it seems to me Wong should be focusing on the broader mistake of watching the news in the first place. If the average American spent ten minutes caring about e.g. the Trayvon Martin case, then by my calculations that represents roughly a hundred lifetimes lost.
Replies from: homunq
comment by Grognor · 2012-05-01T07:15:47.996Z · LW(p) · GW(p)
The Disobedi-Ant
The story of the Disobedi-Ant is very short. It refused to believe that its powerful impulses to play instead of work were anything but unique expressions of its very unique self, and it went its merry way, singing, "What I choose to do has nothing to do with what any-ant else chooses to do! What could be more self-evident?"
Coincidentally enough, so went the reasoning of all its colony-mates. In fact, the same refrain was independently invented by every last ant in the colony, and each ant thought it original. It echoed throughout the colony, even with the same melody.
The colony perished.
-Douglas Hofstadter (posted with gwern's "permission")
Replies from: Grognor, gwern↑ comment by Grognor · 2012-05-11T19:38:33.503Z · LW(p) · GW(p)
It bothers me how there are no replies to this quote that aren't replies to gwern's prediction comment.
Replies from: Monkeymind↑ comment by Monkeymind · 2012-05-11T20:02:45.715Z · LW(p) · GW(p)
He walked along the trail with all the other workers. They had toiled all day in the field, and now were heading back to join the rest just over the hill. His kind had lived and worked this land for over a thousand years. They are the hardest workers anyone has ever known. They were all tired and hungry, and it was quiet as they mindlessly shuffled down the trail. He had walked this way many times before, as they all had, without a single thought about the individual sacrifice each has made for the collective. This is the way it has always been. His large strong body moved forward with no thought about what tomorrow would bring. In fact, he didn’t think anything at all. None of them did.
Suddenly a bright white, intensely hot beam of light shot out of the sky. His legs curled up underneath him as he collapsed, instantly dead. His insides were cooked, and a single puff of smoke rose from his body with a pop. “Time to eat,” Jimmy’s mother called from the back porch. Jimmy put his magnifying glass in his pocket and muttered under his breath, “Stupid ants.” End of the Trail- Monkeymind
↑ comment by gwern · 2012-05-01T18:15:51.308Z · LW(p) · GW(p)
I merely said
23:18 <@gwern>
Grognor: it's certainly short. it's worth a try although don't expect it to go outside -2<=x<=15
If anyone was wondering. (So far my prediction is right...)
Replies from: ArisKatsaris, faul_sname↑ comment by ArisKatsaris · 2012-05-03T11:22:17.040Z · LW(p) · GW(p)
I'm downvoting the whole karma-discussion, because it's effectively karma-wanking spam that abuses the karma-system, and distorts what actual value karma has in estimating the value of any given quote.
Keep this crap to predictionbook.
Replies from: JGWeissman, Jayson_Virissimo↑ comment by JGWeissman · 2012-05-11T20:16:14.813Z · LW(p) · GW(p)
I think that Gwern's comment, being primarily a clarification of "(posted with gwern's 'permission')", could be interpreted more charitably. I agree about the responding karma predictions, though.
Replies from: gwern↑ comment by Jayson_Virissimo · 2012-05-03T11:48:28.393Z · LW(p) · GW(p)
Keep this crap to predictionbook.
My comment above was intended to do just that.
↑ comment by faul_sname · 2012-05-01T22:43:19.691Z · LW(p) · GW(p)
I'm predicting 10<=x<=30 (it's currently at 7).
Replies from: MinibearRex, thomblake↑ comment by MinibearRex · 2012-05-02T05:02:58.077Z · LW(p) · GW(p)
And just how much is this line of discussion going to change the karma amount? I'm expecting it to go higher than any (reasonable) estimate, just because I expect LWers to want to screw with people.
Replies from: faul_sname, Jayson_Virissimo, chaosmosis↑ comment by faul_sname · 2012-05-02T06:05:11.903Z · LW(p) · GW(p)
Perhaps not, now that you've said this.
Does TDT account for agents that deliberately try to go against your expectations?
↑ comment by Jayson_Virissimo · 2012-05-02T05:31:48.059Z · LW(p) · GW(p)
How high is "reasonable"?
EDIT: The reason I ask is so that I can add it as a prediction statement on PredictionBook.
Replies from: MinibearRex↑ comment by MinibearRex · 2012-05-02T18:17:38.499Z · LW(p) · GW(p)
I'm expecting that people are currently looking at the current balance of 22, seeing that faul_sname has predicted [10, 30], and will upvote to try to get it out of that range. Which is a good thing for Grognor. But if you want me to pick a "reasonable" estimate, the same process will repeat itself, using whatever value I give. So I need to pick a value that's high enough that I don't think people will even try to reach it.
3^^^3 ;)
Replies from: VKS↑ comment by VKS · 2012-05-02T21:10:23.910Z · LW(p) · GW(p)
What if I predicted that the karma was going to end up even?
Edit: Or better, that it was going to end in a seven?
Replies from: dlthomas↑ comment by dlthomas · 2012-05-02T21:24:22.250Z · LW(p) · GW(p)
What's the last digit (base 10) of 3^^^3, anyway?
Replies from: komponisto, Zack_M_Davis, Randaly↑ comment by komponisto · 2012-05-02T22:05:40.219Z · LW(p) · GW(p)
7. See here
(EDIT: apparently it's no longer possible to link to sections of Wikipedia articles using #. Above link is meant to point to the section of the article entitled "Rightmost decimal digits...")
Replies from: JGWeissman↑ comment by JGWeissman · 2012-05-02T22:11:30.360Z · LW(p) · GW(p)
(EDIT: apparently it's no longer possible to link to subsections of Wikipedia articles using #. Above link is meant to point to the section of the article entitled "Rightmost decimal digits...")
URL encode the apostrophe, and it works.
↑ comment by Zack_M_Davis · 2012-05-02T21:54:51.778Z · LW(p) · GW(p)
I haven't studied number theory, but I expect that someone who has would be able to answer this. Successive powers of three have final digits in the repeating pattern 1, 3, 9, 7, so if we can find N mod 4 for the N such that 3^N = 3^^^3, then we would have our answer.
Replies from: VKS↑ comment by VKS · 2012-05-02T22:02:04.850Z · LW(p) · GW(p)
3^odd ≡ 3 (mod 4), so it ends in 7.
(but I repeat myself)
Replies from: Zack_M_Davis↑ comment by Zack_M_Davis · 2012-05-02T22:05:08.271Z · LW(p) · GW(p)
I think you're mistaken. Counterexample: 3^9 = 19683.
Replies from: JGWeissman↑ comment by JGWeissman · 2012-05-02T22:09:33.952Z · LW(p) · GW(p)
19683 ≡ 3 (mod 4)
Replies from: VKS, Zack_M_Davis↑ comment by Zack_M_Davis · 2012-05-02T22:18:09.163Z · LW(p) · GW(p)
Oh, sorry; I agree that odd powers of three are 3 mod 4, but I had read VKS as claiming that odd powers of three had a final digit of seven; I probably misunderstood the argument. [EDIT: Yes, I was confused; I understand now.]
Replies from: VKS↑ comment by VKS · 2012-05-02T22:24:57.572Z · LW(p) · GW(p)
right, well, it's just that 3^^^3 = 3^3^3^3^3...3^3^3 = 3^(3^3^3^3...3^3^3), for a certain number of threes. So, 3^^^3 is 3^(some odd power of three).
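The argument in this subthread can be checked numerically. As a small sketch: the last digits of powers of 3 cycle with period 4 (3, 9, 7, 1), and every tower 3^3^…^3 has an exponent that is an odd power of 3, hence congruent to 3 mod 4, so Python's built-in modular exponentiation confirms the last digit for the first few towers:

```python
# Last digits of 3^N cycle with period 4: 3, 9, 7, 1.
# Every tower 3^3^...^3 has an exponent that is an odd power of 3,
# hence congruent to 3 (mod 4), so the last digit is always 7.
for exponent in (3, 3**3, 3**27):   # towers of height 2, 3, and 4
    print(pow(3, exponent, 10))     # three-argument pow is fast modular exponentiation
# prints 7 three times
```

The same reasoning extends to 3^^^3 itself, whose exponent is far too large to write down but is still an odd power of 3.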
Replies from: Zack_M_Davis↑ comment by Zack_M_Davis · 2012-05-02T22:28:21.247Z · LW(p) · GW(p)
Yes, thanks; I apologize for having misunderstood you earlier.
Replies from: VKS↑ comment by Randaly · 2012-05-02T21:35:28.143Z · LW(p) · GW(p)
Nope
Replies from: VKS↑ comment by VKS · 2012-05-02T21:47:51.366Z · LW(p) · GW(p)
no, 7
(see other comment)
Replies from: dlthomas↑ comment by chaosmosis · 2012-05-03T01:41:46.339Z · LW(p) · GW(p)
That was my initial reflex desire, but then I thought about it and decided not to.
This happened before you made the parent of this comment.
Replies from: chaosmosis↑ comment by chaosmosis · 2012-05-03T13:33:51.366Z · LW(p) · GW(p)
People, it makes no sense to karma punish me for:
- Suppressing inappropriate preferences.
- Giving the above commenter the type of information that the commenter was wondering about.
- Giving people reasons that their karma punishments are unwarranted.
- Using the word dumb to describe irrational karma distributions.
- Modifying my feedback in response to further displays of irrational behavior.
- Not responding to karma incentives in the way you would like me to.
- Not taking any of this seriously at all.
Don't be dumb.
Replies from: WrongBot, ArisKatsaris, chaosmosis↑ comment by WrongBot · 2012-05-03T18:28:07.679Z · LW(p) · GW(p)
Karma isn't (necessarily) about punishment. Downvotes often just mean "I'd prefer to see fewer comments like this."
Replies from: chaosmosis↑ comment by chaosmosis · 2012-05-03T19:19:37.596Z · LW(p) · GW(p)
Either way, all of my objections apply; this isn't really relevant to what I was contending.
Also, at least one person gave my initial comment +karma while presumably downvoting the other one. I want to mention that I think that kinda sorta makes sense, and I appreciate that nuanced view more than the view of people who, for reasons unknown, dislike feedback about feedback.
If someone would give a reason they dislike feedback about feedback I would feel better. It feels vindictive.
↑ comment by ArisKatsaris · 2012-05-03T18:42:48.009Z · LW(p) · GW(p)
Posts that are solely about karma tend to get downvoted by me, because I want fewer posts that are solely about karma.
Replies from: chaosmosis↑ comment by chaosmosis · 2012-05-03T19:18:26.436Z · LW(p) · GW(p)
Lolz ironic downvoting on your comment.
I disagree with you, I think giving feedback about received feedback makes sense.
Edit: The -3 really just goes to prove my point people, don't you think? I was making a valid point here.
Replies from: Alicorn, dlthomas↑ comment by Alicorn · 2012-05-03T22:53:38.685Z · LW(p) · GW(p)
I was making a valid point here.
Saying this does not go a long way towards proving that it is true.
Replies from: chaosmosis↑ comment by chaosmosis · 2012-05-04T02:51:12.996Z · LW(p) · GW(p)
Doesn't it intuitively make sense that feedback about feedback is good for the same reasons that feedback is good? If my intuitions are bad, the least someone could do is offer an argument to prove the flaws of my intuition. I could have clarified this, I guess, but I felt no real reason to do so given the stunning absence of actual substantive criticisms of what I was doing.
People weren't responding rationally to my comments, so I pointed out that those people were being dumb. That seems like something that is okay, and like something that might improve feedback mechanisms and which should thus be praised rather than downvoted. ArisKatsaris' one-sentence statement about her karma habits didn't have any justifications behind it, so it didn't deserve a detailed and warranted response. I did describe in general terms the substance of my objection; that's enough in the absence of warranted counterarguments.
All of the above listed reasons seem like valid arguments to me, if they're flawed I would like to know. But I would like actual reasons, not just vague statements that appeal to unjustified personal preferences.
Replies from: Alicorn↑ comment by Alicorn · 2012-05-04T05:20:48.549Z · LW(p) · GW(p)
People weren't responding rationally to my comments, so I pointed out that those people were being dumb.
Listen to yourself.
I'm not interested in having arguments with you; you don't make that look like a remotely productive use of my time. I'm trying to point out the things you are saying that sound juvenile and cause people to downvote you; it seems to bother you, so maybe if you can figure out the pattern, you will stop saying those things.
Announcing that you are making a valid point does not add anything to a point, however valid it may or may not be.
Declaring unilaterally without support that people weren't responding "rationally", and then pointing out that this makes them "dumb", is not any kind of worthwhile behavior.
Replies from: chaosmosis↑ comment by chaosmosis · 2012-05-04T19:36:13.928Z · LW(p) · GW(p)
I am not upset by receiving negative reputation in itself. I am annoyed that people are not giving justifications for their negative reputations, and I was also trying to give reasons that their negative reputations were unjustified. I don't even know that I'm annoyed so much as I'm trying to point out the flawed behavior on this site so that third parties or intelligent but silent viewers within the community are aware of the danger.
Give a reason that my overall position or the above list is logically flawed, please. Or shut up.
Replies from: Alicorn, ArisKatsaris↑ comment by Alicorn · 2012-05-04T20:21:12.302Z · LW(p) · GW(p)
Please go away.
EDIT: Going to turn this into a poll. Permalink to karma sink if it drops below threshold.
Replies from: Alicorn, Alicorn, Alicorn, chaosmosis↑ comment by Alicorn · 2012-05-04T23:08:51.689Z · LW(p) · GW(p)
Vote this comment up if you do not think chaosmosis's future comments should be banned.
Replies from: Desrtopa↑ comment by Desrtopa · 2012-05-05T00:26:44.859Z · LW(p) · GW(p)
Chaosmosis has had a significant number of upvoted comments. Some of his conduct has been very obnoxious and counterproductive, but I don't think he's reached a point where it's reasonable to write him off as unable to learn from his mistakes. At the least, I think his continued presence is more likely to be fruitful than a couple of other recently active members whose contributions have been uniformly downvoted.
↑ comment by Alicorn · 2012-05-04T23:09:11.871Z · LW(p) · GW(p)
Karma sink. You're all irrational and dumb, shut up!
Replies from: TheOtherDave, Zaine↑ comment by TheOtherDave · 2012-05-04T23:37:35.280Z · LW(p) · GW(p)
Point of clarification: does banning a user on LW do anything but force them to create a new user account if they wish to keep contributing?
I have been using Wei_Dai's awesome greasemonkey script for a while now to filter out some of the users I find valueless, so having them create multiple usernames to dodge the banhammer would be a mild nuisance for me.
So if that's all it does, I'm somewhat opposed to it, but willing to remain neutral for the sake of not inconveniencing other people who don't use that script for whatever reason.
OTOH, the responses to those users are themselves mildly annoying, so if the banhammer does something more worthwhile than that, then I might be in favor of it.
Replies from: dlthomas, Alicorn↑ comment by dlthomas · 2012-05-05T00:03:33.726Z · LW(p) · GW(p)
What about giving users the ability to apply a penalty to the score of posts from people they find uninteresting or aggravating, for the purpose of determining whether comments are hidden for that user? It could be inherited for one comment, or over the entire subtree, or perhaps decay according to some function.
This would, in general, hide comments from those users you object to as well as responses to them. The primary advantage it would have over outright blocks is that it would allow more space for someone to redeem themselves, and would let you catch interesting things in the responses when they do arise. A comment at +22 is likely interesting regardless of who posted it, and if you've seen some interesting posts from someone you've previously downgraded, you'll probably think about relaxing that.
Edited to add: Note that if there seems to be a consensus on this, I'm willing to do the coding required.
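A minimal sketch of the mechanism proposed above, with all names hypothetical: each reader keeps a per-author penalty, which is subtracted from a comment's score only when deciding whether to hide it for that reader, and which decays with depth in the reply subtree so a highly upvoted reply (like a comment at +22) can still surface.

```python
# Hypothetical sketch of the per-reader penalty idea (not actual LW code).
# The penalty affects only this reader's hide threshold, not displayed karma,
# and decays by a factor per level of depth below the penalized author.
def effective_score(score, author, penalties, depth_below_author=0, decay=0.5):
    """Return the score used for this reader's hide-threshold check."""
    penalty = penalties.get(author, 0) * (decay ** depth_below_author)
    return score - penalty

penalties = {"uninteresting_user": 10}
print(effective_score(2, "uninteresting_user", penalties))       # -8.0: hidden
print(effective_score(22, "uninteresting_user", penalties, 1))   # 17.0: still shown
```

The decay factor is the knob that trades off "hide the whole subtree" (decay near 1) against "only hide the author's own comments" (decay near 0).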
↑ comment by Alicorn · 2012-05-05T00:00:55.594Z · LW(p) · GW(p)
I can't ban users; I can ban comments. This makes them inaccessible to nonmod people. Creating a new username would only work against this for as long as it took to identify the new one as the same person.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-05-05T03:02:59.446Z · LW(p) · GW(p)
Ah, gotcha.
↑ comment by Zaine · 2012-05-05T00:40:56.433Z · LW(p) · GW(p)
Is there a certain threshold that once passed, personal karma totals no longer matter?
Replies from: hairyfigment, Alicorn↑ comment by hairyfigment · 2012-05-05T01:09:54.851Z · LW(p) · GW(p)
Eh? You only need 20 to post in Main.
↑ comment by chaosmosis · 2012-05-04T21:21:58.879Z · LW(p) · GW(p)
I feel as though your comments are now solely directed towards the purpose of gaining reputation.
Replies from: DanArmak↑ comment by DanArmak · 2012-05-05T12:49:05.153Z · LW(p) · GW(p)
I am convinced that you are wrong in point of fact. I am telling you this so you can adjust your feelings to fit reality. As long as you feel Alicorn or others are out to get you, or profit by hurting you, you will probably not be able to make a useful contribution to discussions or enjoy them yourself.
From my experience as a longtime reader of the site I can tell you that reputation on LW is not normally gained by attacking anyone, even if everyone else agrees in disliking the target or their comments. We have community values of responding to the factual content of each comment, with clear literal meaning, and without covert signalling. Reputation is gained by contributing to the conversation.
We also require civility, and since people are often bad at predicting how others will react to borderline comments like "dumb", it's best to be what may seem to you to be extra-civil just to avoid conversational traps. You have edited the comment that seemed to start this whole subthread, and I haven't seen the original, so can't comment more specifically.
↑ comment by ArisKatsaris · 2012-05-05T03:18:21.424Z · LW(p) · GW(p)
I am not upset by receiving negative reputation in itself. I am annoyed that people are not giving justifications for their negative reputations
That would have been more believable if you were complaining about downvotes that other people received, instead of the downvotes that you receive.
It would also have been more believable if you had also complained about the upvotes you received without justification, instead of only the downvotes you received without justification.
Edited to add: Here's my impression of you: You are very strongly biased against all negative feedback you receive, whether silent downvotes or explicit criticism, and you're therefore not the best person to criticize it in turn. E.g. in the first thread I encountered you in, you repeatedly called me a liar when I criticized a post without downvoting it. You couldn't believe me when I told you I didn't downvote it.
You are BAD at this. You are BAD at receiving negative feedback. Therefore you are BAD at criticizing it in turn. If you want to give feedback on negative feedback, then make sure said "negative feedback" wasn't originally directed at you. Try to criticize the feedback given to other people instead -- you might be better suited to evaluate that.
↑ comment by dlthomas · 2012-05-03T23:00:03.831Z · LW(p) · GW(p)
People might be downvoting for any number of reasons.
I spot the following, as potential downvote-triggers for various demographics:
1) "Lolz" 2) "ironic" 3) downvoting 4) disagreement without explanation
Replies from: chaosmosis↑ comment by chaosmosis · 2012-05-04T02:54:23.595Z · LW(p) · GW(p)
4 doesn't match the data, because no one who has disagreed with the above listed points has given a reason that they disagree with them. There have been no arguments made against my posts, just (1, 2, and 3) statements about aesthetic preferences. IMHO all y'all have really bad aesthetic preferences.
Just because I don't speak in a pretentious tone doesn't mean that I can't make valid points. I get kind of sick of all of the LessWrong commenters sounding alike in tone so I intentionally try to diversify things. Diverse forms of discussion seem more likely to produce diverse forms of thought [insert generic Orwell reference here]. Informal tones are also more conducive to casual communication which takes less time to articulate. Formalism in everyday life is stupid.
Judge the accuracy of the information I provide, please, not the tone which I choose to provide it in. Arguing shouldn't have to be so formal and should never preclude major lulz whenever major lulz can be achieved. Anyone who acts as though something else is true should provide warranted reasons for doing so or else should be considered a major n00b.
↑ comment by chaosmosis · 2012-05-04T02:49:21.020Z · LW(p) · GW(p)
I'd appreciate an explanation of why criticism of unwarranted negative feedback justifies more negative feedback.
Anyone up for it?
I currently feel that people just irrationally lash out at criticisms or statements which come close to suggesting criticism of the people controlling the karma. I currently think the commenters identify with each other to the extent that criticizing one of their actions draws them all in to attack, and I also don't think that's a healthy thing for a website to do.
Replies from: ArisKatsaris, CuSithBell↑ comment by ArisKatsaris · 2012-05-04T10:31:29.483Z · LW(p) · GW(p)
I'd appreciate an explanation of why criticism of unwarranted negative feedback justifies more negative feedback.
There's probably some law of diminishing returns, where commenting on something = (A1) utility, and commenting on how people comment on something = (A2) utility, and commenting on how people vote = (A3) utility, and you commenting on how people vote on you commenting on how people vote = (Z) utility, where it tends to go A1>A2>A3>...>Z, and probably Z is deep in the negatives.
You should also distinguish between types of feedback: receiving downvotes for an unknown reason may be frustrating to you, but it doesn't clutter up the threads. You complaining about every time you get downvoted does clutter up the threads. It's not the same type of "negative feedback".
In short: obsess less about karma.
Replies from: chaosmosis↑ comment by chaosmosis · 2012-05-04T13:05:04.876Z · LW(p) · GW(p)
There are no "laws" which take over your behavior and force you to respond to my comments in a certain way. If people don't like to see useful comments, and consider those useful comments to be clutter, then their interpretation of what clutter is is wrong and should be corrected to maximize feedback efficiency. Giving feedback on feedback makes sense.
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2012-05-04T13:54:14.434Z · LW(p) · GW(p)
You aren't saying anything new: you think your words useful (of positive utility), but the people who downvote them obviously don't so think them -- they consider them of zero or negative utility; and they vote accordingly.
Replies from: chaosmosis↑ comment by chaosmosis · 2012-05-04T19:46:36.837Z · LW(p) · GW(p)
Yes, and I was wondering if someone would give an argument justifying that behavior or that utility assessment rather than simply taking that behavior as a given "law".
I would like warranted arguments as to what was wrong with my comments. I am not asking for arguments that explain the response of the commenters or even one that mentions the reasons they have for downvoting, I'm asking for arguments that justify that behavior and that warrant those reasons. Without those arguments the community appears to be acting very irrationally on a fairly wide scale, which is concerning. No one has provided those arguments as of yet, so I am concerned.
Some people seem to be confusing my complaints with "I don't want to receive bad karma" and then they advise me on ways to make other people like my comments better. But that is not my complaint or my goal, my complaint is that I am receiving bad karma for no good reason and my goal is to get people to recognize this. I'm not really interested in being popular on this site, I'm interested in pointing out the lack of justification for my unpopularity, thus drawing attention to the possibly dangerous implications that this has for the community. The confusion between the two goals is natural but it is entirely mistaken.
Replies from: Strange7↑ comment by Strange7 · 2012-05-04T22:05:57.944Z · LW(p) · GW(p)
But that is not my complaint or my goal, my complaint is that I am receiving bad karma for no good reason and my goal is to get people to recognize this.
Proving that someone didn't have a good reason for doing something is a lot harder than saying that it's so. If you want to get fewer downvotes, there are ways you can do that; if you want to avoid changing your posting style, you could also do that. You cannot do both. Life presents choices.
Replies from: chaosmosis↑ comment by chaosmosis · 2012-05-04T22:31:25.132Z · LW(p) · GW(p)
I'm not trying to prove that, I'm trying to get a few people to think that it is probably the case, there's a slight difference. I think that the irrationality of the commenters is the most logical and the simplest way for anyone to make sense of the reputation patterns I've seen so far despite the lack of good warranted criticism.
It's not as though irrationality is incredibly rare or that I should have low priors on the probability that humans use social signalling mechanisms in an irrational manner, after all. The fact that out of all the people here none of them have conceded that they are irrational would actually seem to lend a bit more credence to my belief than it already has.
I agree that I cannot do both. However, I anticipate downvotes even if I were to conclusively prove my argument with all the might and power that Science and Bayes have to offer the universe. If my argument is correct and the commenters respond irrationally to criticism, then of course I should anticipate downvotes despite the accuracy of my criticism. That's kind of the entire point.
Replies from: Strange7, duckduckMOO↑ comment by Strange7 · 2012-05-05T04:33:36.167Z · LW(p) · GW(p)
'round these parts, the way to persuade people of controversial points is proof, not just assertion. So far you haven't offered anything but big talk.
You've got this theory that makes a certain prediction, namely that your posts will be downvoted because everyone but you is an idiot. There is a competing theory which makes the same prediction, namely that your posts will be downvoted because they lack productive content. In order for your theory to beat out the competition, you'll need to find some point where the predictions differ, and then demonstrate that yours is more accurate.
Surely, if we are such fools, and you understand the irrationalities involved so well, you could compose a post which manipulates those corrupt thought-structures into providing you with upvotes?
↑ comment by duckduckMOO · 2012-05-06T16:47:16.731Z · LW(p) · GW(p)
This is a little ridiculous. The reason you were downvoted is that someone didn't like your post. The reason all of the rest of your comments are being downvoted is that people don't like to be questioned, and there's some bandwagon effect in there somewhere. I've never gotten people to explain anything like this (edit: with this method of trying to get an explanation). Maybe you are particularly good at it in real life thanks to body language or something, but just in text there's no way you're going to get people to explain themselves this way.
also this sort of thing:
People, it makes no sense to karma punish me for:
- Giving people reasons that their karma punishments are unwarranted.
- Using the word dumb to describe irrational karma distributions.
- Modifying my feedback in response to further displays of irrational behavior.
- Not responding to karma incentives in the way you would like me to.
- Not taking any of this seriously at all.
tends to elicit an "I'LL SHOW YOU, FUCKER", response in people or something, effectively identical, from what I have observed of people.
also, people like their requests for feedback humble and/or "positive."
As for what's wrong with your first comment: suppressing "inappropriate" preferences isn't something I like. I didn't downvote you, but it's not like you can't just not read comments. If I'd understood that was what you were doing when I read your comment (as I skipped down the page to the comments I was interested in) I would have downvoted it. I won't now, as most of the rest of your downvotes are clearly punishing your demanding an explanation (in an "inappropriate" tone) which no one has bothered giving. (Why the fuck is the comment pointing out the non-existence of laws which take over behaviour downvoted? And the one it's responding to upvoted?) But I really don't like the idea of trying to suppress comments that have no obvious negative impact. It looks kind of the same to me as the way no one bothered to give you an explanation and just decided to downvote instead. Your post is just saying "I decided not to do that," which is simply an expression of your dislike, with no reasoning given, much as your being downvoted rather than responded to is. Also, it's social policing and signalling taking priority over explaining, to the point where the actual "here is what I don't like" bit that could allow someone to learn something is entirely left out. It wasn't as bad as the response you're receiving, though.
edit: I must say, though, the demands of "proof" are ridiculous.
↑ comment by CuSithBell · 2012-05-04T15:20:33.744Z · LW(p) · GW(p)
criticism of unwarranted negative feedback
Some may consider this an inaccurate or incomplete description of your downvoted posts.
You may want to give a good-faith consideration as to why that is if you want to keep pursuing this.
Additionally, complaints about downvotes are usually not well received - not least because if someone downvoted you, they will probably disagree with a post claiming that they were incorrect to do so.
I would like to see less of a focus on karma - and, for that matter, "status" - on this website.
For what it's worth, I downvoted the grandparent, and upvoted the great-grandparent.
Replies from: chaosmosis↑ comment by chaosmosis · 2012-05-04T19:56:39.201Z · LW(p) · GW(p)
Um, just see this: http://lesswrong.com/lw/c4h/rationality_quotes_may_2012/6ik8 it applies roughly equally as well to your comment as it does to ArisKatsaris' comment.
I'm not interested in status, sorry for the confusion. I actually plan on leaving eventually because I'm concerned about getting drawn into a community which shares so many different memes and concepts. I want to internalize most of those concepts because they do seem objectively useful, but then I want to move on before those concepts become ingrained and I become a drone trapped in the hive. Remaining static is dangerous to free thought, that's something I learned from the Deleuzians, they're really cool.
The Internet is meant for nomadism, not anything else.
Replies from: CuSithBell↑ comment by CuSithBell · 2012-05-04T21:41:34.419Z · LW(p) · GW(p)
In that case - I upvoted your initial comment. I did not downvote the "complaint list" comment until it had grown quite large.
I think your characterization of your "complaint" comment is inaccurate, and I was trying to induce you to revisit it, because otherwise you're arguing from false premises. I don't think the initial comment should have been downvoted! However, your response was not useful to you for your expressed purpose of eliciting clarification on feedback, or to me or (apparently) the LW-emergent-consciousness for contributing to valuable discussion. It was downvoted for these reasons - and they're good ones!
Of particular note is your unwarranted confidence that people who disagree with you are "irrational" and "dumb". You did not have access to sufficient information to conclude this! In fact, if they were downvoting you because they expected your comment to lead to unproductive discussion, they were right.
More to the point, have you considered that you may have erred in this thread?
Replies from: chaosmosis↑ comment by chaosmosis · 2012-05-04T22:19:55.543Z · LW(p) · GW(p)
I did not downvote the "complaint list" comment until it had grown quite large.
I'm trying to understand your reasoning here, and failing. Do you downvote controversial things often? Are you upset with the quality of my comments, or with the quality of all of the comments that followed? Why does it make sense to downvote simply because one of my comments drew lots of attention and anger?
To me, this does not make sense. Please explicitly state your rationale.
I think your characterization of your "complaint" comment is inaccurate
In what possible sense was the characterization inaccurate? I characterized it as a criticism of the negative karma I received. To me, it seems to quite clearly be exactly that. Other commenters also have been responding to it as though it was a criticism of the negative karma I received, that's why some of them mentioned that they downvote comments about karma and why I tried to engage them in a discussion on the merits of criticizing flawed feedback.
However, your response was not useful to you for your expressed purpose of eliciting clarification on feedback.
If I ask for clarification repeatedly and do not receive clarification, that is not my fault. Additionally, I may yet end up receiving actual clarification. Moreover, the lack of clarification also suits my purposes, because it goes a long way towards supporting the possibility that of the dozen or so commenters voting in this thread, none of them have any real justification for their votes.
But, do you have a suggestion as to what might be better at eliciting clarification?
Or are you just trying to seem Reasonable?
or to me
This is nonfalsifiable, and you're getting something out of this conversation or else you'd leave.
or (apparently) the LW-emergent-consciousness for contributing to valuable discussion.
I don't believe this is a relevant metric, my entire point is that their evaluation process is flawed.
EDIT: Moreover, new proof: I've already clearly demonstrated that I don't respond to karma incentives by shutting up. The fact that people keep giving me bad karma despite my obvious immunity to its effects clearly demonstrates that the commenters are primarily concerned not with stopping me from making new comments because of the supposed logical invalidity of those comments, but with the social cohesion they feel when they use karma to reject my comments as wrong, regardless of their truth value.
Of particular note is your unwarranted confidence that people who disagree with you are "irrational" and "dumb". You did not have access to sufficient information to conclude this! In fact, if they were downvoting you because they expected your comment to lead to unproductive discussion, they were right.
I will change this belief if I see people change their behavior, give a good justification for giving my list comment -10 votes, or answer the arguments which I have made in other places about the value of feedback about feedback. I have good reason to believe that LessWrong commenters are irrational, because they are a subset of humans and knowing about biases does not make them go away. The fact that no justifications have been produced is also hugely relevant.
The fact that individuals can choose to sabotage the usefulness of a question does not make that question invalid.
More to the point, have you considered that you may have erred in this thread?
Have you considered that no matter what answer I give to this question people will perceive the answer as though it is a "no"? Does this question have any purpose other than making the end of your post sound better? Are you actually thinking that my answer to this question will matter in some way?
The answer is yes.
And, I've concluded that I was overconfident in my expectations that irrational individuals would concede their own irrationality within a community that values rationality. I should not have expected otherwise; there are stronger incentives within this community to avoid admitting defeat than there are in other communities, because this community treats accuracy and objectivity as a sacred value.
However, I've also concluded that posting that question was still a good decision on an overall level because I still believe that individuals are perceiving the power of my arguments. Part of the reason that I perceive this is because of the scarcity of downvotes on the comments where I challenge commenters to provide me with evidence and those commenters fail. Another reason that I perceive this is because I have yet to see any objection to my list of comments which attacks it on its merits. A third reason is that I believe all of those arguments are objectively good. The final reason is that I have not seen any objections to my comments from any of the main posters on this site who strike me as extremely intelligent.
EDIT: I also just realized that I need to identify a new threshold at which I'm satisfied with stopping. My previous threshold was going to be the moment where someone stated that they believed my critique to be largely accurate, but given my above realization about how disincentives against conceding irrationality within a rationalist community are actually stronger, I no longer think that threshold will suit my purposes.
Replies from: CuSithBell
↑ comment by CuSithBell · 2012-05-04T22:56:30.637Z · LW(p) · GW(p)
I did not downvote the "complaint list" comment until it had grown quite large.
I'm trying to understand your reasoning here, and failing. Do you downvote controversial things often? Are you upset with the quality of my comments, or with the quality of all of the comments that followed? Why does it make sense to downvote simply because one of my comments drew lots of attention and anger?
To me, this does not make sense. Please explicitly state your rationale.
What I meant was - I did not downvote the comment until it itself had grown quite large. To be blunt, my rationale was that at some point it crossed the line from "poorly-worded request for clarification" to "nutty rant".
I think your characterization of your "complaint" comment is inaccurate
In what possible sense was the characterization inaccurate?
In what possible sense?! You called it "criticism of unwarranted negative feedback". It could easily be argued that it didn't read as "criticism" so much as "complaint", it certainly wasn't just "criticism", and the term "unwarranted" basically assumes the conclusion, making yours a loaded question ("why did you give me an undeserved downvote?").
However, your response was not useful to you for your expressed purpose of eliciting clarification on feedback.
If I ask for clarification repeatedly and do not receive clarification, that is not my fault.
If you have a goal, and your actions do not accomplish that goal, then saying that this is not your fault will also not accomplish that goal.
But, do you have a suggestion as to what might be better at eliciting clarification?
"Could someone who downvoted clarify why they thought my comment was not valuable?"
Or are you just trying to seem Reasonable?
Quit it! Even "rationalists" will be better disposed towards you if you make a basic attempt to interpret them charitably.
or to me
This is nonfalsifiable, and you're getting something out of this conversation or else you'd leave.
I suspect that maybe you could be an interesting contributor here once this thread concludes. You haven't claimed to have discovered the secret mathless Grand Unified Theory, for one thing.
or (apparently) the LW-emergent-consciousness for contributing to valuable discussion.
I don't believe this is a relevant metric, my entire point is that their evaluation process is flawed.
Distinguish the former and the latter complaint! Are you saying that "contributes to valuable discussion" is a bad metric for LWers to use, or that LW is bad at judging what accomplishes that?
Of particular note is your unwarranted confidence that people who disagree with you are "irrational" and "dumb". You did not have access to sufficient information to conclude this! In fact, if they were downvoting you because they expected your comment to lead to unproductive discussion, they were right.
I will change this belief if I see people change their behavior or give a good justification for giving my list comment -10 votes, or if people answer the arguments which I have made in other places about the value of feedback about feedback. I have good reason to believe that LessWrong commenters are irrational because they are a subset of humans, and knowing about biases does not make them go away. The fact that no justifications have been produced is also hugely relevant.
As to why your list comment is at -10, you've received a lot of justifications. Some in this very post. If you want justifications for the other comment's downvotes, you may have to choose a different tack.
More to the point, have you considered that you may have erred in this thread?
Have you considered that no matter what answer I give to this question people will perceive the answer as though it is a "no"? Does this question have any purpose other than making the end of your post sound better? Are you actually thinking that my answer to this question will matter in some way?
My primary purpose was not rhetorical grandstanding or anything to do with your expected answer in this thread. I was hoping you would think hard about the decisions you've made in this thread and realize that some were in error, then decide to change them.
The answer is yes.
And, I've concluded that I was overconfident in my expectations that irrational individuals would concede their own irrationality within a community that values rationality. I should not have expected otherwise; there are stronger incentives within this community to avoid admitting defeat than there are in other communities, because this community treats accuracy and objectivity as a sacred value.
However, I've also concluded that posting that question was still a good decision on an overall level because I still believe that individuals are perceiving the power of my arguments. Part of the reason that I perceive this is because of the scarcity of downvotes on the comments where I challenge commenters to provide me with evidence and those commenters fail. Another reason that I perceive this is because I have yet to see any objection to my list of comments which attacks it on its merits. A third reason is that I believe all of those arguments are objectively good. The final reason is that I have not seen any objections to my comments from any of the main posters on this site who strike me as extremely intelligent.
No! That's not the kind of error I'm talking about. "I overestimated your intelligence" does not count. Do you really think that every single downvote and every single comment explaining your missteps was undeserved? Because if so, you should realize how unlikely that is, and reexamine the thread with that fact in mind.
comment by cousin_it · 2012-05-16T23:51:40.806Z · LW(p) · GW(p)
The Patrician steepled his hands and looked at Vimes over the top of them.
"Let me give you some advice, Captain," he said.
"Yes, sir?"
"It may help you make some sense of the world."
"Sir."
"I believe you find life such a problem because you think there are the good people and the bad people," said the man. "You're wrong, of course. There are, always and only, the bad people, but some of them are on opposite sides."
He waved his thin hand towards the city and walked over to the window.
"A great rolling sea of evil," he said, almost proprietorially. "Shallower in some places, of course, but deeper, oh, so much deeper in others. But people like you put together little rafts of rules and vaguely good intentions and say, this is the opposite, this will triumph in the end. Amazing!" He slapped Vimes good-naturedly on the back.
"Down there," he said, "are people who will follow any dragon, worship any god, ignore any iniquity. All out of a kind of humdrum, everyday badness. Not the really high, creative loathsomeness of the great sinners, but a sort of mass-produced darkness of the soul. Sin, you might say, without a trace of originality. They accept evil not because they say yes, but because they don't say no. I'm sorry if this offends you," he added, patting the captain's shoulder, "but you fellows really need us."
"Yes, sir?" said Vimes quietly.
"Oh, yes. We're the only ones who know how to make things work. You see, the only thing the good people are good at is overthrowing the bad people. And you're good at that, I'll grant you. But the trouble is that it's the only thing you're good at. One day it's the ringing of the bells and the casting down of the evil tyrant, and the next it's everyone sitting around complaining that ever since the tyrant was overthrown no-one's been taking out the trash. Because the bad people know how to plan. It's part of the specification, you might say. Every evil tyrant has a plan to rule the world. The good people don't seem to have the knack."
"Maybe. But you're wrong about the rest!" said Vimes. "It's just because people are afraid, and alone-" He paused. It sounded pretty hollow, even to him.
He shrugged. "They're just people," he said. "They're just doing what people do. Sir."
Lord Vetinari gave him a friendly smile. "Of course, of course," he said. "You have to believe that, I appreciate. Otherwise you'd go quite mad. Otherwise you'd think you're standing on a feather-thin bridge over the vaults of Hell. Otherwise existence would be a dark agony and the only hope would be that there is no life after death. I quite understand."
(...)
After a while he made a few pencil annotations to the paper in front of him and looked up.
"I said," he said, "that you may go."
Vimes paused at the door.
"Do you believe all that, sir?" he said. "About the endless evil and the sheer blackness?"
"Indeed, indeed," said the Patrician, turning over the page. "It is the only logical conclusion."
"But you get out of bed every morning, sir?"
"Hmm? Yes? What is your point?"
"I'd just like to know why, sir."
"Oh, do go away, Vimes. There's a good fellow."
-- Terry Pratchett, "Guards! Guards!"
I really like the character of Lord Vetinari. He's like a more successful version of Quirrell from HPMOR who decided that it's okay to have cynical beliefs but idealistic aims.
Replies from: Bugmaster, Nominull
↑ comment by Nominull · 2012-05-17T01:22:01.369Z · LW(p) · GW(p)
Vimes has the right of it here, I think. They are just people, they are just doing what people do. And even if what people do isn't always as good as it could be, it is far from being as bad as it could be. Mankind is inherently good at a level greater than can be explained by chance alone, p<.05.
Replies from: CasioTheSane, CuSithBell
↑ comment by CasioTheSane · 2012-05-17T05:42:28.736Z · LW(p) · GW(p)
Simply writing "p<.05" after a statement doesn't count as evidence for it.
Edit: "Goodness" can be explained from evolutionary game theory: Generous Tit-for-Tat behavior is an excellent survival strategy and often leads to productive (or at least not mutually destructive) cooperation with other individuals practicing Generous Tit-for-Tat. Calling this "goodness" or "evilness" (altruism vs selfishness) is a meaningless value judgment when both describe the same behavior. Really it's neither- people aren't good for the sake of being good, or bad for the sake of being bad but behaving a certain way because it's a good strategy for survival.
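The Generous Tit-for-Tat dynamic described above is easy to make concrete. Below is a minimal sketch of an iterated prisoner's dilemma; the payoff matrix, the 10% forgiveness rate, and all function names are illustrative choices, not taken from any particular paper:

```python
import random

# Iterated prisoner's dilemma payoffs for (my_move, their_move).
# C = cooperate, D = defect. Standard ordering: T > R > P > S.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def generous_tft(opp_history, rng):
    """Cooperate first; then copy the opponent's last move,
    but forgive a defection 10% of the time."""
    if not opp_history:
        return "C"
    if opp_history[-1] == "D" and rng.random() < 0.1:
        return "C"
    return opp_history[-1]

def always_defect(opp_history, rng):
    return "D"

def play(strat_a, strat_b, rounds=1000, seed=42):
    rng = random.Random(seed)
    hist_a, hist_b = [], []  # each player's record of the *opponent's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_a, rng)
        b = strat_b(hist_b, rng)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b

# Two GTFT players settle into permanent mutual cooperation (3 points each
# per round), while a pure defector extracts far less from a GTFT partner
# than two cooperators earn from each other.
print(play(generous_tft, generous_tft))
print(play(generous_tft, always_defect))
```

This is roughly why "niceness" can be a winning survival strategy: in a population of mostly generous reciprocators, unconditional defectors score worse than the cooperating pairs around them.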
Replies from: Nominull
↑ comment by Nominull · 2012-05-17T07:45:26.939Z · LW(p) · GW(p)
"p<.05" is a shorthand way of saying "the evidence we have is substantially unlikely to be the random result of unbiased processes". It wasn't intended to be taken literally, unless you think I've done randomized controlled trials on the goodness of mankind.
Yes, surely the inherent goodness comes from evolutionary game theory, it's hard to see where else it would have come from. But the fact that evolutionary game theory suggests that people should have evolved to be good should be a point in favor of the proposition that mankind is inherently good, not a point against it.
EDIT: Now that I think about it, doing an RCT on the goodness of mankind might help illuminate some points. You could put a researcher in a room and have him "accidentally" drop some papers, and see if it's people or placebo mannequins who are more likely to help him pick them up.
↑ comment by CuSithBell · 2012-05-17T01:35:47.488Z · LW(p) · GW(p)
Mankind is inherently good at a level greater than can be explained by chance alone, p<.05.
Chance as opposed to...?
comment by NancyLebovitz · 2012-05-02T15:49:35.109Z · LW(p) · GW(p)
The larger the island of knowledge, the longer the shoreline of wonder.
Wikiquote: Huston Smith. Wikipedia: Ralph Washington Sockman
Replies from: DanArmak, Thomas
↑ comment by DanArmak · 2012-05-02T16:48:51.996Z · LW(p) · GW(p)
Only while the island is smaller than half the world :-)
Anyway, I can always measure your shore and get any result I want.
Replies from: jeremysalwen, CuSithBell
↑ comment by jeremysalwen · 2012-05-02T18:35:02.374Z · LW(p) · GW(p)
No, you can only get an answer up to the limit imposed by the fact that the coastline is actually composed of atoms. The fact that a coastline looks like a fractal is misleading. It makes us forget that just like everything else it's fundamentally discrete.
This has always bugged me as a case of especially sloppy extrapolation.
Replies from: VKS, DanArmak
↑ comment by VKS · 2012-05-02T22:31:09.108Z · LW(p) · GW(p)
The island of knowledge is composed of atoms? The shoreline of wonder is not a fractal?
Replies from: Bugmaster
↑ comment by DanArmak · 2012-05-02T18:42:19.128Z · LW(p) · GW(p)
Of course you can't really measure on an atomic scale anyway because you can't decide which atoms are part of the coast and which are floating in the sea. The fuzziness of the "coastline" definition makes measurement meaningless on scales even larger than single atoms and molecules, probably. So you're right, and we can't measure it arbitrarily large. It's just wordplay at that point.
↑ comment by CuSithBell · 2012-05-04T15:35:30.962Z · LW(p) · GW(p)
And assuming an arbitrarily large world, as the area of the island increases, the ratio of shoreline to area decreases, no? Not sure what that means in terms of the metaphor, though...
Replies from: DanArmak
↑ comment by DanArmak · 2012-05-04T17:57:54.458Z · LW(p) · GW(p)
Eventually the island's population can't fit all at once on the shore, and so not everyone can gather new wonder.
Replies from: Document, CuSithBell
↑ comment by Document · 2012-05-09T21:49:34.514Z · LW(p) · GW(p)
Replies from: DanArmak
↑ comment by DanArmak · 2012-05-09T22:00:59.319Z · LW(p) · GW(p)
Then you realize that in almost all universes there is no life, and consequently, no land...
Replies from: Document
↑ comment by Document · 2012-05-09T22:05:45.199Z · LW(p) · GW(p)
Now I'm confused, so I guess I'm out.
Replies from: DanArmak
↑ comment by DanArmak · 2012-05-10T00:00:53.473Z · LW(p) · GW(p)
Modal realism says "all possible worlds are as real as the actual world" (Wikipedia). In different possible worlds there are different laws of physics, almost all of which don't allow for life. In some proportion of those where they do allow for life, there's no life anyway (it seems to be rare in our universe). In some proportion of universes with life, there is no sentient life...
Without sentient life, there's no knowledge, so no shore. No shore means no land.
↑ comment by CuSithBell · 2012-05-04T18:21:02.098Z · LW(p) · GW(p)
Well, shoot.
Replies from: DanArmak
↑ comment by Thomas · 2012-05-03T19:18:03.803Z · LW(p) · GW(p)
A short shoreline of wonder is a good sign that the island of knowledge is small.
Replies from: chaosmosis
↑ comment by chaosmosis · 2012-05-03T19:30:11.105Z · LW(p) · GW(p)
UNLESS IT'S A CONTINENT!!!!!! BOOM.
Replies from: dbaupp
↑ comment by dbaupp · 2012-05-04T06:44:39.832Z · LW(p) · GW(p)
I don't understand. Continents are just big islands, they have shorelines too.
Replies from: DanArmak, chaosmosis
↑ comment by DanArmak · 2012-05-04T15:17:50.700Z · LW(p) · GW(p)
If a continent takes up more than half the world, then the shorter the shoreline, the bigger the continent.
Replies from: dlthomas
↑ comment by dlthomas · 2012-05-04T18:56:09.415Z · LW(p) · GW(p)
But the cutoff is obviously not "continent"/"not continent", but rather "takes up more than half the world" versus "doesn't take up more than half the world" - possibly with an additional constraint of a sufficiently simple shoreline...
Replies from: DanArmak
↑ comment by chaosmosis · 2012-05-04T18:49:22.838Z · LW(p) · GW(p)
Geometry. Big areas with less big corresponding perimeters.
Replies from: dbaupp, arundelo
↑ comment by dbaupp · 2012-05-05T01:08:52.085Z · LW(p) · GW(p)
This answer is about as informative as answering "Why do aeroplanes fly?" with "Calculus. Differential equations with forces.".
If you are talking about continents larger than half the world, then DanArmak has already pointed it out and much more politely. However, as dlthomas points out the distinction is not based on it being a continent or not, but on it covering more than half the word.
Also, everything we call a continent on Earth takes up less than half of it, and for such things there is a minimum perimeter that increases as the area increases. (The minimum perimeter is something a little bit like 2*sqrt(pi*Area), except different because the Earth is a sphere rather than a plane.)
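The bound dbaupp gives is the planar isoperimetric inequality: a region of area A has perimeter at least 2·sqrt(π·A), with equality only for a circle. A quick numerical sketch (the area value is just an illustration):

```python
import math

def min_perimeter(area):
    """Isoperimetric lower bound in the plane: a circle of area A has
    radius r = sqrt(A/pi), hence perimeter 2*pi*r = 2*sqrt(pi*A)."""
    return 2 * math.sqrt(math.pi * area)

def square_perimeter(area):
    """Perimeter of a square of the same area, for comparison."""
    return 4 * math.sqrt(area)

area = 100.0
print(min_perimeter(area))     # ~35.45 -- the circle's perimeter
print(square_perimeter(area))  # 40.0  -- any non-circular shape exceeds the bound
```

On a sphere the exact formula differs (a spherical cap bounding a given area has a slightly shorter boundary than a planar circle would), but the qualitative point stands: for regions smaller than half the surface, minimum boundary length grows with area.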
↑ comment by arundelo · 2012-05-05T02:00:33.810Z · LW(p) · GW(p)
Were you trying to point out that the shoreline's length varies as the square root of the size of the island?
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2012-05-05T03:06:21.214Z · LW(p) · GW(p)
Doesn't that depend a lot on how convoluted the shoreline is?
Replies from: arundelo
↑ comment by arundelo · 2012-05-05T03:55:52.899Z · LW(p) · GW(p)
Yes, but only if the shape varies too.
Replies from: dlthomas
comment by MichaelGR · 2012-05-03T17:33:52.301Z · LW(p) · GW(p)
If you want to build a ship, don't drum up the men to gather wood, divide the work and give orders. Instead, teach them to yearn for the vast and endless sea...
- Antoine de Saint Exupery
↑ comment by albeola · 2012-05-06T23:11:45.158Z · LW(p) · GW(p)
Has this actually been working?
Replies from: None, Document
comment by maia · 2012-05-03T21:12:09.282Z · LW(p) · GW(p)
"If God gives you lemons, you find a new God."
-- Powerthirst 2: Re-Domination
Replies from: Bill_McGrath, chaosmosis
↑ comment by Bill_McGrath · 2012-05-05T01:03:06.736Z · LW(p) · GW(p)
I maintain you should use the lemons as an offering to appease your angry new god.
↑ comment by chaosmosis · 2012-05-03T21:30:42.764Z · LW(p) · GW(p)
If you liked Powerthirst, there's a similar thing called "SHOWER PRODUCTS FOR MEN" on youtube.
comment by Alejandro1 · 2012-05-02T19:33:52.383Z · LW(p) · GW(p)
The word problem may be an insidious form of question-begging. To speak of the Jewish problem is to postulate that the Jews are a problem; it is to predict (and recommend) persecution, plunder, shooting, beheading, rape, and the reading of Dr. Rosenberg's prose. Another disadvantage of fallacious problems is that they bring about solutions that are equally fallacious. Pliny (Book VIII of Natural History) is not satisfied with the observation that dragons attack elephants in the summer; he ventures the hypothesis that they do it in order to drink the elephants' blood, which, as everyone knows, is very cold.
-- Jorge Luis Borges, "Dr. Américo Castro is Alarmed"
Replies from: fubarobfusco
↑ comment by fubarobfusco · 2012-05-02T23:06:11.962Z · LW(p) · GW(p)
(Pliny, not Plinty.)
The article is not about antisemitism, by the way. It's about one Dr. Castro's alarm over a "linguistic disorder in Buenos Aires" — i.e. a putative decline in the quality of Argentinian Spanish usage.
Replies from: Alejandro1
↑ comment by Alejandro1 · 2012-05-02T23:34:28.874Z · LW(p) · GW(p)
Thank you, corrected! Yes, it is a wonderful demolition of Castro's pretentious pronouncements on the Argentine dialect, which contains some of the finest examples of Borges' erudite snark. ("...the doctor appeals to a method that we must either label sophistical, to avoid doubting his intelligence, or naive, to avoid doubting his integrity...")
comment by john_ku · 2012-05-05T12:48:46.885Z · LW(p) · GW(p)
If the difficulty of a physiological problem is mathematical in essence, ten physiologists ignorant of mathematics will get precisely as far as one physiologist ignorant of mathematics and no further.
Norbert Wiener
Replies from: soreff
↑ comment by soreff · 2012-05-05T22:58:52.318Z · LW(p) · GW(p)
I'm going to be unfair here - there is a limit to how much specificity one can expect in a brief quote but: In what sense is the difficulty "mathematical in essence", and just how ignorant of how much mathematics are the physiologists in question? Consider a problem where the exact solution of the model equations turns out to be an elliptic integral - but where the practically relevant range is adequately represented by a piecewise linear approximation, or by a handful of terms in a power series. Would ignorance of the elliptic integral be a fatal flaw here?
Replies from: othercriteria
↑ comment by othercriteria · 2012-05-06T22:12:44.049Z · LW(p) · GW(p)
Speaking as someone who is neither the OP nor Norbert Wiener, I think even the task of posing an adequate mathematical model should not be taken for granted. Thousands of physiologists looked at Drosophila segments and tiger stripes before Turing, thousands of ecologists looked at niche differentiation before Tilman, thousands of geneticists looked at the geographical spread of genes before Fisher and Kolmogorov, etc. In all these cases, the solution doesn't require math beyond an undergraduate level.
Also, concern over an exact solution is somewhat misplaced given that the greater parts of the error are going to come from the mismatch between model and reality and from imperfect parameter estimates.
comment by [deleted] · 2012-05-08T14:24:04.122Z · LW(p) · GW(p)
.
Replies from: PhilosophyTutor
↑ comment by PhilosophyTutor · 2012-05-10T03:37:32.565Z · LW(p) · GW(p)
If you have a result with a p value of p<0.05, the universe could be kidding you up to 5% of the time. You can reduce the probability that the universe is kidding you with bigger samples, but you never get it to 0%.
Replies from: RobinZ
↑ comment by RobinZ · 2012-05-11T04:10:49.501Z · LW(p) · GW(p)
How would you rephrase that using Bayesian language, I wonder?
Replies from: PhilosophyTutor
↑ comment by PhilosophyTutor · 2012-05-11T06:08:30.505Z · LW(p) · GW(p)
It already is in Bayesian language, really, but to make it more explicit you could rephrase it as "Unless P(B|A) is 1, there's always some possibility that hypothesis A is true but you don't get to see observation B."
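Both halves of this point — the universe "kidding you" roughly 5% of the time when there is no effect, and P(B|A) < 1 letting real effects slip past undetected — show up in a toy simulation. The coin-flip test, the rejection threshold, and the bias value below are arbitrary illustrative choices:

```python
import random

def experiment(p_heads, flips=100, rng=random):
    """Flip a coin `flips` times; report 'significant' if the head count
    strays far enough from 50/50 that a fair coin would rarely do so
    (roughly a two-sided test near the p < .05 level)."""
    heads = sum(rng.random() < p_heads for _ in range(flips))
    return abs(heads - flips / 2) > 10

rng = random.Random(0)
trials = 20_000

# Null is true (fair coin), yet the test still fires a few percent of the time.
false_alarms = sum(experiment(0.5, rng=rng) for _ in range(trials)) / trials

# A real effect exists (65% heads), yet the test sometimes misses it: P(B|A) < 1.
misses = sum(not experiment(0.65, rng=rng) for _ in range(trials)) / trials

print(f"fair coin flagged as biased: {false_alarms:.3f}")
print(f"biased coin missed:          {misses:.3f}")
```

Bigger samples shrink both rates, but neither ever reaches exactly zero — which is the point of the quote.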
comment by Shmi (shminux) · 2012-05-24T18:14:17.925Z · LW(p) · GW(p)
if you can’t explain how to simulate your theory on a computer, chances are excellent that the reason is that your theory makes no sense!
Replies from: BillyOblivion, None
↑ comment by BillyOblivion · 2012-05-28T07:31:51.453Z · LW(p) · GW(p)
OTOH it could be that the "you" in the above knows little to nothing about computer simulation.
For example a moderately competent evolutionary virologist might have theory about how viruses spread genes across species, but have only a passing knowledge of LaTeX and absolutely no idea how to use bio-sim software.
Or worse, CAN explain, but their explanation demonstrates that lack of knowledge.
↑ comment by [deleted] · 2012-05-29T08:32:39.994Z · LW(p) · GW(p)
Such as set theory?
Replies from: Eugine_Nier
↑ comment by Eugine_Nier · 2012-05-30T05:29:39.388Z · LW(p) · GW(p)
Well, every heuristic has exceptions.
Replies from: Oscar_Cunningham
↑ comment by Oscar_Cunningham · 2012-06-05T18:16:56.829Z · LW(p) · GW(p)
By definition?
comment by [deleted] · 2012-05-23T17:45:15.050Z · LW(p) · GW(p)
Is it fair to say you're enjoying the controversy you've started?
Thiel: I don't enjoy being contrarian.
Yes you do. *laughs*
Thiel: No, I think it is much more important to be right than to be contrarian.
--Peter Thiel, on 60 Minutes
Replies from: Multiheaded
↑ comment by Multiheaded · 2012-05-25T16:12:34.996Z · LW(p) · GW(p)
Heh, I bet he was being coy. In my experience, people who can't get any enjoyment out of wearing a clown suit (or, indeed, a scary uniform) - and aren't somewhat, well, autistic - actively avoid saying contrarian things; it's more or less social autopilot.
Replies from: CuSithBell, faul_sname
↑ comment by CuSithBell · 2012-06-01T00:11:25.704Z · LW(p) · GW(p)
I read this as "people who aren't ( (clownsuit enjoyers) and (autistic) ) ...", but it looks like others have read it as "people who aren't (clownsuit enjoyers) and aren't (autistic)" = "people who aren't ( (clownsuit enjoyers) or (autistic) )", which might be the stricter literal reading. Would you care to clarify which you meant?
↑ comment by faul_sname · 2012-05-31T22:54:36.410Z · LW(p) · GW(p)
Peter Thiel is, well, autistic. Or at least has some tendencies in that direction (he's unlikely to be clinically autistic, with the possible exception of mild aspergers).
comment by MichaelGR · 2012-05-03T17:32:32.109Z · LW(p) · GW(p)
“Smart people learn from their mistakes. But the real sharp ones learn from the mistakes of others.”
― Brandon Mull, Fablehaven
Replies from: Ezekiel, wadavis, Document↑ comment by Ezekiel · 2012-05-04T22:35:28.475Z · LW(p) · GW(p)
The real sharp ones also learn from the mistakes of others.
Replies from: fortyeridania
↑ comment by fortyeridania · 2012-05-05T12:07:24.674Z · LW(p) · GW(p)
Are you correcting the accuracy of the quotation, or commenting?
Replies from: Ezekiel
comment by Nominull · 2012-05-02T01:39:17.125Z · LW(p) · GW(p)
Plato says that the unexamined life is not worth living. But what if the examined life turns out to be a clunker as well?
-Kurt Vonnegut
Replies from: gwern, Jayson_Virissimo
↑ comment by gwern · 2012-05-02T02:27:29.901Z · LW(p) · GW(p)
Then you can commit suicide without worries.
Replies from: JoachimSchipper, MixedNuts, None
↑ comment by JoachimSchipper · 2012-05-02T07:07:58.136Z · LW(p) · GW(p)
Or try to vary life along dimensions other than (un)"examined"; most people do feel they live lives worth living, after all.
(In general, I'm not sure we should be advocating suicide in all but the most extreme cases.)
Replies from: MBlume
↑ comment by [deleted] · 2012-05-02T03:06:06.632Z · LW(p) · GW(p)
Thoughts like that should not be encouraged. One should instead worry about preserving the life we've been given and committing actions which will lead to the preservation of others as well. What you just said sounds like a very schizophrenic thing to say (To me. I do however realize when I am stating an opinion regardless of how much evidence supports my hypothesis.).
Replies from: Desrtopa, gwern
↑ comment by Desrtopa · 2012-05-02T06:25:18.797Z · LW(p) · GW(p)
I've argued on a number of occasions on this site that people who're suicidal are usually not in a position to accurately judge the expected value of staying alive, but honestly, if a person's life really isn't worth living, why should they have to?
Replies from: Dentin, chaosmosis, None
↑ comment by Dentin · 2012-05-02T21:18:25.170Z · LW(p) · GW(p)
This is a question that is very close to me, and which I've been chewing over for the better part of a decade. I have had a close personal friend for many years with a history of mental illness; having watched their decline over time, I found myself asking this question. From a purely rational standpoint, there are many different functions that you can use calculate the value of suicide versus life. As long as you don't treat life as a sacred/infinite value ("stay alive at all costs"), you can get answers for this.
My problem is that a few years ago, I started noticing measures that were pro-suicide. As quality of life and situation declined, more and more measures flipped that direction. What do you do as an outside observer when most common value judgements all seem to point toward suicide as the best option?
It's not like I prefer this answer. What I want is for the person in question get their life together and use their (impressive) potential to live a happy, full life; what I want is another awesome person making the world we live in even more awesome. Instead, there is a person who as near as I can estimate actually contributes negative value when measured by most commonly accepted goals.
How much is the tradeoff worth? If I sacrifice the remainder of my rather productive life in an attempt to 'save' this person, have I done the right thing? I cannot in good conscience say yes.
These are obnoxious problems.
↑ comment by chaosmosis · 2012-05-02T16:48:54.683Z · LW(p) · GW(p)
"If a person's life really isn't worth living [objectively]" then the person should stop caring about flawed concepts like objective value. "If a person's life really isn't worth living [subjectively]" then they should work on changing their subjective values or changing the way that their life is so it is subjectively worth living. If neither of the above is possible, then they should kill themselves.
It's important that we recognize where the worth "comes from" as a potential solution to the problem.
This insight brought to you by my understanding of Friedrich Nietzsche. (Read his stuff!)
Replies from: Desrtopa, chaosmosis
↑ comment by Desrtopa · 2012-05-02T17:15:13.230Z · LW(p) · GW(p)
It's hard to say what it would even mean for moral value to be truly objective, but say that, if a person is alive, it will cause many people to suffer terribly. Should they stop caring about this in order to keep wanting to live?
If a person is living in inescapably miserable circumstances, changing their value system so they're not miserable anymore is easier said than done. And if it were easy, do you think it would be better to simply always change our values so that they're already met, rather than changing the world to satisfy our values?
Replies from: DanArmak, chaosmosis
↑ comment by DanArmak · 2012-05-02T18:00:36.476Z · LW(p) · GW(p)
Better to self-modify to suffer less due to not achieving your goals (yet), while keeping the same goals.
Replies from: Desrtopa, chaosmosis
↑ comment by chaosmosis · 2012-05-02T18:45:17.879Z · LW(p) · GW(p)
This doesn't make sense.
How do you retain something as a goal while removing the value that you place on it?
Replies from: Normal_Anomaly, DanArmak
↑ comment by Normal_Anomaly · 2012-05-02T19:07:25.437Z · LW(p) · GW(p)
I think DanArmak means modify the negative affect we feel from not achieving the goals while keeping the desire and motivation to achieve them.
EDIT: oops, ninja'd by DanArmak. Never mind.
↑ comment by DanArmak · 2012-05-02T19:06:19.351Z · LW(p) · GW(p)
Don't remove the value. Remove just the experience of feeling bad due to not yet achieving the value.
If I have a value/goal of being rich, this doesn't have to mean I will feel miserable until I'm rich.
Replies from: chaosmosis
↑ comment by chaosmosis · 2012-05-02T20:21:22.324Z · LW(p) · GW(p)
What you're implicitly doing here is divorcing goals from values (feelings are a value). Either that or you're thinking that there's something especially wrong related to negative incentives that doesn't apply to positive ones.
If you don't feel miserable when you're poor or, similarly, if you won't feel happier when you're rich, then why would you value being rich at all? If your emotions don't change in response to having or not having a certain something then that something doesn't count as a goal. You would be wanting something without caring about it, which is silly. You're saying we should remove the reasons we care about X while still pursuing X, which makes no sense.
Replies from: DanArmak↑ comment by DanArmak · 2012-05-02T20:48:36.257Z · LW(p) · GW(p)
you're thinking that there's something especially wrong related to negative incentives that doesn't apply to positive ones.
There's something terribly wrong about the way negative incentives are implemented in humans. I think the experience of pain (and the fear or anticipation of it) is a terrible thing and I wish I could self-modify so I would feel pain as damage/danger signals, but without the affect of pain. (There are people wired like this, but I can't find the name for the condition right now.)
Similarly, I would like to get rid of the negative affect of (almost?) everything else in life. Fear, grief, etc. They're the way evolution implemented negative reinforcement learning in us, but they're not the only possible way, and they're no longer needed for survival, if only we had the tools to replace them with something else.
If you don't feel miserable when you're poor or, similarly, if you won't feel happier when you're rich, then why would you value being rich at all?
Being rich is (as an example) an instrumental goal, not a terminal one. I want it because I will use the money to buy things and experiences that will make me feel good, much more than having the money (and not using it) would.
Replies from: Nick_Tarleton, chaosmosis↑ comment by Nick_Tarleton · 2012-05-03T23:41:44.598Z · LW(p) · GW(p)
I wish I could self-modify so I would feel pain as damage/danger signals, but without the affect of pain. (There are people wired like this, but I can't find the name for the condition right now.)
"pain asymbolia"
↑ comment by chaosmosis · 2012-05-02T23:01:11.590Z · LW(p) · GW(p)
Being rich is (as an example) an instrumental goal, not a terminal one. I want it because I will use the money to buy things and experiences that will make me feel good, much more than having the money (and not using it) would.
Treating it as an instrumental goal doesn't solve the problem, it just moves it back a step. Even if you wouldn't feel miserable by being poor because you magically eliminated negative incentives you would still feel less of the positive incentives when you are poor than when you were rich, even though richness is just the means to feeling better. All of this:
If your emotions don't change in response to having or not having a certain something then that something doesn't count as a goal. You would be wanting something without caring about it, which is silly. You're saying we should remove the reasons we care about X while still pursuing X, which makes no sense.
still applies.
(Except insofar as it might be altered by relevant differences between positive and negative incentives.)
Better to self-modify to suffer less due to not achieving your goals (yet), while keeping the same goals.
To clarify, what I'm contending is that this would only make sense as a motivational system if you placed positive value on achieving certain goals which you hadn't yet achieved. I think you agree with this part but I'm not sure. But I don't think we can justify treating positive incentives differently than negative ones.
I don't view the distinction between an absence of a positive incentive and the presence of a negative incentive the same way you do. I'm not even sure that I have any positive incentives which aren't derived from negative incentives.
Replies from: DanArmak↑ comment by DanArmak · 2012-05-03T16:42:04.233Z · LW(p) · GW(p)
Even if you wouldn't feel miserable by being poor because you magically eliminated negative incentives you would still feel less of the positive incentives when you are poor than when you were rich, even though richness is just the means to feeling better.
Negative and positive feelings are differently wired in the brain. Fewer positive feelings is not the same as more negative ones. Getting rid of negative feelings is very worthwhile even without increasing positive ones.
Replies from: chaosmosis↑ comment by chaosmosis · 2012-05-03T16:53:04.015Z · LW(p) · GW(p)
But the same logic justifies both, even if they are drastically different in other sorts of ways.
Forcing yourself to feel maximum happiness would make sense if forcing yourself to feel minimum unhappiness made sense. They both interact with utilitarianism and preference systems which are the only relevant parts of the logic. The degree or direction of the experience doesn't matter here.
Removing negative incentives justifies maxing out positive incentives = nihilism.
I mean, you can arbitrarily only apply it to certain incentives which is desirable because that precludes the nihilism. But that feels too ad hoc and it still would mean that you can't remove the reasons you care about something while continuing to think of it as a goal, which is part of what I was trying to get at.
So, given that I don't like nihilism or preference paralysis but I do support changing values sometimes, I guess that my overall advocacy is that values should only be modified to max out happiness / minimize unhappiness if happiness / no unhappiness is unachievable (or perhaps also if modifying those specific values helps you to achieve more value total through other routes). Maybe that's the path to an agreement between us.
If you have an insatiable positive preference, satiate it by modifying yourself to be content with what you have. If you can never be rid of a certain negative incentive, try to change your preferences so that you like it. Unfortunately, this does entail losing your initial goals. But it's not a very big loss to lose unachievable goals while still achieving the reasons the goals matter, so fulfilling your values by modifying them definitely makes sense.
Replies from: DanArmak↑ comment by DanArmak · 2012-05-03T21:28:52.653Z · LW(p) · GW(p)
Reducing bad experience was the original subject of discussion. As I said, it's worthwhile to reduce them even without increasing good experience. I never said I don't want to increase good experience - I do! As you say, both are justified.
I didn't mean to imply that I wanted one but not the other; I just said each one is a good thing even without the other. I'm sorry I created the wrong impression with my comments and didn't clarify this to begin with.
Of course when self-modifying to increase pleasure I'd want to avoid the usual traps - wireheading, certain distortions of my existing balance of values (things I derive pleasure from), etc. But in general I do want to increase pleasure.
I also think reducing negative affect is a much more urgent goal. If I had a choice between reducing pain and increasing pleasure in my life right now, I'd choose reducing pain; and the two cannot (easily) be traded. That's why I said before that "there's something wrong about negative [stuff]".
Replies from: chaosmosis↑ comment by chaosmosis · 2012-05-03T21:43:25.930Z · LW(p) · GW(p)
I agree with a lot of what you're saying, I made errors too, and IMHO apologizing doesn't make much sense, especially in the context of errors, but I'll apologize for my errors too because I desire to compensate for hypothetical status losses that might occur as a result of your apology, and also because I don't want to miss out any more than necessary on hypothetical status gains that might occur as a result of (unnecessary) apologies. But the desire to reciprocate is also within this apology, I'm not just calculating utilons here.
Sorry for my previous errors.
You said:
Of course when self-modifying to increase pleasure I'd want to avoid the usual traps - wireheading, certain distortions of my existing balance of values (things I derive pleasure from), etc. But in general I do want to increase pleasure.
I said:
I mean, you can arbitrarily only apply it to certain incentives which is desirable because that precludes the nihilism. But that feels too ad hoc and it still would mean that you can't remove the reasons you care about something while continuing to think of it as a goal, which is part of what I was trying to get at.
I don't know how you avoid this problem except by only supporting modifying incentives in cases of unachievable goals. I'd like to avoid it but I would like to see a mechanism for doing so explicitly stated. If you don't know how to avoid this problem yet, that's fine, neither do I.
Replies from: DanArmak↑ comment by DanArmak · 2012-05-03T22:03:11.009Z · LW(p) · GW(p)
Apologizing is indeed status signaling; I feel better in conversations where it is not necessary or expected.
When I said I was sorry, I meant it in the sense of "I regret". I didn't mean it as an apology and wasn't asking for you to reciprocate. (Also, the level of my idiomatic English tends to vary a lot through the day.)
Now I regret using the expression "sorry"!
I'm glad we agree about apologies :-)
As for the problem of modifying (positive) preferences: I don't have a general method, and haven't tried to work one out. This is because I don't have a way to self-modify like this, and if I acquire one in the future, it will probably have limitations, strengths and weaknesses, which would guide the search for such a general method.
That said, I think that in many particular cases, if I were presented with the option to make a specific change, and enough precautions were available (precommitment, gradual modifications, regret button), making the change might be safe enough - even without solving the general case.
I think this also applies to reducing negative affect (not that we have the ability to that, either) - and the need is more urgent there.
Replies from: thomblake, chaosmosis, chaosmosis↑ comment by thomblake · 2012-05-04T16:26:35.197Z · LW(p) · GW(p)
Apologizing is indeed status signaling;
It's not just about status. It also communicates "This was an accident, not on purpose" and/or "If given the opportunity, I won't do that again" which are useful information.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-05-04T16:46:29.871Z · LW(p) · GW(p)
It's not clear to me where the line between status signalling and communicating useful information even is.
My dog, when she does something that hurts me, frequently engages in behaviors that seem designed to mollify me. Now, that might be because she's afraid I'll punish her for the pain and decides to mollify me to reduce the chances of that. It might be because she's afraid I'll punish her for the status challenge and decides to anti-challenge me. It might be both. It might be that she has made no decisions at all, and that mollifying behavior is just an automatic response to having caused pain, or given challenge. It might be that the behavior isn't actually mollifying behavior at all, whether intentional or automatic, and it's a complete coincidence that I respond to it that way. Or it might not be a coincidence, but rather the result of my having been conditioned to respond to it that way. It might be something else altogether, or some combination.
All of that said, I have no problem categorizing her behavior as "apologizing."
I often find myself apologizing for things in ways that feel automatic, and I sometimes apologize for things in ways that feel deliberate. I have quite a bit more insight into what's going on in my head than my dog's head when this happens, but much of it is cognitively impermeable, and a lot of the theories above seem to apply pretty well to me too.
↑ comment by chaosmosis · 2012-05-03T22:05:13.186Z · LW(p) · GW(p)
Neat, then we agree on all of that. I also would prefer something ad hoc to the "solution" I thought of.
Replies from: DanArmak↑ comment by DanArmak · 2012-05-03T22:13:41.508Z · LW(p) · GW(p)
The Dark Arts are as nothing besides the terrible power of signaling!
I've read - and I have no idea how much of this is true - that in some Eastern cultures you can get bonus points in a conversation by apologizing for things that weren't in fact offensive before you started apologizing; or taking the blame for minor things that everyone knows you're not responsible for; or saying things that amount to "I'm a low status person, and I apologize for it", when the low-status claim is factually untrue and, again, everyone knows it...
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-05-03T22:33:31.393Z · LW(p) · GW(p)
I live in the Northeast US, which isn't especially "Eastern", but I've nevertheless found that taking the blame for things that everyone knows I'm not responsible for to be a very useful rhetorical trick, at least in business settings.
Replies from: DanArmak, chaosmosis↑ comment by DanArmak · 2012-05-03T23:05:21.687Z · LW(p) · GW(p)
(Warning: this came out somewhat as a rant. I don't have the energy to rewrite it better right now.)
Honestly: stories like this terrify me. This is not exaggeration: I feel literal terror when I imagine what you describe.
I like to think that I value honesty in conversations and friendships - not Radical Honesty, the ordinary kind. I take pride in the fact that almost all of my conversations with friends have actual subjects, which are interesting for everyone involved; that we exchange information, or at least opinions and ideas. That at least much of the time, we don't trade empty, deceptive words whose real purpose is signaling status and social alliance.
And then every once in a while, although I try to avoid it, I come up against an example - in real life too - of this sort of interaction. Where the real intent could just as well be transmitted with body language and a few grunts. Where consciousness, intelligence, everything we evolved over the last few million years and everything we learned over the last few thousand, would be discarded in a heartbeat by evolution, if only we didn't have to compete against each other in backstabbing...
If I let myself become too idealistic, or too attached to Truth, or too ignorant and unskilled at lying, this will have social costs; my goals may diverge too far from many other humans'. I know this, I accept this. But will it mean that the vast majority of humanity, who don't care about that Truth nonsense, will become literally unintelligible to me? An alien species I can't understand on a native level?
Will I listen to "ordinary" people talking among themselves one day, doing ordinary things like taking the blame for things they're not responsible for so they can gain status by apologizing, and simply be unable to understand what they're saying, or even notice the true level of meaning? Is it even plausible to implement "instinctive" status-oriented behavior on a conscious, deliberate level? (Robin Hanson would say no; deceiving yourself on the conscious level is the first step in lying unconsciously.)
Maybe it's already happened to an extent. (I've also seen descriptions that make mild forms of autism and related conditions sound like what I'm describing.) But should I immerse myself more in interaction with "ordinary" people, even if it's unpleasant to me, for fear of losing my fluency in Basic Human? (For that matter, can I do it? Others would be good at sensing that I'm not really enjoying a Basic Human conversation, or not being honest in it.)
Replies from: dlthomas, chaosmosis↑ comment by dlthomas · 2012-05-03T23:08:31.354Z · LW(p) · GW(p)
Linux Kernel Management Style says to be greedy when it comes to blame.
Replies from: DanArmak↑ comment by DanArmak · 2012-05-03T23:15:58.323Z · LW(p) · GW(p)
Here are some relevant paras:
It's not actually that hard to accept the blame, especially if people kind of realize that it wasn't all your fault. Which brings us to the best way of taking the blame: do it for another guy. You'll feel good for taking the fall, he'll feel good about not getting blamed, and the guy who lost his whole 36GB porn-collection because of your incompetence will grudgingly admit that you at least didn't try to weasel out of it.
Then make the developer who really screwed up (if you can find him) know in private that he screwed up. Not just so he can avoid it in the future, but so that he knows he owes you one. And, perhaps even more importantly, he's also likely the person who can fix it. Because, let's face it, it sure ain't you.
ETA: I take back my initial reaction. It's not completely different from what TheOtherDave described. But there are some important differences from at least what I described and had in mind:
- If someone else already accepted the blame, it doesn't advise you to try to take the blame away from him and onto yourself, especially if he's really the one at fault!
- It doesn't paint being blamed as being a net positive in some situations, so there's no incentive to invent things to be blamed for, or to blow them up out of all proportion.
- Telling off the one really at fault, in private, is an important addition - especially if everyone else is tacitly aware you'll do this, even if they don't always know who was at fault. That's taking responsibility more than taking blame.
↑ comment by Bugmaster · 2012-05-03T23:42:49.611Z · LW(p) · GW(p)
In addition, there's a difference between a random person taking blame for the actions of another random person; and a leader taking blame for the mistakes of one of his subordinates. As far as I can tell, the situation described in the article you linked to is a bit closer to the second scenario.
↑ comment by chaosmosis · 2012-05-04T03:22:43.264Z · LW(p) · GW(p)
See my above comment, I manage to subvert Basic Human conversation fairly well in real life.
I empathize with all of your complaints. One technique is explicitly pointing out when you're manipulating other people (like when I said I empathize with all of your complaints) while still qualifying that within the bounds of the truth (like I will do right now, because despite the manipulativeness of the disclosure involved my empathy was still real [although you have no real reason to believe so and acknowledge that {although that acknowledgement was yet another example of manipulation ([{etc}]) }]).
For another less self referential example, see the paragraph I wrote way above this where I explicitly pointed out some problems of the norms involved with apologies, but then proceeded to apologize anyway. I think that one worked very well. My apology for apologizing is yet another example, that one also worked fairly well.
(I hope the fact that I'm explicitly telling you all of this verifies my good intentions, that is what the technique depends upon, also I don't want you to hate me based on what is a legitimate desire to help [please cross apply the above self referential infinitely recursive disclaimer].)
Although in real life, I'm much less explicit about manipulation, I just give it a subtle head nod, but people usually seem to understand because of things like body language, etc. It probably loses some of its effectiveness without the ability to be subtle (or when you explain the concept itself while simultaneously using the concept, like I attempted to do in this very comment). Explaining the exact parts of the technique is hard without being able to give an example, which is hard because I can't give the example through text because of the nature of real life face-to-face communication.
Blargh.
↑ comment by chaosmosis · 2012-05-04T03:20:11.416Z · LW(p) · GW(p)
I have adopted the meta-meta strategy of being slightly blunt in real life but in such a way that reveals that I am 1. being blunt for the purpose of allowing others to do this to me 2. trying to reveal disdain for these types of practices 3. knowingly taking advantage of these types of practices despite my disdain for them. People love it in real life, when it's well executed. I'm tearing down the master's house with the master's tools in such a way that makes them see me as the master. It's insidiously evil and I only do it because otherwise everyone would hate me because I'm so naturally outspoken.
That sounds really braggy, please ignore the bragginess, sorry.
↑ comment by chaosmosis · 2012-05-03T22:07:16.621Z · LW(p) · GW(p)
I APOLOGIZE FOR MY APOLOGY. :(
↑ comment by chaosmosis · 2012-05-02T18:41:09.743Z · LW(p) · GW(p)
If you cannot change the world to satisfy your values then your values should change, is what I advocate. To answer your tradeoff example: Choose whichever one you value more, then make the other unachievable negative value go away.
And I don't know how to solve the problem I mention in my other comment below.
↑ comment by chaosmosis · 2012-05-02T16:53:45.676Z · LW(p) · GW(p)
There's an interesting issue here.
The agent might have a constitution such that they don't place subjective value on changing their subjective values to something that would be more fulfillable. The current-agent would prefer that they not change their values. The hypothetical-agent would prefer that they have already changed their values. I was just reading the posts on Timeless Decision Theory and it seems like this is a problem that TDT would have a tough time grappling with.
I'm also feeling that it's plausible that someone is systematically neg karmaing me again.
↑ comment by [deleted] · 2012-05-02T16:22:27.363Z · LW(p) · GW(p)
They don't "have" to keep going, but striving for better is a more optimistic encouragement, is it not? I would rather teach someone that they have worth than tell them that suicide (which will undoubtedly have negative effects on their family, if they have a family that loves them) is what I want for them too.
↑ comment by gwern · 2012-05-02T03:20:12.827Z · LW(p) · GW(p)
Seriously lame comment, man. Suicide is one of the classic (literally) motifs of philosophy, starting with Socrates (or earlier, with Empedocles!) and continuing right up to the modern day with Camus and later thinkers.
Replies from: DanArmak, None↑ comment by [deleted] · 2012-05-02T06:52:26.620Z · LW(p) · GW(p)
That shows no evidence supporting the claim that it's an efficient mentality one should willingly accept. Obviously nature is trying everything it can to preserve the life it has created. You wish to go against the will of life's plan for preservation.
↑ comment by Jayson_Virissimo · 2012-05-02T05:39:47.162Z · LW(p) · GW(p)
I'm pretty sure Plato was quoting Socrates.
Replies from: dlthomas
comment by Vaniver · 2012-05-10T16:07:28.476Z · LW(p) · GW(p)
If rational thought is useful at all, then it must be maintained as a practice. Parents must teach it to their children, teachers must teach it to their students, and people must respect each other for their rationality. If the practice of rational thought is not to be lost, some group of people, at least, will have to maintain it.
comment by mindspillage · 2012-05-14T14:19:05.214Z · LW(p) · GW(p)
"In war you will generally find that the enemy has at any time three courses of action open to him. Of those three, he will invariably choose the fourth." —Helmuth von Moltke
(quoted in "Capturing the Potential of Outlier Ideas in the Intelligence Community", via Bruce Schneier)
Replies from: fubarobfusco↑ comment by fubarobfusco · 2012-05-15T01:49:56.715Z · LW(p) · GW(p)
There is a corollary of the Law of Fives in Discordianism, as follows: Whenever you think that there are only two possibilities (X, or else Y), there are in fact at least five: X; Y; X and Y; neither X nor Y; and J, something you hadn't thought of before.
Replies from: roystgnr↑ comment by roystgnr · 2012-05-17T03:25:42.578Z · LW(p) · GW(p)
Is this a quotation or paraphrase of some famous quote? Googling "discordianism" "law of fives" "two possibilities" only comes up with a handful of hits, all unrelated except for this lesswrong.com page itself.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2012-05-17T12:18:30.921Z · LW(p) · GW(p)
Probably this:
“All statements are true in some sense, false in some sense, meaningless in some sense, true and false in some sense, true and meaningless in some sense, false and meaningless in some sense, and true and false and meaningless in some sense.” ―Malaclypse the Younger, Principia Discordia, Or, How I Found Goddess and What I Did to Her When I Found Her: The Magnum Opiate of Malaclypse the Younger
found here.
The Principia Discordia was a basis for a lot of the ideas in Illuminatus! by Wilson and Shea. The Illuminati card game doesn't begin to do Illuminatus! justice.
comment by Old_Rationality · 2012-05-02T11:44:04.306Z · LW(p) · GW(p)
The atmosphere of political parties, whether in France or England, is not congenial to the formation of an impartial judgment. A Minister, who is in the thick of a tough parliamentary struggle, must use whatever arguments he can to defend his cause without inquiring too closely whether they are good, bad, or indifferent. However good they may be, they will probably not convince his political opponents, and they can scarcely be so bad as not to carry some sort of conviction to the minds of those who are predisposed to support him.
Evelyn Baring, Earl of Cromer, Modern Egypt
comment by AlexSchell · 2012-05-02T02:32:49.357Z · LW(p) · GW(p)
[Instrumentalism about science] has a long and rather sorry philosophical history: most contemporary philosophers of science regard it as fairly conclusively refuted. But I think it’s easier to see what’s wrong with it just by noticing that real science just isn’t like this. According to instrumentalism, palaeontologists talk about dinosaurs so they can understand fossils, astrophysicists talk about stars so they can understand photoplates, virologists talk about viruses so they can understand NMR instruments, and particle physicists talk about the Higgs Boson so they can understand the LHC. In each case, it’s quite clear that instrumentalism is the wrong way around. Science is not “about” experiments; science is about the world, and experiments are part of its toolkit.
Replies from: Jayson_Virissimo↑ comment by Jayson_Virissimo · 2012-05-02T13:30:49.323Z · LW(p) · GW(p)
This criticism of instrumentalism only works in so far as instrumentalism is descriptive, rather than prescriptive.
comment by Mark_Eichenlaub · 2012-05-02T05:33:38.096Z · LW(p) · GW(p)
If you're trying to choose between two theories and one gives you an excuse for being lazy, the other one is probably right.
Paul Graham “What You’ll Wish You’d Known” http://paulgraham.com/hs.html
Replies from: JGWeissman, juped, thomblake, Grognor↑ comment by Grognor · 2012-05-03T01:08:03.398Z · LW(p) · GW(p)
Almost the same as the one Eliezer used here
Replies from: thomblake↑ comment by thomblake · 2012-05-03T15:43:56.136Z · LW(p) · GW(p)
The quote in that link makes a good point: If one gives you an excuse to be lazy, then you might be privileging the hypothesis; it could be that it was only raised to the level of attention so that you can avoid work. Thus, the lazy choice really does get a big hit to its prior probability for being lazy.
But it's still false that the other one is probably right. In general, if a human is choosing between two theories, they're both probably insanely wrong. For rationalists, you can charitably drop "insanely" from that description.
Replies from: dlthomas↑ comment by dlthomas · 2012-05-05T17:51:01.790Z · LW(p) · GW(p)
Your first paragraph is a good analysis (enough to merit an upvote of the comment as a whole). Your second seems redundant; I don't think anyone would interpret the quoted phrase of non-technical English to mean that you should actually raise your estimate of the theory that doesn't permit laziness relative to other theories not under consideration, and if you have two theories both of which are equally wrong, it doesn't matter much what you do to differentiate them.
comment by Mark_Eichenlaub · 2012-05-02T05:29:43.612Z · LW(p) · GW(p)
I don't think we can get much more specific without starting to be mistaken.
Paul Graham, "Is It Worth Being Wise?" http://paulgraham.com/wisdom.html
Replies from: shokwave↑ comment by shokwave · 2012-05-02T06:10:13.760Z · LW(p) · GW(p)
Noticing this moment is important!
Of course, we shouldn't stop when we notice this. We should keep getting more specific, and we should begin testing whether we are mistaken.
Replies from: DanielLC↑ comment by DanielLC · 2012-05-02T22:42:46.951Z · LW(p) · GW(p)
More accurately, we should test more specific things, then become more specific. First make the test, then update the beliefs.
Replies from: dlthomas↑ comment by dlthomas · 2012-05-02T23:19:37.494Z · LW(p) · GW(p)
I think we're splitting unnecessary hairs here; obviously we shouldn't update our belief to something more specific than we can justify. At the same time, we want to formulate hypotheses in advance of the tests, and test whether these hypotheses are mistaken or worthy of promotion to belief, which to me seems a perfectly reasonable interpretation of what shokwave wrote.
comment by baiter · 2012-05-01T13:07:24.550Z · LW(p) · GW(p)
My function is to raise the possibility, 'Hey, you know, some of this stuff might be bullshit.'
-- Robert Anton Wilson
Replies from: cousin_it↑ comment by cousin_it · 2012-05-01T19:29:08.862Z · LW(p) · GW(p)
Contrarians of LW, if you want to be successful, please don't follow this strategy. Chances are that many people have raised the same possibility before, and anyway raising possibilities isn't Bayesian evidence, so you'll just get ignored. Instead, try to prove that the stuff is bullshit. This way, if you're right, others will learn something, and if you're wrong, you will have learned something.
Replies from: fubarobfusco, handoflixue, fortyeridania↑ comment by fubarobfusco · 2012-05-01T21:39:57.524Z · LW(p) · GW(p)
For what it's worth, some context:
JW: To what extent do you think you've become a part of the New Age movement? The stalls in the atrium tonight seemed to be concerned with a lot of New Age material, and to an extent the way you've been talking about Virtual Realities and mind expansion you seem to be almost a forerunner of the movement.
RAW: The Berkeley mob once called Leary and me "the counter-culture of the counter-culture." I'm some kind of antibody in the New Age movement. My function is to raise the possibility, "Hey, you know, some of this stuff might be bullshit."
— http://media.hyperreal.org/zines/est/intervs/raw.html
Wilson had a tendency to come across as a skeptic among mystics and a mystic among skeptics.
Replies from: elharo↑ comment by elharo · 2013-05-09T10:54:36.788Z · LW(p) · GW(p)
Most scientists, skeptics, theists, and new agers of various stripes share a common (and not necessarily wrong) belief in the truth. They differ primarily in how they believe one gets to the truth, and under what conditions, if ever, one should change one's mind about the truth.
Robert Anton Wilson was unusual in that he really tried to believe multiple and contradictory claimed truths, rather than just one. For instance, on Monday, Wednesday and Friday he might believe astrology worked. Then on Tuesday, Thursday, and Saturday he'd believe astrology was bullshit. On Sunday he'd try to believe both at the same time. This wasn't indecision but rather a deliberate effort to change his mind, and see what happened. That is, he was brain hacking by adjusting his belief system. He was not walled in by a need to maintain a consistent belief system. He deliberately believed contradictory things.
Call a believer someone who believes proposition A. Call a nonbeliever someone who believes proposition NOT A. Call an a-gnostic someone who doesn't assign a much higher probability to one of A and NOT A. Wilson would be a multi-gnostic: that is, someone who believes A and believes NOT A, someone who is both a believer and a non-believer. This is how he came across as a skeptic among mystics and a mystic among skeptics. He was both, and several other things besides.
↑ comment by handoflixue · 2012-05-01T19:49:25.029Z · LW(p) · GW(p)
I doubt I can do much to prove a lot of the 'core' concepts of rationality, but I can do a lot to point people towards it and shake up their belief that there isn't such a proof.
↑ comment by fortyeridania · 2012-05-05T11:44:56.991Z · LW(p) · GW(p)
(1) Insisting that those who disagree with you prove their opinions sets too high a bar for them. Being light means surrendering to the truth ASAP.
(2) Raising possibilities is Bayesian evidence, assuming the possibility-raiser is a human, not a random-hypothesis generator.
Replies from: cousin_it, abramdemski↑ comment by abramdemski · 2012-05-21T05:01:18.975Z · LW(p) · GW(p)
I think "try to prove" was an importantly different word choice from "prove" in cousin_it's comment. The point is that in the context of a "new age" movement, it may be enough to raise the possibility; people really may not be thinking about it. In the context of Less Wrong, that is not usually enough; people are often already thinking about evidence for and against.
comment by CasioTheSane · 2012-05-05T19:01:54.891Z · LW(p) · GW(p)
We're even wrong about which mistakes we're making.
-Carl Winfeld
comment by J_Taylor · 2012-05-03T18:16:35.575Z · LW(p) · GW(p)
It is not seeing things as they are to think first of a Briareus with a hundred hands, and then call every man a cripple for only having two. It is not seeing things as they are to start with a vision of Argus with his hundred eyes, and then jeer at every man with two eyes as if he had only one. And it is not seeing things as they are to imagine a demigod of infinite mental clarity, who may or may not appear in the latter days of the earth, and then to see all men as idiots.
-G.K. Chesterton
Replies from: thomblake, hairyfigment, khafra↑ comment by thomblake · 2012-05-03T18:39:43.979Z · LW(p) · GW(p)
Related: this slide
↑ comment by hairyfigment · 2012-05-04T19:03:39.780Z · LW(p) · GW(p)
This struck me as an odd position for a Christian apologist. I know that if I didn't see us all as idiots, I might think we all deserved to die -- oh, wait.
Replies from: smijer↑ comment by smijer · 2012-05-05T18:18:47.919Z · LW(p) · GW(p)
I'm not sure Chesterton deserves the epithet of apologist. Christian yes... evangelist, of a sort. I see him as a cut above the apologist class of Christian commentators.
Replies from: hairyfigment↑ comment by hairyfigment · 2012-05-06T22:34:32.949Z · LW(p) · GW(p)
I don't know that "apologist" counts as a natural class, but he definitely produced Christian apologetics. He may have preferred to call them 'refutations' of non-Christian or atheist doctrines.
comment by Ezekiel · 2012-05-04T22:52:03.681Z · LW(p) · GW(p)
When scientists discuss papers:
"I don't think this inference is entirely reasonable. If you're using several non-independent variables you're liable to accumulate more error than your model accounts for."
When scientists discuss grants:
"A guy who worked at the NSF once told me if we light a candle inside this jackal skull, the funders will smile upon our hopes."
"I'll get the altar!"
~ Zach Weiner, SMBC #2559
Replies from: fortyeridania↑ comment by fortyeridania · 2012-05-05T12:24:48.659Z · LW(p) · GW(p)
(1) Do people act more rationally when their interests are more directly concerned? (2) Are scientists' interests more directly concerned with winning grants than with making correct scientific inferences?
If the answer to both is "yes," then I think we should raise our confidence in jackal rituals relative to the current methodologies of statistical inference.
Replies from: khafra, Ezekiel↑ comment by khafra · 2012-05-07T15:12:38.217Z · LW(p) · GW(p)
Fortunately for jackals, there's an unjustified independence assumption here. Other stuff I've read strongly suggests that the outcomes of published research are strongly influenced by the expectations of the researchers about future grant money.
↑ comment by Ezekiel · 2012-05-05T17:59:52.543Z · LW(p) · GW(p)
(1) Do people act more rationally when their interests are more directly concerned?
Hell, no. Religion (especially the more commandment-heavy ones like Islam and Orthodox Judaism) being the best example, with interpersonal relationships running a close second.
The idea of that strip, as I understand it, is that scientists pretty much only act rationally inside the lab.
Replies from: Manfred
comment by MichaelGR · 2012-05-03T17:31:34.084Z · LW(p) · GW(p)
“No matter how busy you may think you are, you must find time for reading, or surrender yourself to self-chosen ignorance.”
- Confucius
↑ comment by sixes_and_sevens · 2012-05-03T21:29:42.535Z · LW(p) · GW(p)
"Well it's alright for you, Confucius, living in 5th Century feudal China. Between all the documentation I have to go through at work, and all the blogs I'm following while pretending to work, and all the textbooks I have to get through before my next assignment deadline, I don't have time to read!"
comment by Jayson_Virissimo · 2012-05-01T07:48:47.141Z · LW(p) · GW(p)
Proper treatment will cure a cold in seven days, but left to itself a cold will hang on for a week.
Replies from: RobinZ
↑ comment by RobinZ · 2012-05-01T14:53:18.915Z · LW(p) · GW(p)
Why that citation?
Edit: Question answered below.
Replies from: Jayson_Virissimo↑ comment by Jayson_Virissimo · 2012-05-01T16:09:01.716Z · LW(p) · GW(p)
Why that citation?
What's wrong with my citation?
Replies from: None↑ comment by [deleted] · 2012-05-01T17:19:29.752Z · LW(p) · GW(p)
I did some checks, and that quote appears to have been said by Darrell Huff. Links below:
http://www-stat.wharton.upenn.edu/~steele/HoldingPen/Huff/Huff.htm
http://motd.ambians.com/quotes.php/name/linux_medicing/toc_id/1-1-20/s/47
Are you sure it was Henry G. Felsen?
Replies from: Jayson_Virissimo↑ comment by Jayson_Virissimo · 2012-05-02T00:19:36.657Z · LW(p) · GW(p)
I did some checks and that appears to be said by Darrell Huff...Are you sure it was Henry G. Felsen?
According to Darrell Huff, it was first said by Henry G. Felsen:
As Henry G. Felsen, a humorist and no medical authority, pointed out quite a while ago, proper treatment will cure a cold in seven days, but left to itself a cold will hang on for a week.
Huff, Darrell. How to lie with statistics. New York: Norton, 1993.
Replies from: RobinZ
comment by Old_Rationality · 2012-05-02T11:59:39.194Z · LW(p) · GW(p)
His mind refused to accept a simple inference from simple facts, which were patent to all the world. The very simplicity of the conclusion was of itself enough to make him reject it, for he had an elective affinity for everything that was intricate. He was a prey to intellectual over-subtlety.
Evelyn Baring, Earl of Cromer, Modern Egypt
comment by Stephanie_Cunnane · 2012-05-02T05:11:15.026Z · LW(p) · GW(p)
Are you better off than you were one year ago, one month ago, or one week ago?
-Tim Ferriss, The 4-Hour Workweek
Replies from: Mass_Driver↑ comment by Mass_Driver · 2012-05-02T23:37:12.197Z · LW(p) · GW(p)
Has anyone tried to put Ferriss's 4-Hour Workweek plan into practice? If so, did it make you better off than you were a month ago?
EDIT: Ferriss recommends (among other things) that readers invent and market a simple product that can be sold online and manufactured in China, yielding a steady income stream that requires little or no ongoing attention. There are dozens of anecdotes on his website and in his book that basically say "I heard that idea, I tried it, it worked, and now I'm richer and happier." These anecdotes (if true) indicate that the plan is workable for at least some people. What I don't see in these anecdotes is people who say "I really didn't think of myself as an entrepreneur, but I forced myself to slog through the exercises anyway, and then it worked for me!"
So, I'm trying to elicit that latter, more dramatic kind of anecdote from LWers. It would help me decide if most of the value in Ferriss's advice lies in simply reminding born entrepreneurs that they're allowed to execute a simple plan, or if Ferriss's advice can also enable intelligent introverts with no particular grasp of the business world to cast off the shackles of office employment.
Replies from: knb, sometimes_rational↑ comment by knb · 2012-05-03T03:37:50.887Z · LW(p) · GW(p)
I have, and yes it made me much better off (although I wouldn't really describe it as a "plan", since it's more "meta" than I think of "plans" as being).
Replies from: Mass_Driver↑ comment by Mass_Driver · 2012-05-04T08:20:41.647Z · LW(p) · GW(p)
Cool! So, what was your pre-4HWW lifestyle like, and how did it change?
↑ comment by sometimes_rational · 2012-05-08T11:44:55.038Z · LW(p) · GW(p)
There are other resources that recommend this practice. Steve Pavlina is currently running a series on passive income on his blog that looks interesting as well.
I don't know if the recommendations made in 4-Hour workweek or that blog are sustainable in the real world without a large amount of "luck".
comment by komponisto · 2012-05-01T15:05:58.262Z · LW(p) · GW(p)
[S]top whining and start hacking.
-- Paul Graham
(Arguably a decent philosophy of life, if a bit harshly expressed for my taste.)
Replies from: DanArmak, MixedNuts, MarkusRamikin, FiftyTwo↑ comment by MarkusRamikin · 2012-05-05T07:50:12.532Z · LW(p) · GW(p)
Kane: Quit griping!
Lambert: I like griping.
(from Alien)
↑ comment by FiftyTwo · 2012-05-02T00:00:02.444Z · LW(p) · GW(p)
Stop whining and do something
Might be a better phrasing? It also accounts for doing good things even if you can't solve the current problem.
Replies from: dlthomas, komponisto↑ comment by dlthomas · 2012-05-02T01:21:24.741Z · LW(p) · GW(p)
So long as it doesn't lead to "We have to do something; X is something; ergo, we must X!"
Replies from: FiftyTwo↑ comment by FiftyTwo · 2012-05-02T01:32:16.420Z · LW(p) · GW(p)
True, but very few things are less effective than whining.
Replies from: Eugine_Nier, dlthomas, None↑ comment by Eugine_Nier · 2012-05-02T04:29:17.988Z · LW(p) · GW(p)
Actually, while whining rarely accomplishes anything, a lot of things anti-accomplish something, i.e., they make the problem worse.
↑ comment by dlthomas · 2012-05-02T02:02:38.342Z · LW(p) · GW(p)
True. Perhaps:
Stop whining; do something effective if you can find it.
Replies from: fubarobfusco
↑ comment by fubarobfusco · 2012-05-03T03:20:02.783Z · LW(p) · GW(p)
"If you can find it" invites beliefs. Do something effective, or pick a different topic.
↑ comment by [deleted] · 2012-05-02T02:13:32.762Z · LW(p) · GW(p)
Whining aims to let people know that there is a problem that needs to be solved. That sounds like a relatively effective way to let the world know that we still have much to do.
Replies from: mimfadora↑ comment by mimfadora · 2012-05-02T04:26:44.130Z · LW(p) · GW(p)
I get what you are saying but, be that as it may, knowing that there is still much to do and actually doing something about it are two completely different things. And whining may also prove to be counterproductive since it is often perceived to be annoying thus people are less likely to help or take you seriously.
Replies from: None↑ comment by [deleted] · 2012-05-02T07:02:43.997Z · LW(p) · GW(p)
Actually, most people who find whining annoying make every attempt to eliminate it. It can be counter-productive if the story it has to tell is perceived in an incorrect manner. You will do nothing if you do not know that there are things to do.
↑ comment by komponisto · 2012-05-02T02:29:40.878Z · LW(p) · GW(p)
The "stop whining" part is the harsh part; the "start hacking" part is beautiful.
comment by Endovior · 2012-05-12T04:04:17.670Z · LW(p) · GW(p)
People are happy to judge each other according to what they think of as standards, while thinking their own particular case is, well… particular. It’s different for you because you have reasons, everybody else just has excuses.
--Hazel, Tales of MU
comment by Jayson_Virissimo · 2012-05-01T07:58:52.776Z · LW(p) · GW(p)
Scientific Realism is the only philosophy that doesn't make the success of science a miracle.
Replies from: Jayson_Virissimo, RobinZ, J_Taylor
↑ comment by Jayson_Virissimo · 2012-05-01T16:05:16.431Z · LW(p) · GW(p)
I claim that the success of current scientific theories is no miracle. It is not even surprising to the scientific (Darwinist) mind. For any scientific theory is born into a life of fierce competition, a jungle red in tooth and claw. Only the successful theories survive — the ones which in fact latched on to actual regularities in nature.
↑ comment by RobinZ · 2012-05-01T14:53:45.962Z · LW(p) · GW(p)
One quote per post, please.
Edit: Belated thanks!
Replies from: RobertLumley, Jayson_Virissimo↑ comment by RobertLumley · 2012-05-12T01:43:47.746Z · LW(p) · GW(p)
If they are strongly related, reply to your own comments.
I'm assuming Jayson_Virissimo felt they were strongly related.
Replies from: RobinZ↑ comment by RobinZ · 2012-05-12T05:25:42.994Z · LW(p) · GW(p)
Indeed he did - but the two quotes were in the same comment until after I posted my request. That's why there's an asterisk after the timestamp on the original quote post - the comment was edited to remove the Bas van Fraassen quote.
Replies from: RobertLumley↑ comment by RobertLumley · 2012-05-12T05:40:46.768Z · LW(p) · GW(p)
Ah, my apologies. That's what I get for reading the thread 11 days late. :-)
Replies from: RobinZ↑ comment by Jayson_Virissimo · 2012-05-01T16:01:03.533Z · LW(p) · GW(p)
The second quote is responding to the claim in the first quote. It is a pseudo-conversation.
↑ comment by J_Taylor · 2012-05-01T21:54:17.687Z · LW(p) · GW(p)
Putnam of all people really should have known better than to use the word 'miracle'.
Replies from: hairyfigment↑ comment by hairyfigment · 2012-05-03T07:56:27.220Z · LW(p) · GW(p)
Replies from: J_Taylor↑ comment by J_Taylor · 2012-05-03T18:13:42.429Z · LW(p) · GW(p)
There is no dominant conceptual analysis of 'miracle' such that Putnam's sentence has a clear and distinct meaning. (I may be incorrect about this; I do not follow Philosophy of Religion.) Of course, since Putnam was writing to an extremely secular audience (by American standards), 'miracle' is a useful slur that essentially translates to 'WTF is this I don't even'.
comment by fubarobfusco · 2012-05-05T22:11:48.577Z · LW(p) · GW(p)
The Imaginary Mongoose
Mr R. G. Knowles, on being asked what he considered to be the best story he had ever heard, instanced the following: —
An inquisitive gentleman, riding in a carriage in one of the London tube railways, noticed that a man opposite him carried upon his knees a small black box of somewhat peculiar construction. The inquisitive one eyed it furtively for a brief while, then, unable to restrain his curiosity, he leaned forward and remarked: —
"You seem to take great care of that box, sir. May I ask what it contains?"
"Certainly. It contains a mongoose," was the reply.
"Oh, indeed!" exclaimed the other, his curiosity still unsatisfied. "A mongoose! And pray, what is it for?"
"Well, the fact is," explained the owner of the box, lowering his voice, "I have got a friend who has got delirium tremens, and he fancies he sees snakes. Now, the mongoose, you know, kills snakes, so I am taking it to him."
"—Dear me!" cried the surprised recipient of this piece of information. "But—but" — here he thought hard for several seconds — "but surely you do not want a real mongoose to kill imaginary snakes!"
"Of course not," was the reply. "This is only an imaginary mongoose."
— Kilmore Free Press; Kilmore, Victoria, Australia; 14 December 1916.
A version of this story is found in Aleister Crowley's Magick in Theory and Practice, and a paraphrase is quoted in Robert Anton Wilson's Masks of the Illuminati, attributed to a fictionalized Crowley; that version may be found here.
Replies from: tgb, DanielLC↑ comment by tgb · 2012-05-05T23:56:12.509Z · LW(p) · GW(p)
Love the story, but the punchline shouldn't be spoiled in the title!
Replies from: fubarobfusco, homunq↑ comment by fubarobfusco · 2012-07-04T18:29:03.836Z · LW(p) · GW(p)
It's that way in the Australian original, although not in Crowley's or Wilson's version.
↑ comment by homunq · 2012-05-20T13:07:47.559Z · LW(p) · GW(p)
Agree. Even though anyone commenting or voting in this thread has already read the story, there are still others who haven't. Please edit the offending word out of the title, and I'll upvote the original post.
Please do not upvote this comment if you've already upvoted the original post.
comment by hairyfigment · 2012-05-03T07:36:27.951Z · LW(p) · GW(p)
When somebody picks my pocket, I'm not gonna be chasing them down so I can figure out whether he feels like he's a thief deep down in his heart. I'm going to be chasing him down so I can get my wallet back.
-- illdoc1 on YouTube
Replies from: Ezekiel, RobinZ↑ comment by Ezekiel · 2012-05-04T22:36:41.657Z · LW(p) · GW(p)
I'm not sure I get this. Could you explain, please?
Replies from: hairyfigment, Document, chaosmosis↑ comment by hairyfigment · 2012-05-05T01:07:58.872Z · LW(p) · GW(p)
- Straightforward consequentialism.
- If you hurt someone in an easily avoidable way, they'll respond to the hurt and not to what's in your heart.
I could go on a bit longer, but I'm drunk and this seems like plenty.
↑ comment by Document · 2012-05-09T21:52:15.761Z · LW(p) · GW(p)
Can we really hold someone responsible if he had no choice and his brain forced him to steal?
Edit: Missed the "you" in Ezekiel's question; sorry.
↑ comment by chaosmosis · 2012-05-04T22:56:20.565Z · LW(p) · GW(p)
I think it's just about pragmatism vs. philosophical reasoning or Deep Wisdom.
comment by dspeyer · 2012-05-01T15:59:32.645Z · LW(p) · GW(p)
It takes a very clever human to come up with a genuinely funny joke about goodness, but any human can be trained to act as if goodness were funny.
-- C.S. Lewis, The Screwtape Letters (from memory -- I may have the exact phrasing wrong).
You can replace "goodness" in this sentence with almost anything that tends to get flippantly rejected without thought.
Replies from: Tyrrell_McAllister, FiftyTwo, NancyLebovitz↑ comment by Tyrrell_McAllister · 2012-05-01T20:18:34.961Z · LW(p) · GW(p)
Good memory. The original reads:
Only a clever human can make a real Joke about virtue, or indeed about anything else; any of them can be trained to talk as if virtue were funny.
↑ comment by FiftyTwo · 2012-05-02T00:03:33.015Z · LW(p) · GW(p)
Not sure if finding something funny in the context of a joke necessarily leads to one not taking it seriously in other contexts. [E.g. when xkcd and smbc make science jokes I don't think my belief in the science they are referencing diminishes.]
Replies from: dspeyer↑ comment by dspeyer · 2012-05-13T18:09:43.016Z · LW(p) · GW(p)
When xkcd and smbc make science jokes, they're real jokes written by clever humans.
Flippancy is more like Dell's recent "shut up bitch" scandal and the "it's a joke, laugh" reactions to it. Mads Christensen presented no substantive evidence that women are unable to contribute to IT; he just tried to train the crowd to regard the very idea of a capable woman as if it were funny.
↑ comment by NancyLebovitz · 2012-05-03T15:20:26.262Z · LW(p) · GW(p)
The bit about "trained to act as if" is very astute. The same training can be applied to overvaluing things with little or no apparent value.
comment by Multiheaded · 2012-05-16T12:31:44.492Z · LW(p) · GW(p)
I've been looking up some American figures (radical activists, left-wing theorists, etc.) about whom I knew little, but was surprised to find that they're a byword for evil incarnate among right-wing bloggers. I don't have any political or moral judgment about what I've read in regards to those (or at least let's pretend that I don't), but incidentally I found a nice quote:
If people feel they don’t have the power to change a situation, they stop thinking about it.
Saul Alinsky, Rules for Radicals
Replies from: Multiheaded, TimS, Multiheaded, TheOtherDave, Multiheaded, Jayson_Virissimo↑ comment by Multiheaded · 2012-05-16T12:46:29.106Z · LW(p) · GW(p)
And here's some rather... more spicy stuff from him:
The seventh rule of the ethics of means and ends is that generally success or failure is a mighty determinant of ethics. The judgment of history leans heavily on the outcome of success or failure; it spells the difference between the traitor and the patriotic hero. There can be no such thing as a successful traitor, for if one succeeds he becomes a founding father.
The ninth rule of the ethics of means and ends is that any effective means is automatically judged by the opposition as being unethical.
In this world laws are written for the lofty aim of "the common good" and then acted out in life on the basis of the common greed. In this world irrationality clings to man like his shadow so that the right things get done for the wrong reasons—afterwards, we dredge up the right reasons for justification. It is a world not of angels but of angles, where men speak of moral principles but act on power principles; a world where we are always moral and our enemies always immoral; a world where "reconciliation" means that when one side gets the power and the other side gets reconciled to it, then we have reconciliation.
Always remember the first rule of power tactics: Power is not only what you have but what the enemy thinks you have. The second rule is: Never go outside the experience of your people. When an action or tactic is outside the experience of the people, it results in confusion, fear, and retreat. [...] The third rule is: Whenever possible go outside of the experience of the enemy. Here you want to cause confusion, fear, and retreat.
↑ comment by TimS · 2012-05-19T20:58:00.600Z · LW(p) · GW(p)
Alinsky is interesting to me because it seems like he was one of the first to notice a new, likely to be effective method of social change - and he used up all the effectiveness of the technique.
I wouldn't expect non-violent protest (in America) to be capable of that kind of social change in the future, because those in power have learned how to deal with it effectively (mass arrest for minor infractions and an absolute refusal to engage in political grandstanding). By this point, mass protests are quite ineffective at creating social change here in the US (consider the relative pointlessness of the Occupy movement).
I'm sure there are other examples of techniques of social change becoming totally ineffective as authorities learned how to respond better, but I can't think of any off the top of my head.
↑ comment by Multiheaded · 2012-05-18T09:53:59.061Z · LW(p) · GW(p)
I'd also like to mention that the American Right's treatment of Alinsky is really depressing. Just one random quote: "Alinsky got what he wanted in the form of 90% illegitimacy rates among American blacks and poverty wholly dominated by single mothers."
Really? A guy who taught little people how to stand up for themselves in ruthless tribal politics... somehow single-handedly (or with his evil college student henchmen) caused a complicated social problem that existed since Segregation's end - instead of, I dunno, making communities more unified and more conscious of the war that is life (like trade unions become with good non-dogmatic leadership)?
(Another stunning lie: "Alinsky’s entire adult life was devoted to destroying capitalism in America — an economic system he considered to be oppressive and unjust."
He talked of working within the system and changing it slowly and patiently all the time - for moral as well as tactical reasons. "Those who enshrine the poor or Have-Nots are as guilty as other dogmatists and just as dangerous", he wrote. And: "The political panaceas of the past[2], such as the revolutions in Russia and China, have become the same old stuff under a different name... We have permitted a suicidal situation to unfold wherein revolution and communism have become one. These pages are committed to splitting this political atom, separating this exclusive identification of communism with revolution."
"Let us in the name of radical pragmatism not forget that in our system with all its repressions we can still speak out and denounce the administration, attack its policies, work to build an opposition political base. True, there is government harassment, but there still is that relative freedom to fight. I can attack my government, try to organize to change it. That's more than I can do in Moscow, Peking, or Havana. Remember the reaction of the Red Guard to the "cultural revolution" and the fate of the Chinese college students.[1] Just a few of the violent episodes of bombings or a courtroom shootout that we have experienced here would have resulted in a sweeping purge and mass executions in Russia, China, or Cuba. Let's keep some perspective.")
Sadly, even M.M. chimed in when that hysteria was at its peak around the 2008 elections, with Obama's supposed methodological connection to the evil treasonous commie terrorist trumpeted everywhere on the "fringe" websites. And that's the kind of people most likely to boast of their reasoning and objectivity online?
Mencius also blasted the SDS (Students for a Democratic Society) who used Gandhi's nonviolent tactics to attack the very literal Ku Klux Klan rule in Mississippi during the so-called Freedom Summer, risking life and limb, and a small part of whose members formed the semi-violent terrorist group Weather Underground a decade later.
[1] Yep, the "Cultural Revolution" was less a government-initiated purge in the image of 1937 than it was a little civil war between two slightly different factions of zealots.
[2] For a brilliant example of this madness dressed as conservatism, just look at this idiot. He took Alinsky's sardonic reference to those revolutions' hype as "panaceas" as a sign of approval!
America, Fuck Yeah.
P.S. To be fair, here's a voice of sanity from some libertarian dude, who has the misfortune of posting at a site that even Moldbug rightly called a useless dump.
Replies from: Eugine_Nier, None↑ comment by Eugine_Nier · 2012-05-19T04:51:06.284Z · LW(p) · GW(p)
A guy who taught little people how to stand up for themselves in ruthless tribal politics...
You're confusing standing up for oneself with mass defection from social conventions. The fact that modern blacks have learned to confuse the two is a large part of the reason why they're stuck as an underclass.
somehow single-handedly (or with his evil college student henchmen) caused a complicated social problem that existed since Segregation's end
It wasn't nearly as bad at segregation's end as it is now.
instead of, I dunno, making communities more unified and more conscious of the war that is life
Yes, that's why black communities today consider members who study hard or try to integrate into mainstream society (outside of racial advocacy) as traitors who are "acting white".
Replies from: Multiheaded↑ comment by Multiheaded · 2012-05-19T09:01:40.462Z · LW(p) · GW(p)
I don't know much about any of that, but blaming the first on Alinsky sounds just ridiculous (as well as evokes nasty associations for people who are conscious of antiblack rhetoric throughout U.S. history). Have you looked at his activities? And do you think he only worked with blacks, or resented whites, or what?
http://www.progress.org/2003/alinsky2.htm
The last one might be exaggerated, too. Are successful (non-criminal) black businessmen hated and despised in their communities?
(Overall, you sound a touch mind-killed.)
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-05-19T20:17:42.497Z · LW(p) · GW(p)
I don't know much about any of that, but blaming the first on Alinsky sounds just ridiculous
True, I was exaggerating by blaming him for the effects of the movement he was a part of.
And do you think he only worked with blacks,
No, and I'm sure he did some similar damage to some white communities as well.
Are successful (non-criminal) black businessmen hated and despised in their communities?
Well, it depends on how they succeeded (someone who succeeded in sports or music is more accepted than someone who succeeded through business).
(Overall, you sound a touch mind-killed.)
What about yourself? At the risk of engaging in internet cold reading I think you were so scarred by what you perceive as "right wing technocracy" as expressed by Moldbug and some of his fans on LW that you're desperately looking for any ideology/movement that seems strong enough to oppose it.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-05-20T08:38:32.713Z · LW(p) · GW(p)
Replied elsewhere.
At the risk of engaging in internet cold reading I think you were so scarred by what you perceive as "right wing technocracy" as expressed by Moldbug and some of his fans on LW that you're desperately looking for any ideology/movement that seems strong enough to oppose it.
Well, there's a grain of truth to that, but I'll try not to compromise my ethics in doing so. I'd put it like this: I have my ideology-as-religion (utopian socialism, for lack of a better term) and, like with any other, I try to balance its function of formalizing intuitions versus its downsides of blinding me with dogma - but I'm open to investigating all kinds of ideologies-as-politics to see how they measure against my values, in their tools and their aims.
Also, I consider Moldbug to be relatively innocent in the grand scheme. He says some rather useful things, and anyways there are others whose thoughts are twisted far worse by that worldview I loathe; he's simply a good example (IMO) of a brilliant person exhibiting symptoms of that menace.
Replies from: None↑ comment by [deleted] · 2012-05-22T15:49:06.912Z · LW(p) · GW(p)
My good sir if you are a utopian socialist, it unfortunately seems to me that you are striving to treat a fungal infection while the patient is dying of cancer.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-05-22T17:09:38.953Z · LW(p) · GW(p)
I said it's my ideal of society, not that I'd start collectivizing everything tomorrow! Didn't you link that story, Manna? If you approve of its ideas, then you're at least partly a socialist too - in my understanding of the term. Also, which problems would you call "cancer", specifically?
Replies from: None↑ comment by [deleted] · 2012-05-22T18:37:34.275Z · LW(p) · GW(p)
I said it's my ideal of society, not that I'd start collectivizing everything tomorrow!
Oh I didn't mean to imply you would! But surely you would like to move our current society towards that at some (slow or otherwise) rate, or at least learn enough about the world to eventually get a good plan of doing so.
If you approve of its ideas, then you're at least partly a socialist too
Nearly every human is I think. Socialism and its variants tap into primal parts of our mind and its ethical and political intuitions. And taking seriously most of our stated ethics one is hard pressed to not end up a libertarian or a communist or even a fascist. Fortunately most people don't think too hard about politics. I don't want the conversation to go down this path too far though since I fear the word "socialist" is a problematic one.
Also, which problems would you call "cancer", specifically?
Specifically, the great power structures opposing moves towards your ideal. It almost doesn't matter which ideal, since those that I see would oppose most change, and I have a hard time considering them benevolent. Even milquetoast regular leftism thinks itself fighting a few such forces, and I would actually agree they are there. You don't need to agree with their bogeyman; surely you see some much more potent forces shaping our world, forces that don't seem inherently interested in your ideals, that are far more powerful than... the writer of a photocopied essay you picked up on the street?
As Moldbug himself points out, since the barrier to entry to writing an online blog is so low, absent other evidence, you should take him precisely as seriously as a person distributing such photocopied essays. How many people have read anything by Moldbug? Of those, how many agree? Of those, how many are likely to act? What if you take the entire "alternative" or "dissident" or "new" right and add these people together: do you get a million people? Do you even get 100 thousand? And recall that these are dissidents! By the very nature of society, outcasts, malcontents, and misfits are attracted to such thinking.
While I have no problem with you reading right-wing blogs, even a whole lot of them, since I certainly do, I feel the need to point out that you cite some pretty obscure ones that not even I have heard of, let alone followed. Doesn't that perhaps tell you that you may be operating under a distorted view or intuition of how popular these ideas are? By following their links and comment sections, your brain is tricked into seeing a different reality from the one that exists; take a survey of political opinion in hand and check the scale of the phenomena you find troubling.
Putting things into perspective, it seems a waste to lose sleep over them, does it not? Many of them are intelligent and consistent, but then so is Will Newsome, and I don't spend much time worrying about everlasting damnation. If you want anything that can be described as "utopian" or "socialist", your work is cut out for you; you should be wondering how to move mountains, not how to stomp on molehills.
Replies from: Multiheaded, Eugine_Nier↑ comment by Multiheaded · 2012-05-23T10:44:06.839Z · LW(p) · GW(p)
That's a good comment, thanks. You've slightly misunderstood my feelings and my fears, though. I'll write a proper response.
In brief, I fear alt-right/technocratic ideas not because they're in any way popular or "viral" at present, but because I have a nasty gut feeling that they - in a limited sense - do reflect "reality" best of all, that by most naive pragmatist reasoning they follow from facts of life, and that more and more people for whom naive reasoning is more important than social conventions will start to adopt such thinking as soon as they're alerted to its possibility.
And, crucially, in the age of the internet and such, there will be more and more such under-socialized, smart people growing up and thinking more independently - I fear it could be like the spread of simplified Marxism through underdeveloped and humiliated 3rd-world countries, and with worse consequences. See the Alinsky quote above - "revolution and communism have become one". If rationalism and techno-fascism become "one" like that, the whole world might suffer for it.
Replies from: Karmakaiser, None↑ comment by Karmakaiser · 2012-06-08T15:23:38.744Z · LW(p) · GW(p)
I'm following you from your links in "Nerds are Nuts", and I would like to restate your second paragraph to make sure I have your beliefs right.
The reason the alt-right is scary is not because they are wrong in their beliefs about reality, but because they are correct about the flaws they see in modern leftism, and this makes their proposals all the more dangerous. Just because a doctor can diagnose what ails you, it does not follow that he knows how to treat you. The Alt Right is correct in its diagnosis of societal cancers but their proposals look depressingly closer to leeching than to chemotherapy.
Is this an accurate restatement?
Replies from: Multiheaded↑ comment by Multiheaded · 2012-06-08T15:28:14.218Z · LW(p) · GW(p)
The Alt Right is correct in its diagnosis of societal cancers but their proposals look depressingly closer to necromancy than to chemotherapy.
In all frankness, that's how I bellyfeel it.
Replies from: Karmakaiser↑ comment by Karmakaiser · 2012-06-08T15:38:35.685Z · LW(p) · GW(p)
What positive beliefs about politics do you have in light of your fear of necromancy and cancer? My intuition says some form of pragmatic Burkean conservatism but I don't want to typecast you.
Replies from: Multiheaded, Multiheaded↑ comment by Multiheaded · 2012-06-08T15:46:28.055Z · LW(p) · GW(p)
Well, I respect Burke a lot, but my true admiration goes out to people like Chesterton (a big fan of the French Revolution) and Kropotkin and Orwell and maybe even the better sort of religious leader, like John Paul II - the ones who realize the power and necessity of ideology-as-faith, but take the best from both its fronts instead of being tied down on one side. In short, I love idealism.
(If forced to pick among today's widely used labels, though, I'd be OK with "socialist" and not at all with "conservative".)
Replies from: Karmakaiser↑ comment by Karmakaiser · 2012-06-09T14:39:14.851Z · LW(p) · GW(p)
I thought about this on and off the rest of yesterday and my belief is that these two statements are key.
The Alt Right is correct in its diagnosis of societal cancers [...]
In short, I love idealism.
What I get from this is the divide between the epistemological and the instrumental. Using that classic LessWrong framework, I've come to this as a representation of your views:
In order to understand the world, if you are going to err, err on the side of Cynicism. But, if you are going to live in it and make it better, you have to err on the side of Idealism.
Cynicism is epistemologically useful but instrumentally destructive. (Explained by the fact that you agree with the alt-right's pessimistic view of the world and the reasons things are not as good as they could be.)
Idealism is instrumentally useful but epistemologically destructive. (Explained by the fact that you regard ideology-as-faith as vitally useful, but that doesn't make faith true.)
Is this a fair reading?
Replies from: None, Multiheaded↑ comment by [deleted] · 2012-06-09T15:27:51.058Z · LW(p) · GW(p)
I struggled with something similar a while ago, and Vladimir_M had a different take.
Replies from: Karmakaiser↑ comment by Karmakaiser · 2012-06-09T16:58:17.520Z · LW(p) · GW(p)
I really like summarizing to make sure I get things right. Watch as I prove it!
When dealing with real-world morality and goal-seeking behavior, we seem forced to stare the following facts in the face:
We are very biased.
We could be more rational.
Our rationality isn't particularly good at morality.
Complicating this are the following:
Heuristics generally work. How much rationality do you need to outcompete moral and procedural heuristics?
Just how rational can we get? Can low-IQ people become much more rational, or are we forced to believe in a cognitive and rationality-based elite?
Should we trust moral reasoning or heuristics at all?
I've seen the following conclusions drawn so far by people who take bias seriously: (There may be more; this is just what I've encountered. Also, the first two are just jokes I couldn't resist.)
Lovecraftian: The world is ruled by evil gods beyond imagination. I have seen too much! Go back into the warm milk-bath of ignorance! Chew your cud, you cows, and never think of the slaughter to come!
Existentialism: Everything sucks forever, but let's not kill ourselves, because it's better to push a rock up a mountain or something. We can never know anything and nothing can ever mean anything, so we should talk about it forever. Give me Tenure! Linkin Park and Tenure!
Moldbuggery: Bias is bad, real fucking bad. The current systems don't encourage rationality all that much either. Only a cognitive elite can ever become debiased enough to run things and they should only be trusted if we get a system that aligns with the interests of the subjects. (Ex: Aurini, GLADOS, Konkvistador, Moldbug, Nick Land)
[I had a section on Robin Hanson, but I don't think I understand it well enough to summarize on reflection, so "This Page Left Blank"]
Old Whiggish: We are very biased and ought to trust intuition, tradition and reason in roughly equal measure. We prize reason too much, and so people who try to be perfectly rational are worse reasoners than those who allow a little superstition in their lives. Our heuristics are better than we think. If it works, we should keep it even if it isn't true. (Ex: Taleb, Derbyshire, Burke, Marcus Aurelius. Almost Jonathan Haidt post-"The Righteous Mind", but not quite)
Rational Schizophrenia: A pure cynicism about how things are should be combined with an idealism about how to act. [See above for Multiheaded's advice]
Yudkowskyianism: Bias is very bad, but our prospects for debiasing are less pessimistic than either of those make them out to be. Rationality is like martial arts: anyone can learn to use leverage regardless of cognitive strength. Though there are clear ways in which we fail, now that we have Bayesian probability theory derived from pure logic, we know how to think about these issues. To abuse a C.S. Lewis quote: "The Way has not been tried and found wanting; it has been found difficult and left untried." Try it before giving up, because something is only undoable until somebody does it. (Ex: Lukeprog, Yudkowsky)
How does that strike you as the current "rationality landscape"? Again, I'm largely new here as a community member, so I could be mischaracterizing or leaving ideas out.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-06-13T10:38:13.731Z · LW(p) · GW(p)
The first glance, as usual, reveals interesting things about one's perception:
Moldbuggery: Bias is bad, real fucking bad. The current systems don't encourage rationality all that much either. Only a cognitive elite can ever become debased enough to run things
That's honestly how I read it at first. Ha.
BTW Konkvistador belongs in better company (nothing against the others); I've come to admire him a little bit and think he's much wiser than other fans of Moldbug.
Oh, and speaking of good company... "pure cynicism about how things are combined with an idealism of how to act" - that sounds like the ethics that Philip K. Dick tentatively proposes in his Exegesis; shit's fucked, blind semi-conscious evil rules the world, but there's a Super-Value to being kind and human even in the face of Armageddon.
Replies from: Karmakaiser, None↑ comment by Karmakaiser · 2012-06-13T12:55:35.884Z · LW(p) · GW(p)
I asked Konkvistador in IRC if he endorsed the Moldbuggery statement, and he liked it. But I think I want to decontextualize the attitudes toward bias and debiasing so I can better fit different authors/posters together. :/
I've come up with fatalism/pessimism/elitism/rational schizophrenia/optimism. With that breakdown I can put Konkvistador in the same category as Plato. I love the name rational schizophrenia too much to give it up.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-06-13T12:56:53.755Z · LW(p) · GW(p)
I love the name rational schizophrenia too much to give it up.
I liked it too, thanks! :)
↑ comment by Multiheaded · 2012-06-09T15:19:16.587Z · LW(p) · GW(p)
Huh... yeah! I'd sign under that. And, when you phrase it so nicely, I'm sure that a few others here would.
Replies from: TimS↑ comment by TimS · 2012-06-10T00:36:14.580Z · LW(p) · GW(p)
I'd endorse it too (with appropriate caveats about what part of the alt-right I struggle to reject), but the meta-ethical point Karmakaiser is making doesn't help decide what ethical positions to adopt - only what stance one should take towards one's adopted moral positions.
↑ comment by Multiheaded · 2012-06-08T15:53:05.315Z · LW(p) · GW(p)
Also, there's an interesting writer with agreeable sentiments coming up on my radar after 30 seconds of googling. His name's Garret Keizer.
http://www.motherjones.com/politics/2005/03/left-right-wrong
"THE REAL PROBLEM OF OUR TIME," George Orwell wrote in 1944, "is to restore the sense of absolute right and wrong when the belief that it used to rest on -- that is, the belief in personal immortality -- has been destroyed. This demands faith, which is a different thing from credulity." It also demands conviction, which is a different thing from wanting to win at any price. The real problem of the left in our time is to restore those absolutes and to find that faith.
Of course, Orwell was not talking about religious faith. Nor am I. Ironically, one of the treasures bequeathed to us by the world's ethical religions is the self-effacing hint that the basis of morality does not have to be religious. "Whatever you would have others do to you, do to them." In other words, the most reliable sense of right and wrong comes from your own skin, your own belly, your own broken heart.
That said, religion can provide some useful insights, if only to debunk a few of the notions that are being foisted upon us in the name of religion. The Christian right preaches an extremely selective version of its own creed, long on Leviticus and short on Luke, with scant regard for the Prophets and no end of veneration for the profits. Its message goes largely unchallenged, partly through general ignorance of biblical tradition and partly because liberal believers and nonbelievers alike wish to maintain a respectable distance from the rhetoric of fundamentalism. This amounts to a regrettable abandonment of tactics. One of Saul Alinsky's "Rules for Radicals" was "Make the enemy live up to their own book of rules" -- a tough act to pull off if one doesn't even know the rule book.
Shit, I'd better start reading this guy!
↑ comment by [deleted] · 2012-05-23T14:48:58.047Z · LW(p) · GW(p)
I see, so this is why you often bring up such discussions on LessWrong? Because you see it as a repository of smart, under-socialized, independent thinkers? I do to a certain extent, and in this light your most recent writing appears much more targeted rather than an overblown obsession.
In brief, I fear alt-right/technocratic ideas not because they're in any way popular or "viral" at present, but because I have a nasty gut feeling that they - in a limited sense - do reflect "reality" best of all, that by most naive pragmatist reasoning they follow from facts of life, and that more and more people for whom naive reasoning is more important than social conventions will start to adopt such thinking as soon as they're alerted to its possibility.
Do you think this might already be happening? The naive, social-conventions-ignoring utilitarianism we often find ourselves disagreeing with seems to be remarkably widespread among baseline LessWrongers. One merely needs to point out the "techno-fascist" means and how well they might work, and I can easily see well over a third embracing them, and even more, should criticism of "Cathedral" economic and political theory become better understood and more widespread.
But again, remember that the "alternative right" has plenty of anti-epistemology and mysticism springing from a fascination with old fascist and, to a lesser extent, New Left intellectuals; this will, I think, restrain them from fully coalescing around the essentially materialist ethos that you accurately detect is sometimes present.
And even if some of this does happen, either from the new-right people or from "rationalists" and the cognitive elite, tell me honestly: would such a regime and civilization have better or worse odds of creating FAI or surviving existential risk than our own?
And, crucially, in the age of the internet and such, there will be more and more such under-socialized, smart people growing up and thinking more independently
But recall what Vladimir_M pointed out: in the age of the internet, in order to gain economic or political power one must be more conformist than before, because any transgression is one Google search away. Doesn't this suggest there will be some stability in the social order for the foreseeable future? Or that if change does happen, it will only occur if a new ideal is massively popular, so that "everyone" transgresses in its favour? Then punishment via hiring practices, reputation or law becomes ineffective.
Replies from: Multiheaded, Multiheaded↑ comment by Multiheaded · 2012-05-23T17:10:10.278Z · LW(p) · GW(p)
Also: a third of LWers embracing techno-fascism? Is that a reference to a third of the angels siding with Lucifer in Paradise Lost? Or was this unintended, a small example of our narrative patterns being very similar from the Old Testament to Milton to now?
Replies from: None↑ comment by Multiheaded · 2012-05-23T15:54:49.462Z · LW(p) · GW(p)
tell me honestly would such a regime and civilization have better or worse odds at creating FAI or surviving existential risk than our own?
Surviving existential risk, probably. But, unlike today's inefficient, corrupt, narrow-minded liberal oligarchy, such a regime would - precisely because of its strengths and the virtues of the people who'd rise to the top of it (like objectivity, dislike of a "narrative" approach to life and a cynical understanding of society) - be able to make life hardly worth living for people like us. I don't know whether the decrease in extinction risk is worth the vastly increased probability of a stable and thriving dystopia, where a small managerial caste is unrestrained and unchallenged. Again, democracy and other such modern institutions, pathetic and stupid as they might be from an absolute standpoint, at least prevent real momentous change.
And their "F"AI could well implement many things we'd find awful and dystopian, too (e.g., again, a clean, ordered society where slavery is allowed and even children are legally chattel slaves of their parents, to be molded and used freely) - unlike something like this happening with our present-day CEV, it'd be a feature, not a bug. In short, it's likely a babyeater invasion in essence.
(more coming)
Replies from: Athrelon, None↑ comment by Athrelon · 2012-06-12T19:35:06.878Z · LW(p) · GW(p)
I want to hear more about the Moldbuggian dystopia. Should make excellent SF.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-06-13T03:42:20.866Z · LW(p) · GW(p)
I'm writing it! In Russian, though.
↑ comment by [deleted] · 2012-05-23T16:14:51.557Z · LW(p) · GW(p)
I think your idea that for people's lives to be worth living they need to have certain beliefs is one of your ugliest recurring themes.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-05-23T20:26:38.104Z · LW(p) · GW(p)
I'm a moral anti-realist through and through, despite believing in god(s). I judge everyone and their lives from my own standpoint. Hell, a good citizen of the Third Reich might've found my own life pointless and unworthy of being. Good thing that he's shot or burnt, then. There's no neutral evaluation.
Replies from: thomblake, None↑ comment by thomblake · 2012-05-23T20:37:18.934Z · LW(p) · GW(p)
I judge everyone and their lives from my own standpoint... There's no neutral evaluation.
You sound like a subjectivist moral realist.
Possibly even what we tend to call "subjectively objective" (I think we should borrow a turn of phrase from Epistemology and just call it subject-sensitive invariantism).
↑ comment by Eugine_Nier · 2012-05-23T05:39:12.442Z · LW(p) · GW(p)
Specifically the great power structures opposing moves towards your ideal. It almost doesn't matter which ideal, since those that I see would oppose most change
Keep in mind that while every improvement is a change, most potential changes are not improvements and for most ideals, attempting to implement them leads to total disaster.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-05-23T10:23:52.137Z · LW(p) · GW(p)
Yep. Both he and I have stressed the first half of that several times in one form or another. However, it's nonsense to say that trying to implement ideals is bad, period; the problem here is that humans are very bad at some things that would be excellent in themselves - like a benevolent dictatorship. If, for example, we had some way to guarantee that one would stay benevolent, then clearly all other political systems should get the axe - to a utilitarian, there's no more justification for their evils! But in reality attempts at one usually end in tears.
However, trying to, say, keep one's desk neat & organized is also an ideal, yet many people, particularly those with OCD, are quite good at implementing it. It is clear, then, that whatever we do, we should first look to psychological realities, and manipulate ourselves in such a way that they assist our stated goals or just don't hinder them as much.
↑ comment by [deleted] · 2012-05-18T14:06:35.237Z · LW(p) · GW(p)
You left these quotes unsourced:
"Alinsky got what he wanted in the form of 90% illegitimacy rates among American blacks and poverty wholly dominated by single mothers."
"Alinsky's entire adult life was devoted to destroying capitalism in America — an economic system he considered to be oppressive and unjust."
Replies from: Multiheaded
↑ comment by Multiheaded · 2012-05-18T14:14:05.945Z · LW(p) · GW(p)
They're not from "real" articles by "real" journalists/propagandists or whatever, just from random blogging idiots. I simply picked a couple of representative ones.
Quote 1: http://ricochet.com/main-feed/Buckley-vs.-Alinsky (see comments)
Goddamn, the second guy is just too dumb to breathe, but the first one freaks me out. Apparently he's one of those peculiar Catholics who never heard of the New Testament and its values. And maybe a "rationalism"-worshipper, too... those traits, as I've seen in that corner of the blogosphere, aren't as antagonistic as one might assume.
(Yes, Buckley might've been a decent man, but he shouldn't have gone on TV with that voice. Alinsky's is little better, but at least he sounds remotely like a public speaker. This might be just some kinda distortion on the record, I dunno.)
Replies from: None↑ comment by [deleted] · 2012-05-18T17:03:07.747Z · LW(p) · GW(p)
They're not from "real" articles by "real" journalists/propagandists or whatever, just from random blogging idiots. I simply picked a couple of representative ones.
Representative of what? Why not give representative quotes from the very best and brightest Alinsky critics?
For instance:
Mr. Buckley: Well, I think that you've touched on a point that's extremely interesting I would like to develop because you do have fascinating general notions. For instance, you said I'll steal before I'll take charity.
Mr. Alinsky: Yeah.
Mr. Buckley: Now, suppose I'm the person you're going to steal it from -- would you consult my feelings if I were to say to you, before stealing from me -- please, won't you just take it? Or is it the act of stealing that gives you the satisfaction that you require?
Mr. Alinsky: No, of course not. You know better than that. It isn't the act of stealing. (Rest of his remark blurred in overlap.)
Mr. Buckley: Well, then it is charity -- why don't you take charity then?
Mr. Alinsky: Well, you know what I meant by charity -- just going to, getting welfare handouts and --
Mr. Buckley: Welfare handouts are the products (blurred in overlap) of philanthropy. Your difficulty, it seems to me, is that you may be premature --
Mr. Alinsky: Well, you may not have contradictions. Of course, I have -- life is just a constellation of contradictions --
Mr. Buckley: Now, I think you're very cynical. I don't think you think you are. But you are. You really reason pretty much the way a blind man does about sex (inaudible) pleasure for it, it's associated with the act of rape. You feel that only the exercise of power can get you certain, certain usufructs of life, and that, therefore, you must either take it from somebody because you will not permit that society give it to you.
Later:
Mr. Buckley: But, I think what's most interesting about yourself -- at least to most people -- is this distinctive appeal that you have to certain types of people who recognize there is a problem of the poor. You appeal to some of them because you have this disdain for wel-fare-ism (Mr. Buckley draws this out) as suggested by that ultimatum of yours that you'd rather steal than receive welfare. Now, this appeals to a lot of people sort of Conservative-minded, who are against welfare because they do believe that there is going on in this country a sort of institutionalization of welfare -- that we ought to get out of it and that to be essentially human, you've got to make your own way. So you appeal to them. On the other hand, you appeal -- they would be Conservative in a way -- you appeal also to Liberals and radicals because yours is a highly non-rhetorical approach. You actually want to organize the poor, and you want to cause them to demand things. And you seem to be utterly either unconscious or, if not unconscious, at least insensible to the normal niceties of approach. When you want something, you simply want it.
Later
Mr. Buckley: You're very much like Ayn Rand, you see.
Mr. Alinsky: That's not so.
Mr. Buckley: Only that which I can personally get belongs to me and nobody's going to help me to get it. I think that America, viewed as a nation, is the most humane nation in the experience of the world. I think there is more genuine concern for the poor, for the underprivileged, for the weak in America than we've ever seen in the history of the world. And I see you trying to fire and establish -- and disestablish the order that made that possible.
Replies from: Multiheaded
↑ comment by Multiheaded · 2012-05-20T08:51:26.867Z · LW(p) · GW(p)
That's decent and interesting criticism. Indeed, Alinsky appears to have been a hardcore Syndicalist, and both Buckley and I are to the right of him, although Buckley is a lot further. However, that last one is very dubious to me:
I think that America, viewed as a nation, is the most humane nation in the experience of the world. I think there is more genuine concern for the poor, for the underprivileged, for the weak in America than we've ever seen in the history of the world. And I see you trying to fire and establish -- and disestablish the order that made that possible.
Since Marx, leftists have probably heard this kind of argument in most debates: advanced civilization generates - or will eventually - so much charity in all its forms (through both tradition and individual kindness) as to cure most of the lower classes' problems and thus make many concerns of unfairness and inequality irrelevant.
Alinsky clearly understood the problem with that: charity is in itself a status race and a status pump; it can be wielded with malice and used to keep people down. Just look at Africa and how we're trying to drown it in money instead of coming over there en masse and applying real help, manually. (Which is also problematic status-wise, but at least it might actually improve a society.)
↑ comment by Eugine_Nier · 2012-05-20T20:56:22.150Z · LW(p) · GW(p)
The argument is not that, for example, the United States is perfect. It's that whatever Marxists replace it with will be worse.
Alinsky clearly understood the problem with that: charity is in itself a status race and a status pump; it can be wielded with malice and used to keep people down.
A lot of people "understand" this problem in the sense that they know it exists in the existing system. Unfortunately, they frequently have no better understanding of the causes and potential solutions than some version of "the current system has these problems because it is evil/corrupt, once we replace it with our new good/pure system these problems will magically go away".
Just look at Africa and how we're trying to drown it in money instead of coming over there en masse and applying real help, manually. (Which is also problematic status-wise, but at least it might actually improve a society.)
That's what we were doing until leftists forced us to stop on the grounds we were "oppressing" them.
Note: If you think colonialism was indeed bad, what makes you think doing something similar again will turn out any different?
↑ comment by [deleted] · 2012-05-20T23:18:57.252Z · LW(p) · GW(p)
I'm modestly familiar with the works of Marx, but I don't know what "syndicalism" is. And I don't know what proposal you're making, or alluding to, with this:
Just look at Africa and how we're trying to drown it in money instead of coming over there en masse and applying real help, manually.
Sounds ominous!
Replies from: TimS↑ comment by TheOtherDave · 2012-05-16T14:31:20.801Z · LW(p) · GW(p)
Relatedly, if we don't want to think about a situation, we frequently convince ourselves that we're powerless to change it.
Less relatedly, I am growing increasingly aware of the gulf between what is implied by talking about "people" in the first person plural, and talking about "people" in the third person plural.
↑ comment by Multiheaded · 2012-05-16T13:21:45.952Z · LW(p) · GW(p)
Oh lawd!
After organizing FIGHT (an acronym for Freedom, Independence [subsequently Integration], God, Honor, Today) in Rochester, New York,[9] Alinsky once threatened to stage a "fart in" to disrupt the sensibilities of the city's establishment at a Rochester Philharmonic concert. FIGHT members were to consume large quantities of baked beans after which, according to author Nicholas von Hoffman, "FIGHT's increasingly gaseous music-loving members would hie themselves to the concert hall where they would sit expelling gaseous vapors with such noisy velocity as to compete with the woodwinds."[10] Satisfied with the reaction to his threat, Alinsky would later threaten a "piss in" at Chicago O'Hare Airport. Alinsky planned to arrange for large numbers of well dressed African Americans to occupy the urinals and toilets at O'Hare for as long as it took to bring the city to the bargaining table. According to Alinsky, once again the threat alone was sufficient to produce results.[10] Conceding that his tactics were "absurd," the community activist rejected the contention that they were frivolous, arguing "[w]hat oppressed person doesn't want, literally or figuratively, to shit on his oppressors? [At the Rochester Philharmonic] was the closest chance they'd have. Such tactics aren't just cute; they can be useful in driving your opponent up the wall. Very often the most ridiculous tactic can prove the most effective."
Now just imagine what Anonymous could've done today with him around!
Alinsky planned to arrange for large numbers of well dressed African Americans to occupy the urinals and toilets at O'Hare for as long as it took to bring the city to the bargaining table.
I weakly suspect that this was in fact the inspiration for /b/'s infamous "Pool's Closed" raids.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-05-18T02:55:05.563Z · LW(p) · GW(p)
It's amazing what you can accomplish if you can convince a large enough group of people that defecting from social norms is a good idea.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-05-18T11:11:04.572Z · LW(p) · GW(p)
Yep. Maybe we'd do well to experience something like that at least once and learn from it, either in an apolitical act or acting with one's preferred tribe. My BF is a friend/assistant of a weird Russian actionist group, so he knows a bit about this kind of stuff. He tells me it feels liberating in a way.
(On that note, Palahniuk-inspired fight clubs are also somewhat popular as a transgressive/liberating activity among Russian young men. There's graffiti advertising one all around my middle-class neighbourhood. I'm not gonna try one, though; I can withstand some hardship but loathe physical pain.)
Replies from: khafra↑ comment by khafra · 2012-05-18T13:00:06.011Z · LW(p) · GW(p)
If you're interested in the general idea, but don't want to just go to some basement and get beat up in an uncontrolled environment, Система Кадочникова (the Kadochnikov System) works up in a slow and controlled manner to getting punched without it bothering you unduly; I think the training can invoke many of the same feelings (although I've never done an actual fight club, for basically your reasons).
Replies from: Multiheaded↑ comment by Multiheaded · 2012-05-18T13:23:54.186Z · LW(p) · GW(p)
Heh, thanks. I'd rather look into some more basic street-fighting classes, though. To tell the truth, martial arts scare me a little bit with how much spiritual dedication they demand.
I've got a few simple self-defence and outdoor survival lessons from a 3-month class I attended after high school; it was pretty neat. Then, half a year ago, when I got robbed at knifepoint by some homeless drunk, at least I didn't do anything stupid. He wasn't all that big and had a really tiny penknife, while I had a very thick winter coat, and in retrospect I could've stunned him with an uppercut - we practiced sudden attacks in the class, as the lesson was basically "A serious fight lasts for one blow"... but with the fear and the adrenaline and the darkness I perceived the knife to be about 20cm long - 3 times larger than it really was - and decided not to resist. He pressed me against a wall, told me to give him my phone and dashed.
I went home, called the police, and to my surprise they got him that very night, as they had his picture from a drunken brawl before; he hadn't even pawned my phone. I was really calm and collected while dealing with the cops and all that (all in all, they sure surpassed my rather low expectations of the Russian police), but later I felt rather sick... a little like being raped, I presume. I got over it quickly, though; I feel a little sorry for that shit-stain and his inhuman life.
(By the way, that bit with the knife sure was funny in retrospect. When the cops showed me the evidence and asked to confirm it, I initially said something like "Well, yeah, the blade had the same shape... but it was at least two times bigger, I swear! Might he have another knife or something?" The detective was kinda amused.)
↑ comment by Jayson_Virissimo · 2012-05-16T13:06:49.792Z · LW(p) · GW(p)
If people feel they don’t have the power to change a situation, they stop thinking about it.
Interpreted as a conditional statement this is almost certainly false (I completed a degree in political science even though half-way through I understood that my trying to achieve "positive change" was hopeless). What do you think he means? How could we test such a claim?
Replies from: Multiheaded, NancyLebovitz↑ comment by Multiheaded · 2012-05-16T13:12:01.283Z · LW(p) · GW(p)
One rather large-scale example, discussed in this community since the beginning of time: deathism and the general public's attitude (or lack thereof) to cryonics.
"You know, given human nature, if people got hit on the head by a baseball bat every week, pretty soon they would invent reasons why getting hit on the head with a baseball bat was a good thing." - Eliezer
Replies from: Jayson_Virissimo↑ comment by Jayson_Virissimo · 2012-05-16T15:57:37.684Z · LW(p) · GW(p)
Good example.
↑ comment by NancyLebovitz · 2012-05-16T14:22:40.674Z · LW(p) · GW(p)
I think the quote can be interpreted as likely to be true of many people rather than absolutely true of everyone.
Replies from: Jayson_Virissimo↑ comment by Jayson_Virissimo · 2012-05-16T15:58:26.576Z · LW(p) · GW(p)
Yes, this appears to be the most charitable interpretation.
comment by DSimon · 2012-05-11T04:31:34.161Z · LW(p) · GW(p)
"Our gods are dead. Ancient Klingon warriors slew them a millenia ago; they were more trouble than they were worth."
- Lt. Cmdr. Worf, regarding Klingon beliefs
↑ comment by Fyrius · 2012-05-14T13:04:51.331Z · LW(p) · GW(p)
As badass as this bit of Klingon mythology may be, I'm not sure I see the relevance to rationalism. If I understand correctly, then what was considered "more trouble than they were worth" were the actual, really existing gods themselves, and not the Klingons' belief in imagined gods.
Replies from: DSimon↑ comment by DSimon · 2012-05-14T21:44:01.183Z · LW(p) · GW(p)
I was thinking in terms of moral realism and appropriate ambition rather than atheism or epistemology. The right response to a tyrannical or dangerous deity is to find a way to get rid of it if possible, rather than coming up with reasons why it's not really so bad.
Replies from: Fyriuscomment by Old_Rationality · 2012-05-02T11:52:19.104Z · LW(p) · GW(p)
I can state very positively why it was that, after having twice refused to utilise General Gordon's services, I yielded on being pressed a third time by Lord Granville. I believed that at the time I stood alone in hesitating to employ General Gordon... With this array of opinion against me, I mistrusted my own judgement. I did not yield because I hesitated to stand up against the storm of public opinion. I gave a reluctant assent, in reality against my own judgement and inclination, because I thought that, as everybody differed from me, I must be wrong. I also thought I might be unconsciously prejudiced against General Gordon from the fact that his habits of thought and modes of action in dealing with public affairs differed widely from mine. In yielding, I made a mistake which I shall never cease to regret.
Evelyn Baring, Earl of Cromer, Modern Egypt
comment by kalla724 · 2012-05-02T01:35:54.030Z · LW(p) · GW(p)
The possession of knowledge does not kill the sense of wonder and mystery. There is always more mystery.
-- Anais Nin
Replies from: MixedNuts↑ comment by MixedNuts · 2012-05-02T16:28:18.498Z · LW(p) · GW(p)
This misses the point. There shouldn't be any mystery left. And that'll be okay.
Replies from: DanArmak, chaosmosis↑ comment by DanArmak · 2012-05-02T16:35:20.830Z · LW(p) · GW(p)
With perfect knowledge there would be no mystery left about the real world. But that is not what "sense of wonder and mystery" refers to. It describes an emotion, not a state of knowledge. There's no reason for it to die.
Replies from: bmschech↑ comment by bmschech · 2012-05-10T15:57:46.220Z · LW(p) · GW(p)
Nicely said. I'd like to add that perfect knowledge can only be of the knowable. The non-knowable is irreducibly wondrous and mysterious. The ultimate mystery, why there is something rather than nothing, seems unknowable.
Replies from: DanArmak↑ comment by DanArmak · 2012-05-11T16:32:41.975Z · LW(p) · GW(p)
There's plenty of inherently unknowable things around. For instance, almost all real numbers are uncomputable and even undefinable in any given formal language.
↑ comment by chaosmosis · 2012-05-02T16:39:22.562Z · LW(p) · GW(p)
You can't stop looking for flaws even after you've found all of them, otherwise you might miss one.
Also: http://en.wikipedia.org/wiki/Unexpected_hanging_paradox
Replies from: bmschech↑ comment by bmschech · 2012-05-10T16:09:15.912Z · LW(p) · GW(p)
Not sure why you brought this up, but as long as you did, I'd like to share my resolution of this paradox. Basically, it hinges on the definition of a surprise. If the prisoner is spared on Wednesday, he will know that he is doomed on Thursday or Friday, but is ignorant of which of these possibilities is true. So when Thursday dawns, whatever outcome obtains will be surprising. To say we are surprised by an event is simply to say that we cannot predict it in advance. Therefore, you can only reason about surprise looking forward in time, not backward. Or look at it this way. What if the judge told the prisoner that he was going to draw a slip of paper from a hat containing five slips, labeled Monday through Friday, and execute him on that day. Whatever day is chosen will be a surprise to both judge and prisoner.
comment by William_Kasper · 2012-05-06T19:19:36.222Z · LW(p) · GW(p)
Replies from: Nominull, army1987
It's weird how proud people are of not learning math when the same arguments apply to learning to play music, cook, or speak a foreign language.
↑ comment by Nominull · 2012-05-07T06:12:39.174Z · LW(p) · GW(p)
I think that the relevant distinction is "is it really horribly unpleasant and I make no progress no matter how long I spend and I don't find correct output aesthetically pleasing."
"Weird" is a statement about your understanding of people's pride, not a statement about people's pride.
Replies from: TimS↑ comment by TimS · 2012-05-08T01:11:52.281Z · LW(p) · GW(p)
Proud of not learning math includes math like algebra or conversion of units. That sort of math, which might be taught in elementary school, is practically useful in daily life. Being proud of not knowing that kind of math is profoundly anti-learning. The attitude applies equally to learning anything, from reading to history to car mechanics.
Replies from: sixes_and_sevens, Nominull↑ comment by sixes_and_sevens · 2012-05-08T11:08:19.336Z · LW(p) · GW(p)
Something a not-especially-mathsy friend of mine said a while back:
It makes me sad when I see or hear people say 'algebra and trig are pointless, you never use them in real life'.
Because what this says to me is 'I make life more difficult for myself because I don't understand how to make it simpler'.
↑ comment by Nominull · 2012-05-08T17:21:54.193Z · LW(p) · GW(p)
Then how do you explain, in your model, the comic's implicit observation that people do not apply this same attitude to learning to play music, cook, or speak a foreign language? Let's try to fit reality here, not just rag on people for being "anti-learning" in the same way others might speak of someone being "anti-freedom".
Replies from: TimS↑ comment by TimS · 2012-05-08T18:24:13.414Z · LW(p) · GW(p)
Briefly, cognitive bias of some kind. Compartmentalization. Belief that what I like and enjoy is good and worthwhile, and what I dislike is bad and useless. It's the failure to apply the lesson from a favored domain to an unfavored one that is the worthwhile point of the author's statement.
↑ comment by A1987dM (army1987) · 2012-05-07T15:26:54.635Z · LW(p) · GW(p)
Not many people are required to take cooking classes, hardly anyone goes 20 years after graduating without ever needing to cook, and there are lots of people “proud” of not learning foreign languages. And playing music is higher-status than doing maths.
comment by CharlieSheen · 2012-05-15T11:17:25.376Z · LW(p) · GW(p)
pattern recognition is a valuable aid to anyone navigating the chaos of the real world, their denials they engage in such nefarious human-like activity to the contrary notwithstanding.
--Heartiste (the blogger formerly known as Roissy), on useful stereotypes. Source.
Replies from: MixedNutscomment by hairyfigment · 2012-05-04T19:14:10.997Z · LW(p) · GW(p)
It has been my experience that most problems in life are caused by a lack of information...When two friends get mad at each other, they usually do it because they lack information about each other's feelings. Americans lack information about Librarian control of their government. The people who pass this book on the shelf and don't buy it lack information about how wonderful, exciting, and useful it is.
-- Alcatraz Smedry in Alcatraz versus the Evil Librarians, by Brandon Sanderson.
Replies from: gwillencomment by [deleted] · 2012-05-29T05:32:35.606Z · LW(p) · GW(p)
I can tell how old I am because I can remember a day long ago when journalists would describe a book as "provocative" and "controversial" to whet readers' interest in the book. Today, the words "provocative" and "controversial" have become code for Move Along, Nothing to See Here.
--Steve Sailer, commenting on cultural changes and words
Replies from: simplicio↑ comment by simplicio · 2013-05-07T17:48:56.535Z · LW(p) · GW(p)
Interesting. But those words are still used to promote. Impossible for me to say whether they are used that way less now than before... I guess I will take Sailer's word for it?
comment by NexH · 2012-05-12T08:45:09.994Z · LW(p) · GW(p)
From Terry Pratchett's Unseen Academicals (very minor/not significant spoilers):
Replies from: Desrtopa
‘You had to find the truth for yourself. That is how we all find the truth.’
‘And if the truth is terrible?’
‘I think you know the answer to that one, Nutt’ said the voice of Ladyship.
‘The answer is that, terrible or not, it is still the truth,’ said Nutt.
‘And then?’ said her voice, like a teacher encouraging a promising pupil.
‘And then the truth can be changed’ said Nutt.
↑ comment by Desrtopa · 2012-05-14T04:06:14.355Z · LW(p) · GW(p)
If you feel the need to put the quote in rot13 to avoid spoilers, it's probably not worth posting at all (I don't think that this quote spoils anything significant about the plot in any case.)
Replies from: NexH↑ comment by NexH · 2012-05-14T19:10:32.837Z · LW(p) · GW(p)
I see. I think the quoted text is very representative of rational thinking, but since I personally don't like spoilers/previews very much, I opted for caution and rot13ed it. My thinking was that an unseen quote can be seen later if so wished, but it is harder to forget something already read. But perhaps for most people the discordance of seeing a lone rot13ed text has a negative utility that is lower than that of reading a very minor spoiler/preview? If that is so, I will unrot13 it.
In any case, thank you for your input. For now, I will edit the parent so that it is clear that the severity of the spoiler is very low.
comment by [deleted] · 2012-05-07T21:06:16.555Z · LW(p) · GW(p)
It is not merely that a stock of true beliefs is vastly more likely to be helpful than a stock of false ones, but that the policy of aiming for the truth, of having and trying to satisfy a general (de dicto) desire for the truth—what we might simply call "critical inquiry"—is the best doxastic policy around. Anything else, as Charles Peirce correctly insists, lead to "a rapid deterioration of intellectual vigor."—Richard Joyce, The Myth of Morality (2001) p. 179.
comment by kdorian · 2012-05-05T14:38:28.832Z · LW(p) · GW(p)
You know, I once read an interesting book which said that, uh, most people lost in the wilds, they, they die of shame. Yeah, see, they die of shame. 'What did I do wrong? How could I have gotten myself into this?' And so they sit there and they... die. Because they didn't do the one thing that would save their lives. Thinking.
- David Mamet
Replies from: NancyLebovitz, RobinZ↑ comment by NancyLebovitz · 2012-05-07T20:12:56.461Z · LW(p) · GW(p)
ETA: Gwern checked the book and posted the relevant section below. I got it backwards-- seven to twelve are the ages most likely to die. Six and under are more likely to survive.
Actually, there's something rather like that in Deep Survival, a book that's mostly about wilderness survival. IIRC, six to twelve year olds are more likely to survive than adults, and it's because of less fear of embarrassment.
However, the author didn't go into a lot of details about which mistakes the adults make-- I think it was that the kids seek cover, but the adults make bad plans and insist on following through with them.
Replies from: gwern↑ comment by gwern · 2012-05-07T21:48:45.261Z · LW(p) · GW(p)
Downloading the book, pg236, you forgot one interesting detail:
One of the many baffling mysteries concerns who survives and who doesn't. "It's not who you'd predict, either," Hill, who has studied the survival rates of different demographic groups, told me. "Sometimes the one who survives is an inexperienced female hiker, while the experienced hunter gives up and dies in one night, even when it's not that cold. The category that has one of the highest survival rates is children six and under, the very people we're most concerned about." Despite the fact that small children lose body heat faster than adults, they often survive in the same conditions better than experienced hunters, better than physically fit hikers, better than former members of the military or skilled sailors. And yet one of the groups with the poorest survival rates is children ages seven to twelve. Clearly, those youngest children have a deep secret that trumps knowledge and experience.
Scientists do not know exactly what that secret is, but the answer may lie in basic childhood traits. At that age, the brain has not yet developed certain abilities. For example, small children do not create the same sort of mental maps adults do. They don't understand traveling to a particular place, so they don't run to get somewhere beyond their field of vision. They also follow their instincts. If it gets cold, they crawl into a hollow tree to get warm. If they're tired, they rest, so they don't get fatigued. If they're thirsty, they drink. They try to make themselves comfortable, and staying comfortable helps keep them alive. (Small children following their instincts can also be hard to find; in more than one case, the lost child actually hid from rescuers. One was afraid of "coyotes" when he heard the search dogs barking. Another was afraid of one-eyed monsters when he saw big men wearing headlamps. Fortunately, both were ultimately found.) The secret may also be in the fact that they do not yet have the sophisticated mental mapping ability that adults have, and so do not try to bend the map. They remap the world they're in.
Children between the ages of seven and twelve, on the other hand, have some adult characteristics, such as mental mapping, but they don't have adult judgment. They don't ordinarily have the strong ability to control emotional responses and to reason through their situation. They panic and run. They look for shortcuts. If a trail peters out, they keep going, ignoring thirst, hunger, and cold, until they fall over. In learning to think more like adults, it seems, they have suppressed the very instincts that might have helped them. But they haven't learned to stay cool. Many may not yet be self-reliant.
http://wiki.lesswrong.com/wiki/Valley_of_bad_rationality ?
Replies from: Multiheaded↑ comment by Multiheaded · 2012-05-08T08:10:20.544Z · LW(p) · GW(p)
All I can say at the moment is WOW.
comment by J_Taylor · 2012-05-01T14:31:32.891Z · LW(p) · GW(p)
Spider: The point is, the only real tools we have are our eyes and our heads. It's not the act of seeing with our own eyes alone; it's correctly comprehending what we see.
Channon: Treating life as an autopsy.
Spider: Got it. Laying open the guts of the world and sniffing the entrails, that's what we do.
-- Warren Ellis, Transmetropolitan
comment by Multiheaded · 2012-05-14T11:41:31.209Z · LW(p) · GW(p)
"Nothing matters at all. Might as well be nice to people. (Hand out your chuckles while you can.)"
- A Softer World. I've always loved that webcomic, as sappy as it is.
(Mouse over a strip to see its last sentence.)
Also:
"You were my everything. Which, upon reflection, was probably the problem."
"Overreaction: Any reaction to something that doesn't affect me."
"Civilization is the ability to distinguish what you like from what you like watching pornography of. (And anyway, why were you going through my computer?)"
"The Internet made us all into cyborgs with access to a whole world of information to back up whatever stupid thing we believe that day. (The Racist Computer Wore Tennis Shoes)"
"Everyone wants someone they can bring home to mom. I need someone to distract my mom while I raid the medicine cabinet. (Someone who thinks suggested dosages are quaint.)" - that's not a rationality quote, but it's how my boyfriend thinks and operates.
comment by A1987dM (army1987) · 2012-05-19T10:44:48.637Z · LW(p) · GW(p)
Replies from: CasioTheSane, TheOtherDave
If a student says “I find physics boring and dull”, it simply means only one thing: that they had a bad teacher. Any good teacher can turn physics into something absolutely spectacular.
↑ comment by CasioTheSane · 2012-05-25T01:57:09.157Z · LW(p) · GW(p)
In general, science is only boring when you don't understand it.
Even people who love science often regard areas other than their field of expertise as dull. In reality, I suspect that if they took the time to better understand those "dull" specialties they'd find them fascinating as well.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-05-26T04:06:00.666Z · LW(p) · GW(p)
Even people who love science often regard areas other than their field of expertise as dull.
Careful, you might have reversed cause and effect there.
Replies from: CasioTheSane↑ comment by CasioTheSane · 2012-05-26T16:38:45.373Z · LW(p) · GW(p)
It goes without saying that things you can't comprehend are boring regardless of their actual content: nobody wants to re-read their favorite 1,000 page novel as a PGP encrypted string, for example. It's also a fact that scientists don't have the knowledge to comprehend the interesting bits in a field they haven't specialized in. So there's no plausible route by which people could really know if a field they lack expertise in truly would be dull to them or not, even if it would in fact be dull: they must be assuming it to be dull despite a lack of comprehension. I could be wrong about the cause and effect, but I could not have reversed it. This raises the question of how people get into a field at all in the first place, when it's still gibberish to them.
To be honest though, I was merely generalizing from my own experience. I've yet to find any branch of science that didn't fascinate me upon close inspection. I've been in many situations where I had no real choice but to study something in detail which I didn't expect to be interesting but needed the knowledge for a specific end goal. Every time it seemed initially dull and pointless while I was struggling with the nomenclature and basic concepts, until I reached a critical point whereby it became intensely interesting.
However, it's true that I've chosen to do inter-disciplinary work, and that could be due to me having some unusual trait whereby everything is interesting to me.
↑ comment by TheOtherDave · 2012-05-19T14:57:46.007Z · LW(p) · GW(p)
Heh... now I'm feeling nostalgic about Prof Lewin's freshman physics lectures.
Haven't thought about them in years.
comment by Zubon · 2012-05-26T14:40:10.961Z · LW(p) · GW(p)
The more accurate the map, the more it resembles the territory. The most accurate map possible would be the territory, and this would be perfectly accurate and perfectly useless.
-- American Gods by Neil Gaiman.
Replies from: wedrifid↑ comment by wedrifid · 2014-01-31T21:09:39.439Z · LW(p) · GW(p)
The more accurate the map, the more it resembles the territory. The most accurate map possible would be the territory, and this would be perfectly accurate and perfectly useless.
This quote hides a subtle equivocation, which it relies on to jump from "you have X" to "you do not have X" without us noticing.
If I have a map I can look at it, draw marks on it and make plans. I can also tear it to pieces and analyse it with a mass spectrometer without damaging the territory. Make the map I start with more accurate and I can draw on it in more detail and make more accurate analyses. Make the map nearly perfect and I can get nearly perfect information from the map without breaking anything in the territory. Moving from 'nearly perfect' to 'perfect' does not mean "Oh, actually you don't have one territory and also one map. You only have this one territory".
As a practical example consider a map of a bank I am considering robbing. I could have a blueprint of the building layout. I could have detailed photographs. Or I could have a perfect to-scale clone of the building, accurate in every detail. That 'map' sounds rather useful to me.
Imprecision is not the only purpose of a map.
Replies from: Laoch↑ comment by Laoch · 2014-01-31T21:19:53.838Z · LW(p) · GW(p)
I know this is probably an ad hominem but isn't Gaiman the guy who wrote Doctor Who episodes? The worst sci-fi show ever.
Replies from: Richard_Kennaway, None, wedrifid↑ comment by Richard_Kennaway · 2014-02-03T12:50:08.205Z · LW(p) · GW(p)
isn't Gaiman the guy who wrote Doctor Who episodes?
Many, many writers have written for Doctor Who. Gaiman has done many, many things in his writing career besides writing for Doctor Who. And Doctor Who is a cultural phenomenon larger than any trite dismissal of it.
Replies from: Laoch↑ comment by wedrifid · 2014-02-01T00:46:46.945Z · LW(p) · GW(p)
I know this is probably an ad hominem but isn't Gaiman the guy who wrote Doctor Who episodes? The worst sci-fi show ever.
Doctor Who is one of my favourite shows (top five, higher if we count only shows that are still running.) I don't know to what extent knowledge of our different preferences regarding Doctor Who could be used to predict differences in our evaluations of the rationality of a given Gaiman quote.
Replies from: Laoch↑ comment by Laoch · 2014-02-03T09:10:45.013Z · LW(p) · GW(p)
Oh I completely agree. It's just my experience of Doctor Who has been that it's a well of irrational story lines. For example why would the TARDIS have a soul?
Replies from: wedrifid↑ comment by wedrifid · 2014-02-03T15:44:39.477Z · LW(p) · GW(p)
It's just my experience of Doctor Who has been that it's a well of irrational story lines.
There does seem to be an awful lot of arbitrariness involved in the plotlines. For whatever reason it doesn't seem to contain much of the particular kind of irrationality that I personally detest so for me it is just a fun adventure with increasingly pretty girls.
For example why would the TARDIS have a soul?
It is closer to an extremely advanced horse than an extremely advanced car. That doesn't bother me too much. Some of the arbitrary 'rules' of time travel are more burdensome.
Replies from: Jiro, Laoch↑ comment by Jiro · 2014-02-03T19:38:46.801Z · LW(p) · GW(p)
What gets me is that you can change the past except when you can't. They've tried to explain it away using "fixed points" which can't be changed but even that doesn't hold together.
For instance, the Doctor just admitted that he could change things that he thought he couldn't change and 1) brought back Gallifrey from the Time War, and 2) brought it back into our universe to prevent his death. If I were him, this would be the point where I'd say "Maybe I should go and see if I can bring back Rose's father too. Then I can start on Astrid, and maybe that girl from Waters of Mars".
And Gaiman's episode was bizarre. He had the Tardis acting like a stereotypical wife when at the same time the Tardis crew included an actual husband and wife and they didn't act towards each other like that. And if the Tardis is sentient there's no reason he couldn't hook a voice box into it, except that doing so, thus actually following the logical implications of the Tardis being sentient, would mess up the rest of the series. That episode was just a blatant case of someone wanting to write his pet fan theory into the show and getting to do so because he is Neil Gaiman.
The series also takes a negative attitude towards immortality, despite the Doctor living for a long time.
I'm also sick and tired of the Doctor deciding that a problem whose only obvious solution is violence and killing can be magically solved if he just refuses to accept that the solution is violence and killing. In the real world, such a policy would lead to even more killing.
Replies from: MugaSofer↑ comment by MugaSofer · 2014-02-04T22:23:10.687Z · LW(p) · GW(p)
For instance, the Doctor just admitted that he could change things that he thought he couldn't change and 1) brought back Gallifrey from the Time War, and 2) brought it back into our universe to prevent his death. If I were him, this would be the point where I'd say "Maybe I should go and see if I can bring back Rose's father too. Then I can start on Astrid, and maybe that girl from Waters of Mars".
puts on Doctor Who nerd hat
Those were two different forms of "can't change this thing". The time lock prevented him from interfering with the time war at all, to the point where he couldn't even visit - an artificial area-denial system. Fixed points, on the other hand, are ... vague, but essentially they are natural (?) phenomena where Fate will arbitrarily (?) ensure you can never change this thing. They serve to allow for time travel stories designed for can't-change-the-past systems of time travel, Oedipus Rex (or time-turner) style.
The Doctor has tried to change fixed points, in The Waters of Mars. It didn't go well, and was portrayed as him going a bit mad with hubris.
The series also takes a negative attitude towards immortality, despite the Doctor living for a long time.
Does it?
It seems to me that it takes a neutral stance; immortality is unquestionably good for individuals (even the Master! He's evil!), but most of the ways to achieve it are governed by sci-fi genre convention that Things Will Go Wrong, and people don't seem motivated to share it with humanity much.
I'm also sick and tired of the Doctor deciding that a problem whose only obvious solution is violence and killing can be magically solved if he just refuses to accept that the solution is violence and killing. In the real world, such a policy would lead to even more killing.
Well ... yeah. That's really very annoying, and the writers seem to have latched onto it recently.
Then again, this is the same character who, y'know, killed everyone in the Time War. And showed he was willing to do it again in the anniversary special, even if he found a Third Option before they actually did it.
And, hey! The TARDIS was always intelligent. And its location in mind-space clearly isn't designed for human interaction, even when "possessing" a rewritten human brain. And she wasn't a stereotypical wife. And ...
takes off Doctor Who nerd hat
OK, that's probably enough offtopic nitpicking for one day.
Replies from: Jiro↑ comment by Jiro · 2014-02-04T23:41:33.497Z · LW(p) · GW(p)
Well, this is sort of off-topic, but on the other hand, a lot of this has to do with the side the show takes on topics of LW interest.
Those were two different forms of "can't change this thing".
He didn't just think he couldn't change the destruction of Gallifrey because he was locked out of visiting. In the anniversary special, he was there, but first decided he couldn't change history and had to let the destruction proceed as he had previously done it. He later got an epiphany and decided he could change history by just making it look like the planet was destroyed.
Likewise, in the Christmas special he couldn't change his own death because he had already seen its effects and knew it was going to happen. He was there--he wasn't locked out or unable to visit.
If he could get around that for his own death, it's about time he start doing it for all the others.
It seems to me that it takes a neutral stance
I don't believe that. For instance, look at the Doctor's lecture to Cassandra (several years ago). Furthermore, the genre convention that immortality goes wrong is part and parcel of how much of the genre opposes immortality. Sci-fi loves to lecture the audience on how something is wrong in real life by showing those things going wrong for fantasy reasons (http://tvtropes.org/pmwiki/pmwiki.php/Main/SpaceWhaleAesop and http://tvtropes.org/pmwiki/pmwiki.php/Main/FantasticAesop).
Then again, this is the same character who, y'know, killed everyone in the Time War. And showed he was willing to do it again in the anniversary special, even if he found a Third Option before they actually did it.
It's not the character so much as the story. The story clearly sends the message that it's a bad idea to do such things and that there always is a third option.
comment by Multiheaded · 2012-05-17T15:40:40.122Z · LW(p) · GW(p)
Yet more of St. George:
...I thought of a rather cruel trick I once played on a wasp. He was sucking jam on my plate, and I cut him in half. He paid no attention, merely went on with his meal, while a tiny stream of jam trickled out of his severed œsophagus. Only when he tried to fly away did he grasp the dreadful thing that had happened to him. It is the same with modern man. The thing that has been cut away is his soul, and there was a period — twenty years, perhaps — during which he did not notice it.
It was absolutely necessary that the soul should be cut away. Religious belief, in the form in which we had known it, had to be abandoned. By the nineteenth century it was already in essence a lie, a semi-conscious device for keeping the rich rich and the poor poor. The poor were to be contented with their poverty, because it would all be made up to them in the world beyond the grave, usually pictured as something mid-way between Kew gardens and a jeweller's shop. Ten thousand a year for me, two pounds a week for you, but we are all the children of God. And through the whole fabric of capitalist society there ran a similar lie, which it was absolutely necessary to rip out.
Consequently there was a long period during which nearly every thinking man was in some sense a rebel, and usually a quite irresponsible rebel. Literature was largely the literature of revolt or of disintegration. Gibbon, Voltaire, Rousseau, Shelley, Byron, Dickens, Stendhal, Samuel Butler, Ibsen, Zola, Flaubert, Shaw, Joyce — in one way or another they are all of them destroyers, wreckers, saboteurs. For two hundred years we had sawed and sawed and sawed at the branch we were sitting on. And in the end, much more suddenly than anyone had foreseen, our efforts were rewarded, and down we came. But unfortunately there had been a little mistake. The thing at the bottom was not a bed of roses after all, it was a cesspool full of barbed wire.
It is as though in the space of ten years we had slid back into the Stone Age. Human types supposedly extinct for centuries, the dancing dervish, the robber chieftain, the Grand Inquisitor, have suddenly reappeared, not as inmates of lunatic asylums, but as the masters of the world. Mechanization and a collective economy seemingly aren't enough. By themselves they lead merely to the nightmare we are now enduring: endless war and endless underfeeding for the sake of war, slave populations toiling behind barbed wire, women dragged shrieking to the block, cork-lined cellars where the executioner blows your brains out from behind. So it appears that amputation of the soul isn't just a simple surgical job, like having your appendix out. The wound has a tendency to go septic.
Notes on the Way
Replies from: Desrtopa, Multiheaded, beforearchimedes
↑ comment by Desrtopa · 2012-05-21T02:00:41.839Z · LW(p) · GW(p)
...I thought of a rather cruel trick I once played on a wasp. He was sucking jam on my plate, and I cut him in half. He paid no attention, merely went on with his meal, while a tiny stream of jam trickled out of his severed œsophagus.
I find that hard to believe. I would expect even a wasp to notice this.
↑ comment by Multiheaded · 2012-05-17T15:50:51.298Z · LW(p) · GW(p)
Yes, before anyone pitches in with that observation, M.M. would surely quote the above with some glee. I'm confident that he'd refrain from posting the essay's ending, though:
Mr Aldous Huxley's Brave New World was a good caricature of the hedonistic Utopia, the kind of thing that seemed possible and even imminent before Hitler appeared, but it had no relation to the actual future. [1] What we are moving towards at this moment is something more like the Spanish Inquisition, and probably far worse, thanks to the radio and the secret police. There is very little chance of escaping it unless we can reinstate the belief in human brotherhood without the need for a ‘next world’ to give it meaning. It is this that leads innocent people like the Dean of Canterbury to imagine that they have discovered true Christianity in Soviet Russia. No doubt they are only the dupes of propaganda, but what makes them so willing to be deceived is their knowledge that the Kingdom of Heaven has somehow got to be brought on to the surface of the earth. We have got to be the children of God, even though the God of the Prayer Book no longer exists.
The very people who have dynamited our civilization have sometimes been aware of this, Marx's famous saying that ‘religion is the opium of the people’ is habitually wrenched out of its context and given a meaning subtly but appreciably different from the one he gave it. Marx did not say, at any rate in that place, that religion is merely a dope handed out from above; he said that it is something the people create for themselves to supply a need that he recognized to be a real one. ‘Religion is the sigh of the soul in a soulless world. Religion is the opium of the people.’ What is he saying except that man does not live by bread alone, that hatred is not enough, that a world worth living in cannot be founded on ‘realism’ and machine-guns? If he had foreseen how great his intellectual influence would be, perhaps he would have said it more often and more loudly.
[1] Okay, that's the one bit Orwell got wrong... maybe. Industrial murder did mark everything forever, though.
Replies from: Eugine_Nier
↑ comment by Eugine_Nier · 2012-05-18T03:10:10.134Z · LW(p) · GW(p)
Yes, before anyone pitches in with that observation, M.M. would surely quote the above with some glee. I'm confident that he'd refrain from posting the essay's ending, though:
Why? My mental model of M.M., admittedly based on the very few things of his that I've read, has him not disagreeing with the above section significantly.
Replies from: Multiheaded
↑ comment by Multiheaded · 2012-05-18T04:05:27.886Z · LW(p) · GW(p)
He's very firmly against all past and future attempts to bring forth the aforementioned Kingdom of Heaven (except, needless to say, his own - which has the elimination of hypocrisy as one of its points). He sneers - I have no other word - at patriotic feeling, and wages a one-man crusade against ideological/religious feeling. He might dislike hatred, but he certainly believes that greed and self-interest are "enough" - are the most useful, safe motives one could have. Etc, etc, etc, etc, etc.
Replies from: Eugine_Nier
↑ comment by Eugine_Nier · 2012-05-18T06:11:01.568Z · LW(p) · GW(p)
He sneers - I have no other word - at patriotic feeling, and wages a one-man crusade against ideological/religious feeling.
Orwell wasn't exactly a supporter of patriotism or religion either. In fact, in the paragraphs you quoted you can see Orwell sneering at religion even as he admits that it can serve a useful purpose. My understanding of Moldbug's position on religion is that it's pretty similar, i.e., he recognizes the important role religion played in Western Civilization, including the development of science, even if he doesn't like what it's currently evolved into.
Replies from: Multiheaded
↑ comment by Multiheaded · 2012-06-02T15:42:40.162Z · LW(p) · GW(p)
Orwell wasn't exactly a supporter of patriotism or religion either.
No offence, but I think you need to read a dozen of his post-1939 essays before we even talk about that. He was a fervent British patriot, occasionally waxing nostalgic about the better points of the old-time Empire - even as he was talking about the necessity of a socialist state! - and a devout Anglican for his entire life (which was somewhat obscured by his contempt for bourgeois priesthood).
You're simply going off the one-dimensional recycled image of Orwell: the cardboard democratic socialist whose every opinion was clear, liberal and ethically spotless. The truth is far more complicated; I'd certainly say he was more of a totalitarian than the hypocritical leftist intellectuals he was bashing! (I hardly think less of him due to that, mind.)
↑ comment by beforearchimedes · 2012-05-17T16:24:36.027Z · LW(p) · GW(p)
It is as though in the space of ten years we had slid back into the Stone Age. Human types supposedly extinct for centuries, the dancing dervish, the robber chieftain, the Grand Inquisitor, have suddenly reappeared, not as inmates of lunatic asylums, but as the masters of the world. Mechanization and a collective economy seemingly aren't enough. By themselves they lead merely to the nightmare we are now enduring: endless war and endless underfeeding for the sake of war, slave populations toiling behind barbed wire, women dragged shrieking to the block, cork-lined cellars where the executioner blows your brains out from behind. So it appears that amputation of the soul isn't just a simple surgical job, like having your appendix out. The wound has a tendency to go septic.
I don't see how this brutality was lacking when humans were more religiously observant. Furthermore, the quote seems to argue for religion.
Meaning the conclusion and the conclusion's reasoning are both wrong.
Replies from: Multiheaded
↑ comment by Multiheaded · 2012-05-17T16:35:01.387Z · LW(p) · GW(p)
I don't see how this brutality was lacking when humans were more religiously observant.
Not much revolutionary or counter-revolutionary terror, no death camps, comparatively little secret police. Little police and policing in general, actually; you could ride from one end of Europe to another without any prior arrangements, and if you looked alright everyone would let you in. The high and mighty were content with merely existing at the top of the traditional "divinely ordained" hierarchy, not having the Wille zur Macht that enables really serious tyranny, not attempting to forge new meanings and reality while dragging their subjects into violent insanity.
I agree that it was a cruel, narrow-minded and miserable world that denied whole classes and races a glimpse of hope without a second thought. But we went from one nightmare through a worse one towards a dubious future. There's not much to celebrate so far.
Furthermore, the quote seems to argue for religion.
It argues for a thought pattern and attitude to life that Christianity also exhibits at the best of times, but against the belief in supernatural.
Replies from: JoshuaZ, TimS
↑ comment by JoshuaZ · 2012-05-17T16:52:25.558Z · LW(p) · GW(p)
Much of this is simply not the case, or it ignores other large-scale problems. It may help to read Steven Pinker's book "The Better Angels of Our Nature", which makes clear how much more common murder and warfare (both large and small) were historically.
Replies from: Multiheaded
↑ comment by Multiheaded · 2012-05-17T17:05:18.788Z · LW(p) · GW(p)
I've read a summary. I'm mostly playing the devil's advocate with this argument, to be honest. I have a habit of entertaining my far-type fears perhaps a touch more than they deserve.
↑ comment by TimS · 2012-05-17T16:56:35.091Z · LW(p) · GW(p)
Not much revolutionary or counter-revolutionary terror, no death camps, comparatively little secret police.
What exactly was the war on heresy?
The high and mighty being content with merely existing at the top of traditional "divinely ordained" hierarchy
Peasant revolts based on oppressive governance costs didn't happen?
If we don't count the denial of a glimpse of hope to "whole classes and races" (and genders) of people, then most of what I personally don't approve of in the time period drops out. But even if that isn't included in the ledger, it wasn't all that great for the vast majority of white Christian men.
Replies from: Multiheaded, BillyOblivion
↑ comment by Multiheaded · 2012-05-17T17:02:53.593Z · LW(p) · GW(p)
Dude, I completely agree. I'm far from a reactionary. I'm just thinking aloud. Might the 20th century have indeed been worse than the above when controlled for the benefits as well as downsides of technical progress? I can't tell, and everyone's mind-killed about that - particularly "realist" people like M.M., who claim to be the only sane ones in the asylum.
Replies from: TimS, None
↑ comment by TimS · 2012-05-17T18:43:48.737Z · LW(p) · GW(p)
Let's cash this out a little bit - Which was worse, the heresy prosecutions of the Medieval era, or the Cultural Revolution? I think the answer is the Cultural Revolution, if for no other reason than more people were affected per year.
But that's based on technological improvement between the two time periods:
- More people were alive in China during the Cultural Revolution because of improvements in food growth, medical technology, and general wealth increase from technology.
- The government was able to be more effective and uniform in oppressing others because of improvements in communications technology.
Once we control for those effects, I think it is hard to say which is worse.
In contrast, I think the social changes that led to the end of serious calls for Crusades were a net improvement for humanity, and I'm somewhat doubtful that technological changes drove those changes (what probably did drive them was that overarching unifying forces like the Papacy lost their legitimacy and power to compel large portions of society). Which isn't to say that technology doesn't drive social change (consider the relationship between modern women's liberation and the development of reliable chemical birth control).
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2012-05-17T18:48:59.236Z · LW(p) · GW(p)
As a percentage of total planetary population, a large number of historical wars were worse than any 20th century atrocity. Pinker has a list in his book, and there are enough that they include wars most modern people have barely heard of.
Replies from: TimS
↑ comment by TimS · 2012-05-17T18:57:53.723Z · LW(p) · GW(p)
I'm trying to compare apples to apples here. Wars are not like ideological purity exercises, nor are they like internal political control struggles (i.e. suppressing a peasant revolt, starving the Kulaks).
I'd have to get a better sense of historical wars before I could confidently opine on the relative suffering of the military portions of WWII vs. the military portions of some ancient war. And then I'd have to decide how to compare similar events that took different amounts of time (e.g. WWI v. Hundred Years War)
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2012-05-17T19:05:17.303Z · LW(p) · GW(p)
Wars are not like ideological purity exercises, nor are they like internal political control struggles
The line between these is not always so clear. Look at the crusade against the Cathars or look at the Reformation wars for example.
Replies from: TimS
↑ comment by TimS · 2012-05-17T19:16:50.438Z · LW(p) · GW(p)
I agree that the categories (war, ideological purification, suppression of internal dissent) are not natural kinds.
But the issue is separating the effects of ideological change from the effects of technological change, so meaningful comparisons are important.
↑ comment by BillyOblivion · 2012-05-28T07:48:16.181Z · LW(p) · GW(p)
What exactly was the war on heresy?
You mean then, or now?
Remember what happened to Larry Summers at Harvard when he merely asked the question?
Does the phrase "Denier" cause any mental associations that weren't there in the late 90s?
At least Copernicus was allowed to recant and live his declining years in (relative) peace.
Replies from: Jayson_Virissimo, Eugine_Nier
↑ comment by Jayson_Virissimo · 2012-05-28T08:36:38.438Z · LW(p) · GW(p)
At least Copernicus was allowed to recant and live his declining years in (relative) peace.
Nicolaus Copernicus was never charged with heresy (let alone convicted). Moreover, he was a master of canon law, might have been a priest at one point, was urged to publish De Revolutionibus Orbium Coelestium by cardinals (who also offered to pay his expenses), and dedicated the work to Pope Paul III when he did get around to publishing it. Also, one of his students gave a lecture outlining the Copernican system to a crowd that included Pope Clement VII (for which he was rewarded with an expensive Greek Codex). Even had he lived two more decades, it is very unlikely he would ever have been charged with heresy.
Replies from: Will_Newsome
↑ comment by Will_Newsome · 2012-05-28T09:10:28.505Z · LW(p) · GW(p)
And on that note the Galileo affair was an aberration—it'd be unwise to see it as exemplary of the Church's general attitude towards unorthodox science. The Church was like half Thomist for Christ's sake.
Replies from: wedrifid
↑ comment by wedrifid · 2012-05-28T14:20:20.502Z · LW(p) · GW(p)
And on that note the Galileo affair was an aberration—it'd be unwise to see it as exemplary of the Church's general attitude towards unorthodox science.
For instance, most instances of heresy were crushed successfully without them bearing fruit or gaining influence. (In some part because most incidences of heresy are actually false theories. Because most new ideas in general are wrong.) The Galileo incident was an epic failure of both religious meme enforcement and public relations. It hasn't happened often! Usually the little guy loses and nobody cares.
(The above generalises beyond "The Church" to heavy handed belief enforcement by human tribes in general.)
Replies from: Will_Newsome
↑ comment by Will_Newsome · 2012-05-28T16:01:34.922Z · LW(p) · GW(p)
Right, but note I said unorthodox science. Heresy was crushed, but it wasn't common for scientific theories to be seen as heretical. Galileo just happened to publish his stuff when the Church was highly insecure because of all the Protestant shenanigans. Heretical religious or sociopolitical teachings, on the other hand, were quashed regularly.
↑ comment by Eugine_Nier · 2012-05-29T01:54:25.755Z · LW(p) · GW(p)
Yes, and Summers has gone on to be a presidential adviser.
comment by Waldheri · 2012-05-03T18:02:52.236Z · LW(p) · GW(p)
All interpretation or observation of reality is necessarily fiction. In this case, the problem is that man is a moral animal abandoned in an amoral universe and condemned to a finite existence with no other purpose than to perpetuate the natural cycle of the species. It is impossible to survive in a prolonged state of reality, at least for a human being. We spend a good part of our lives dreaming, especially when we're awake.
― Carlos Ruiz Zafón, The Angel's Game
comment by NancyLebovitz · 2012-05-10T03:07:29.174Z · LW(p) · GW(p)
The privilege of knowing how, painfully, to frame answerable questions, answers which will lead him to more insights and better questions, as far as his mind can manage and his own life lasts. It is what he wants more than anything in the world, always has.
The Psychologist Who Wouldn't Do Awful Things to Rats by James Tiptree, Jr.
comment by imbatman · 2012-05-21T16:28:05.165Z · LW(p) · GW(p)
"Contradictions do not exist. Whenever you think you are facing a contradiction, check your premises. You will find that one of them is wrong."
Replies from: Oligopsony, BillyOblivion, MinibearRex
↑ comment by Oligopsony · 2012-05-21T16:55:52.108Z · LW(p) · GW(p)
Or that you've made an invalid inference.
↑ comment by BillyOblivion · 2012-05-28T07:32:59.529Z · LW(p) · GW(p)
Or that both of them (to reference a previous Rationality Quotes entry on arguments) are wrong.
↑ comment by MinibearRex · 2012-05-21T18:19:07.220Z · LW(p) · GW(p)
source?
Replies from: MarkusRamikin
↑ comment by MarkusRamikin · 2012-05-21T18:29:41.575Z · LW(p) · GW(p)
Pretty sure that was Francisco d'Anconia aka Superman, in Ayn Rand's Atlas Shrugged.
Replies from: imbatman
comment by Multiheaded · 2012-05-18T14:59:03.970Z · LW(p) · GW(p)
"Truth," she thought. "As terrible as death. But harder to find."
Philip K. Dick, The Man in the High Castle
comment by Leonhart · 2012-05-16T22:53:35.073Z · LW(p) · GW(p)
On Fun Theory; by a great, drunken Master of that conspiracy:
It seems that by placing danmaku under the spellcard rule, the rule limits the freedom of the user, but that isn't true. To be unrestricted means to be able to do anything. On the contrary, that means the immediate pursuit of the best, which in turn destroys variation. If one were free, they need to pursue only "the most efficient, the most effective." For danmaku, that would be one with no gaps, or the fastest and largest attacks possible. That kind of attack can't be described as danmaku at all. Therefore, in a world without rules, danmaku is nonsense.
-- Marisa Kirisame, in her Grimoire
comment by [deleted] · 2012-05-12T00:28:14.540Z · LW(p) · GW(p)
It is somewhat remarkable that this reverend divine should be so earnest for setting up new churches and so perfectly indifferent concerning the doctrine which may be taught in them. His zeal is of a curious character. It is not for the propagation of his own opinions, but of any opinions. It is not for the diffusion of truth, but for the spreading of contradiction. Let the noble teachers but dissent, it is no matter from whom or from what.
Edmund Burke on Richard Price, in "Reflections on the Revolution in France" which I am reading for the first time. This Richard Price, who is fascinating. Here is the sermon Burke was complaining about.
Replies from: Multiheaded
↑ comment by Multiheaded · 2012-05-12T00:44:11.607Z · LW(p) · GW(p)
Haha, it's hard not to feel a twitch of self-righteous liberal superiority upon reading Burke's words. Even though none of us is really "liberal" in this regard - privately almost none of us value freedom of opinion more than spreading one's own opinions; we're just bound by a prisoner's dilemma in this regard. Our age is more polite and hypocritical about it, though.
Replies from: None
comment by CaveJohnson · 2012-05-11T17:10:46.217Z · LW(p) · GW(p)
The fact is that political stupidity is a special kind of stupidity, not well correlated with intelligence, or with other varieties of stupidity.
--John Derbyshire, source
Replies from: Oligopsony
↑ comment by Oligopsony · 2012-05-11T17:21:56.588Z · LW(p) · GW(p)
What is the intended extension of "political stupidity" in this quote? (Intended by you in quoting it; I can hardly demand that you engage in telepathy.)
Replies from: CaveJohnson
↑ comment by CaveJohnson · 2012-05-11T17:38:11.472Z · LW(p) · GW(p)
What do you think in the context of the link I called "Relevant"?
comment by CasioTheSane · 2012-05-25T01:52:42.522Z · LW(p) · GW(p)
A paradox is simply an error out of control; i.e. one that has trapped so many unwary minds that it has gone public, become institutionalized in our literature, and taught as truth.
-E.T. Jaynes
comment by EndlessEnigma · 2012-05-06T00:39:15.560Z · LW(p) · GW(p)
The discovery of truth is prevented most effectively, not by false appearances which mislead into error, nor directly by weakness of reasoning powers, but by pre-conceived opinion, by prejudice, which as a pseudo a priori stands in the path of truth and is then like a contrary wind driving a ship away from land, so that sail and rudder labour in vain.
- Arthur Schopenhauer "On Philosophy and the Intellect"
↑ comment by blashimov · 2012-05-23T20:53:59.934Z · LW(p) · GW(p)
Why the strike through?
Replies from: JoachimSchipper
↑ comment by JoachimSchipper · 2012-05-31T13:46:57.648Z · LW(p) · GW(p)
This post has been retracted.
comment by Nagendran · 2012-05-02T18:38:46.587Z · LW(p) · GW(p)
We, humans, use a frame of reference constructed from integrated sets of assumptions, expectations and experiences. Everything is perceived on the basis of this framework. The framework becomes self-confirming because, whenever we can, we tend to impose it on experiences and events, creating incidents and relationships that conform to it. And we tend to ignore, misperceive, or deny events that do not fit it. As a consequence, it generally leads us to what we are looking for. This frame of reference is not easily altered or dismantled, because the way we tend to see the world is intimately linked to how we see and define ourselves in relation to the world. Thus, we have a vested interest in maintaining consistency because our own identity is at risk.
--- W. Brian Arthur, The Nature of Technology
comment by gwern · 2012-05-30T00:09:28.471Z · LW(p) · GW(p)
"I was thirsty—no, I was dying. I was parched like the plains of Antarctica. Can you believe that? That Antarctica is drier than any other place on Earth? You wouldn't think so, but I've always believed the truth is strange, so I accepted it right away."
--Chapter One, "The Coin", by Muphrid; see also "Joy in the Merely Real"
Replies from: wedrifid, Document, army1987
↑ comment by wedrifid · 2012-05-30T07:00:40.946Z · LW(p) · GW(p)
"I was thirsty—no, I was dying. I was parched like the plains of Antarctica. Can you believe that? That Antarctica is drier than any other place on Earth? You wouldn't think so, but I've always believed the truth is strange, so I accepted it right away."
Huh? That doesn't seem strange at all. It's the first place I would have guessed - based on it being really extreme, really big and really cold.
I guess I can't get so much of "truth is strange, update!" kick out of this one as intended...
Replies from: ArisKatsaris
↑ comment by ArisKatsaris · 2012-05-30T10:20:56.500Z · LW(p) · GW(p)
Huh? That doesn't seem strange at all. It's the first place I would have guessed - based on it being really extreme, really big and really cold.
"Cold" isn't typically associated with "dry" in most people's mental maps, as rain tends to be cold, and snow is very cold, and even the most commonly encountered form of ice (icecubes) melts quick enough too; and therefore generally most of everyday coldness gets anti-associated with dryness.
Of course Antarctica is not everyday coldness - the ice in most of Antarctica is very far from temperatures that would make it liquid... But I understand how it could surprise someone who hadn't thought it through.
↑ comment by Document · 2012-05-30T08:29:02.967Z · LW(p) · GW(p)
And glass is a slowly flowing liquid.
Replies from: wedrifid
↑ comment by wedrifid · 2012-05-30T08:44:24.395Z · LW(p) · GW(p)
And glass is a slowly flowing liquid.
No it isn't. But from that same 'misconceptions' list I discovered that meteorites aren't hot when they hit the earth - they are more likely to be below freezing. "Melf" had been deceiving me all this time.
Replies from: Document
↑ comment by Document · 2012-05-30T17:56:40.832Z · LW(p) · GW(p)
Rephrasing:
Can you believe that? That glass is a liquid? You wouldn't think so, but I've always believed the truth is strange, so I accepted it right away.
Replies from: wedrifid, Luke_A_Somers
↑ comment by Luke_A_Somers · 2013-05-09T16:09:49.406Z · LW(p) · GW(p)
That could have been more clearly put the first time...
↑ comment by A1987dM (army1987) · 2012-05-30T10:25:32.618Z · LW(p) · GW(p)
Taboo dry -- does that mean “containing little water” or “containing little liquid water”?
Replies from: wedrifid
comment by candoattitude · 2012-05-07T20:28:08.294Z · LW(p) · GW(p)
"A little simplification would be the first step toward rational living, I think." ~ Eleanor Roosevelt
http://www.inspiration-oasis.com/eleanor-roosevelt-quotes.html
comment by kdorian · 2012-05-05T14:30:47.104Z · LW(p) · GW(p)
You know, I once read an interesting book which said that, uh, most people lost in the wilds, they, they die of shame. Yeah, see, they die of shame. 'What did I do wrong? How could I have gotten myself into this?' And so they sit there and they... die. Because they didn't do the one thing that would save their lives. Thinking.
- David Mamet
comment by Robert_Fripp · 2012-05-15T20:35:11.744Z · LW(p) · GW(p)
Here’s to the crazy ones. The misfits. The rebels. The troublemakers. The round pegs in the square holes. The ones who see things differently. They’re not fond of rules, and they have no respect for the status quo. You can quote them, disagree with them, glorify, or vilify them. But the only thing you can’t do is ignore them. Because they change things. They push the human race forward. And while some may see them as the crazy ones, we see genius. Because the people who are crazy enough to think they can change the world, are the ones who do.
--Apple's “Think Different” campaign
Replies from: MixedNuts, JoshuaZ
↑ comment by MixedNuts · 2012-05-16T12:08:59.222Z · LW(p) · GW(p)
Dear people who are "difficulty with everyday tasks" crazy and not "world-changing genius" crazy: we got ripped off.
Replies from: BillyOblivion
↑ comment by BillyOblivion · 2012-05-28T10:25:26.335Z · LW(p) · GW(p)
Those two sets aren't always disjoint.
Replies from: MixedNuts
↑ comment by JoshuaZ · 2012-05-15T20:39:26.987Z · LW(p) · GW(p)
Upvoted because the idea is good, although I think that a lot of people have already pointed out the irony of "be a rebel by buying our mass-produced product!" slogans in general. (Tangent: In Stross's "Jennifer Morgue" this irony is used as part of a demonic summoning ritual to zombiefy people.)
comment by Vaniver · 2012-05-10T16:08:01.598Z · LW(p) · GW(p)
Often, the happiness that results from irrationally formed beliefs goes along with irrationally formed goals. For example, people who think that they are universally liked often have the goal of being liked by everyone. Although it is surely rational to want to be liked, it is, for most people, hopeless to try to be liked by everyone. A balanced, rational view of how things actually are needs to be combined with a balanced, realistic view of how they ought to be, if we are not to be disappointed. If one's goals are as rationally formed as one's beliefs about how well one's goals are being achieved, accurate beliefs need not be disappointing.
Replies from: Multiheaded
↑ comment by Multiheaded · 2012-05-10T16:17:19.914Z · LW(p) · GW(p)
Ah, but goals and desires are different things.
comment by tut · 2012-05-08T18:00:09.123Z · LW(p) · GW(p)
Life's battles don't always go to the stronger or faster man. But sooner or later the man who wins, is the man who thinks he can.
Vince Lombardi
Replies from: Desrtopa, Bruno_Coelho
↑ comment by Desrtopa · 2012-05-14T04:12:32.127Z · LW(p) · GW(p)
And if both contestants think they can win, this maxim gets to be right 100% of the time!
Replies from: tut
↑ comment by tut · 2012-05-14T15:16:30.870Z · LW(p) · GW(p)
I thought it was that no matter who wins that causes him to become sure of his ability.
Replies from: Desrtopa
↑ comment by Desrtopa · 2012-05-14T15:21:21.631Z · LW(p) · GW(p)
Well, if a person is really good at what they do, that could cause them to become confident in their ability to do it well. But if they're really bad at it, that could also cause them to be confident in their ability to do it well.
↑ comment by Bruno_Coelho · 2012-05-24T17:36:48.368Z · LW(p) · GW(p)
Framing things in terms of fights doesn't do a good job. People respond rapidly with ferocious comments.
Replies from: BillyOblivion
↑ comment by BillyOblivion · 2012-05-28T10:33:08.838Z · LW(p) · GW(p)
If you're a football (American, not Eurasian) coach you're routinely going to frame your aphorisms in terms of battles, or "fights" as you put it.
comment by Stephanie_Cunnane · 2012-05-04T00:28:48.560Z · LW(p) · GW(p)
I learned that failure is by and large due to not accepting and successfully dealing with the realities of life, and that achieving success is simply a matter of accepting and successfully dealing with all my realities.
-Ray Dalio, Principles
comment by EndlessEnigma · 2012-05-06T01:00:50.960Z · LW(p) · GW(p)
E.T. Jaynes on the Mind Projection Fallacy and quantum mechanics:
"[T]he mysteries of the uncertainty principle were explained to us thus: The momentum of the particle is unknown; therefore it has a high kinetic energy." A standard of logic that would be considered a psychiatric disorder in other fields, is the accepted norm in quantum theory. But this is really a form of arrogance, as if one were claiming to control Nature by psychokinesis."
Replies from: EndlessEnigma
↑ comment by EndlessEnigma · 2012-05-06T21:20:33.928Z · LW(p) · GW(p)
Explanation for the downvotes, please?
Replies from: hairyfigment, MinibearRex
↑ comment by hairyfigment · 2012-05-06T22:31:31.393Z · LW(p) · GW(p)
Very good question. People may disagree with the quote, or may think that out of context it misrepresents Jaynes. In the most charitable interpretation that occurs to me, they think you overestimate the clarity and usefulness of the quote.
↑ comment by MinibearRex · 2012-05-08T05:44:36.565Z · LW(p) · GW(p)
I did not downvote, and did not see the post until after it had been redacted. Hairyfigment's description is pretty good. To that, I would add that I recognize the passage from Jaynes that you're quoting, and I do understand why it seems particularly valuable. However, a while after reading it, or without ever having read that particular passage, I do have to say that the section you quoted is much less useful, powerful, whatever, without the remainder of the passage.
It also could have been downvoted by the substantial number of users on less wrong who just generally dislike the present state of discussion on quantum physics.
comment by [deleted] · 2012-05-05T20:57:02.701Z · LW(p) · GW(p)
redacted
Replies from: Oscar_Cunningham
↑ comment by Oscar_Cunningham · 2012-05-05T22:32:14.922Z · LW(p) · GW(p)
Who's this by?
Replies from: None
↑ comment by [deleted] · 2012-05-23T10:59:29.991Z · LW(p) · GW(p)
redacted
Replies from: ciphergoth, None
↑ comment by Paul Crowley (ciphergoth) · 2012-05-23T11:20:00.548Z · LW(p) · GW(p)
I'm afraid quoting yourself isn't allowed, sorry!
Here's the new thread for posting quotes, with the usual rules:
- Do not quote yourself
- Do not quote comments/posts on LW/OB
Replies from: Eugine_Nier, None
↑ comment by Eugine_Nier · 2012-05-24T02:12:24.495Z · LW(p) · GW(p)
I've been thinking of starting a "quote yourself" thread.
Replies from: ciphergoth
↑ comment by Paul Crowley (ciphergoth) · 2012-05-24T06:45:32.240Z · LW(p) · GW(p)
Or a "quote yourself, or quote comments/posts on LW/OB" thread?
Replies from: thomblake
↑ comment by thomblake · 2012-05-24T13:24:57.769Z · LW(p) · GW(p)
We have those periodically. They are limited to one per quarter by executive order, but they are not popular enough to sustain that frequency.
ETA: Here's the relevant tag: http://lesswrong.com/tag/lwob_quotes/
I agree that "do not quote yourself" is probably not a necessary rule for those threads.