Rationality Quotes September 2013

post by Vaniver · 2013-09-04T05:02:05.267Z · LW · GW · Legacy · 451 comments


Another month has passed and here is a new rationality quotes thread. The usual rules are:

  • Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself.
  • Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
  • No more than 5 quotes per person per monthly thread, please.

451 comments

Comments sorted by top scores.

comment by Cthulhoo · 2013-09-03T10:49:52.738Z · LW(p) · GW(p)

In some species of Anglerfish, the male is much smaller than the female and incapable of feeding independently. To survive he must smell out a female as soon as he hatches. He bites into her, releasing an enzyme which fuses him to her permanently. He lives off her blood for the rest of his life, providing her with sperm whenever she needs it. Females can have multiple males attached. The moral is simple: males are parasites, women are sluts. Ha! Just kidding! The moral is don't treat actual animal behavior like a fable. Generally speaking, animals have no interest in teaching you anything.

Oglaf (Original comic NSFW)

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2013-09-04T00:39:28.483Z · LW(p) · GW(p)

How have I been reading Oglaf for so long without knowing about the epilogues?

Replies from: Eliezer_Yudkowsky, FiftyTwo, Fronken
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-09-04T04:58:31.792Z · LW(p) · GW(p)

...oh crap, I'm going to have to reread the whole thing, aren't I.

Replies from: Wes_W, RobbBB, David_Gerard
comment by Wes_W · 2013-09-04T05:20:52.807Z · LW(p) · GW(p)

Nah, the wiki makes it much easier.

comment by Rob Bensinger (RobbBB) · 2013-09-04T05:52:32.083Z · LW(p) · GW(p)

bahahahaha

comment by David_Gerard · 2013-09-12T10:58:29.527Z · LW(p) · GW(p)

And the mouseovers. And the alt text, which is different again.

Replies from: accolade, NancyLebovitz
comment by accolade · 2013-09-26T04:53:43.976Z · LW(p) · GW(p)

And the mock ads at the bottom.

ETA: Explanation: Sometimes the banner at the bottom will contain an actual (randomized) ad, but many of the comics have their own funny mock ad associated with them. (When I noticed this, I went back through all the ones I had already read, so as not to miss out on that content.)

(I thought I'd clarify this, because this comment got downvoted - possibly because the downvoter misunderstood it as sarcasm?)

comment by NancyLebovitz · 2013-09-13T12:43:20.436Z · LW(p) · GW(p)

What's the difference between a mouseover and an alt text?

Replies from: tut
comment by tut · 2013-09-13T14:50:40.049Z · LW(p) · GW(p)

Mouseover is JavaScript (EDIT: or CSS) and shows up when you hover your pointer over some trigger area. Alt text is plain HTML and shows up when the image (or whatever else it is the alt text for) doesn't load.

Replies from: NancyLebovitz, David_Gerard, Dreaded_Anomaly, wedrifid
comment by NancyLebovitz · 2013-09-13T17:06:25.873Z · LW(p) · GW(p)

How do you get alt text to appear if the image loads? Read source?

Replies from: linkhyrule5
comment by linkhyrule5 · 2013-09-13T18:54:41.696Z · LW(p) · GW(p)

Yup.

comment by David_Gerard · 2013-09-13T19:51:27.849Z · LW(p) · GW(p)

No, mouseover is TITLE= and alt text is ALT=. Mouseover doesn't rely on JavaScript. Alt text is specifically for putting in place of an image; it used to be used for mouseovers as well, but then TITLE= came in for that. (E.g. <img src="image.png" alt="text shown if the image doesn't load" title="text shown on hover">, with a generic placeholder filename.)

comment by Dreaded_Anomaly · 2013-09-13T16:15:04.007Z · LW(p) · GW(p)

There's also title text (often called a tooltip), which appears when you hover the mouse over an image, but is a plain HTML feature.

comment by wedrifid · 2013-09-14T09:50:00.376Z · LW(p) · GW(p)

Mouseover is javascript and shows up when you hover your pointer over some trigger area.

JavaScript is not actually required. CSS handles it.

comment by FiftyTwo · 2013-09-07T20:10:51.741Z · LW(p) · GW(p)

For anyone unaware, SMBC has an additional joke panel when you mouse over the red button at the bottom.

Replies from: MugaSofer, PhilGoetz
comment by MugaSofer · 2013-09-07T22:14:35.860Z · LW(p) · GW(p)

Actually, you have to click it now. Just a heads up to anyone reading this and trying to find them.

comment by PhilGoetz · 2013-09-07T20:55:17.548Z · LW(p) · GW(p)

AAAARGH!!! Why do they keep it secret?

That's almost as annoying as the fact that you have to know the name of Zach's wife to create an account and comment, when for a long time her name was not findable either on the website or via Google.

(I don't remember her name.)

Thank you very much.

Replies from: MugaSofer, Kawoomba, Cyan
comment by MugaSofer · 2013-09-07T22:17:42.367Z · LW(p) · GW(p)

I ... was not aware it was even possible to comment on SMBC.

AAAARGH!!! Why do they keep it secret?

It's an example of failing to update traditions after their original purpose has eroded, for the record. It was originally a reward for voting, which is why SMBC fans still refer to it as a "votey". The voting atrophied, while creating the reward became part of his routine.

comment by Kawoomba · 2013-09-07T21:07:13.821Z · LW(p) · GW(p)

It's usually the funnest panel, too.

comment by Cyan · 2013-09-07T21:34:11.014Z · LW(p) · GW(p)

Kelly Weinersmith.

comment by Fronken · 2013-09-12T14:28:04.097Z · LW(p) · GW(p)

... the what.

Ahh I just finished that.

comment by philh · 2013-09-03T19:46:55.051Z · LW(p) · GW(p)

"However, there is something they value more than a man's life: a trowel."

"Why a trowel?"

"If a bricklayer drops his trowel, he can do no more work until a new one is brought up. For months he cannot earn the food that he eats, so he must go into debt. The loss of a trowel is cause for much wailing. But if a man falls, and his trowel remains, men are secretly relieved. The next one to drop his trowel can pick up the extra one and continue working, without incurring debt."

Hillalum was appalled, and for a frantic moment he tried to count how many picks the miners had brought. Then he realized. "That cannot be true. Why not have spare trowels brought up? Their weight would be nothing against all the bricks that go up there. And surely the loss of a man means a serious delay, unless they have an extra man at the top who is skilled at bricklaying. Without such a man, they must wait for another one to climb from the bottom."

All the pullers roared with laughter. "We cannot fool this one," Lugatum said with much amusement.

Ted Chiang, Tower of Babylon

comment by Stabilizer · 2013-09-02T20:57:21.745Z · LW(p) · GW(p)

Don't ask what they think. Ask what they do.

My rule has to do with paradigm shifts—yes, I do believe in them. I've been through a few myself. It is useful if you want to be the first on your block to know that the shift has taken place. I formulated the rule in 1974. I was visiting the Stanford Linear Accelerator Center (SLAC) for a few weeks to give a couple of seminars on particle physics. The subject was QCD. It doesn't matter what this stands for. The point is that it was a new theory of sub-nuclear particles and it was absolutely clear that it was the right theory. There was no critical experiment but the place was littered with smoking guns. Anyway, at the end of my first lecture I took a poll of the audience. "What probability would you assign to the proposition 'QCD is the right theory of hadrons'?" My socks were knocked off by the answers. They ranged from .01 percent to 5 percent. As I said, by this time it was a clear no-brainer. The answer should have been close to 100 percent. The next day I gave my second seminar and took another poll. "What are you working on?" was the question. Answers: QCD, QCD, QCD, QCD, QCD, ... Everyone was working on QCD. That's when I learned to ask "What are you doing?" instead of "What do you think?"

I saw exactly the same phenomenon more recently when I was working on black holes. This time it was after a string theory seminar, I think in Santa Barbara. I asked the audience to vote whether they agreed with me and Gerard 't Hooft or if they thought Hawking’s ideas were correct. This time I got a 50-50 response. By this time I knew what was going on so I wasn't so surprised. Anyway I later asked if anyone was working on Hawking's theory of information loss. Not a single hand went up. Don't ask what they think. Ask what they do.

-Leonard Susskind, Susskind's Rule of Thumb

Replies from: AndHisHorse, Protagoras, lukeprog, Eliezer_Yudkowsky
comment by AndHisHorse · 2013-09-03T11:17:19.796Z · LW(p) · GW(p)

Not necessarily a great metric; working on the second-most-probable theory can be the best rational decision if the expected value of working on the most probable theory is lower due to greater cost or lower reward.
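
For a toy illustration of that point, with made-up numbers (nothing from the thread itself):

    # Toy expected-value comparison with hypothetical numbers: working on the
    # likelier theory can still be the worse bet if success on it pays off less.
    p_a, payoff_a = 0.8, 1.0   # most probable theory, modest payoff if it pans out
    p_b, payoff_b = 0.3, 5.0   # less probable theory, large payoff if it pans out

    print(p_a * payoff_a)  # 0.8 = expected value of the most probable theory
    print(p_b * payoff_b)  # 1.5 = expected value of the less probable one, which wins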

comment by Protagoras · 2013-09-03T00:59:39.453Z · LW(p) · GW(p)

This is why many scientists are terrible philosophers of science. Not all of them, of course; Einstein was one remarkable exception. But it seems like many scientists have views of science (e.g. astonishingly naive versions of Popperianism) which completely fail to fit their own practice.

Replies from: lukeprog
comment by lukeprog · 2013-09-05T21:04:18.483Z · LW(p) · GW(p)

Yes. When chatting with scientists I have to intentionally remind myself that my prior should be on them being Popperian rather than Bayesian. When I forget to do this, I am momentarily surprised when I first hear them say something straightforwardly anti-Bayesian.

Replies from: shminux
comment by shminux · 2013-09-05T21:15:14.216Z · LW(p) · GW(p)

Examples?

Replies from: lukeprog
comment by lukeprog · 2013-09-08T21:13:08.359Z · LW(p) · GW(p)

Statements like "I reject the intelligence explosion hypothesis because it's not falsifiable."

Replies from: shminux
comment by shminux · 2013-09-08T22:39:59.998Z · LW(p) · GW(p)

I see. I doubt that it is as simple as naive Popperianism, however. Scientists routinely construct and screen hypotheses based on multiple factors, and they are quite good at it, compared to the general population. However, as you pointed out, many do not use or even have the language to express their rejection in a Bayesian way, as "I have estimated the probability of this hypothesis being true, and it is too low to care." I suspect that they instinctively map intelligence explosion into the Pascal mugging reference class, together with perpetual motion, cold fusion and religion, but verbalize it in the standard Popperian language instead. After all, that is how they would explain why they don't pay attention to (someone else's) religion: there is no way to falsify it. I suspect that any further discussion tends to reveal a more sensible approach.

Replies from: lukeprog
comment by lukeprog · 2013-09-08T23:13:38.003Z · LW(p) · GW(p)

Yeah. The problem is that most scientists seem to still be taught from textbooks that use a Popperian paradigm, or at least Popperian language, and they aren't necessarily taught probability theory very thoroughly; they're used to publishing papers that use p-value science even though they kinda know it's wrong, etc.

So maybe if we had an extended discussion about philosophy of science, they'd retract their Popperian statements and reformulate them to say something kinda related but less wrong. Maybe they're just sloppy with their philosophy of science when talking about subjects they don't put much credence in.

This does make it difficult to measure the degree to which, as Eliezer puts it, "the world is mad." Maybe the world looks mad when you take scientists' dinner party statements at face value, but looks less mad when you watch them try to solve problems they care about. On the other hand, even when looking at work they seem to care about, it often doesn't look like scientists know the basics of philosophy of science. Then again, maybe it's just an incentives problem. E.g. maybe the scientist's field basically requires you to publish with p-values, even if the scientists themselves are secretly Bayesians.

Replies from: EHeller, Mayo, jsteinhardt
comment by EHeller · 2013-09-08T23:31:57.225Z · LW(p) · GW(p)

The problem is that most scientists seem to still be taught from textbooks that use a Popperian paradigm, or at least Popperian language

I'm willing to bet most scientists aren't taught these things formally at all. I never was. You pick it up out of the cultural zeitgeist, and you develop a cultural jargon. And then sometimes people who HAVE formally studied philosophy of science try to map that jargon back to formal concepts, and I'm not sure the mapping is that accurate.

they're used to publishing papers that use p-value science even though they kinda know it's wrong, etc.

I think 'wrong' is too strong here. It's good for some things, bad for others. Look at particle-accelerator experiments: frequentist statistics are the obvious choice because the collider essentially runs the same experiment 600 million times every second, and p-values work well to separate signal from a null hypothesis of 'just background'.

comment by Mayo · 2013-09-29T06:52:12.384Z · LW(p) · GW(p)

If there was a genuine philosophy-of-science illumination, it would be clear that, despite the shortcomings of the logical empiricist setting in which Popper found himself, there is much more of value in a sophisticated Popperian methodological falsificationism than in Bayesianism. If scientists were interested in the most probable hypotheses, they would stay as close to the data as possible. But in fact they want interesting, informative, risky theories and genuine explanations. This goes against the Bayesian probabilist ideal. Moreover, you cannot falsify with Bayes' theorem, so you'd have to start out with an exhaustive set of hypotheses that could account for the data (already silly), and then you'd never get rid of them; they could only be probabilistically disconfirmed.

Replies from: Cyan
comment by Cyan · 2013-09-30T00:40:44.137Z · LW(p) · GW(p)

Strictly speaking, one can't falsify with any method outside of deductive logic -- even your own Severity Principle only claims to warrant hypotheses, not falsify their negations. Bayesian statistical analysis is just the same in this regard.

A Bayesian analysis doesn't need to start with an exhaustive set of hypotheses to justify discarding some of them. Suppose we have a set of mutually exclusive but not exhaustive hypotheses. The posterior probability of a hypothesis under the assumption that the set is exhaustive is an upper bound for its posterior probability in an analysis with an expanded set of hypotheses. A more complete set can only make a hypothesis less likely, so if its posterior probability is already so low that it would have a negligible effect on subsequent calculations, it can safely be discarded.
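
A toy numerical sketch of that upper-bound argument (the weights are made up, standing in for prior times likelihood):

    # Renormalizing over a larger hypothesis set can only shrink each
    # original hypothesis's posterior.
    weights = {"H1": 0.6, "H2": 0.3, "H3": 0.1}   # hypothetical unnormalized weights
    total = sum(weights.values())
    print(weights["H3"] / total)  # 0.1: H3's posterior if {H1,H2,H3} were exhaustive

    weights["H4"] = 0.5           # expand the hypothesis set
    total = sum(weights.values())
    print(weights["H3"] / total)  # ~0.067: can only go down, never up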

But in fact they want interesting, informative, risky theories and genuine explanations. This goes against the Bayesian probabilist ideal.

I'm a Bayesian probabilist, and it doesn't go against my ideal. I think you're attacking philosophical subjective Bayesianism, but I don't think that's the kind of Bayesianism to which lukeprog is referring.

comment by jsteinhardt · 2013-09-15T01:34:42.819Z · LW(p) · GW(p)

For what it's worth, I understand well the arguments in favor of Bayes, yet I don't think that scientific results should be published in a Bayesian manner. This is not to say that I don't think that frequentist statistics is frequently and grossly mis-used by many scientists, but I don't think Bayes is the solution to this. In fact, many of the problems with how statistics is used, such as implicitly performing many multiple comparisons without controlling for this, would be just as large of problems with Bayesian statistics.

Either the evidence is strong enough to overwhelm any reasonable prior, in which case frequentist statistics will detect the result just fine; or else the evidence is not so strong, in which case you are reduced to arguing about priors, which seems bad if the goal is to create a societal construct that reliably uncovers useful new truths.

Replies from: lukeprog, Mayo
comment by lukeprog · 2013-09-15T01:42:51.530Z · LW(p) · GW(p)

But why not share likelihood ratios instead of posteriors, and then choose whether or not you also want to argue very much (in your scientific paper) about the priors?
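
In the odds form of Bayes' theorem, posterior odds equal prior odds times the likelihood ratio, so a paper could report just the ratio and let each reader multiply in their own prior. A minimal sketch with hypothetical numbers:

    # Report a likelihood ratio; readers supply their own priors.
    likelihood_ratio = 8.0  # P(data | H1) / P(data | H0), the published quantity

    for prior_odds in (0.1, 1.0, 10.0):   # three readers with different priors
        posterior_odds = prior_odds * likelihood_ratio
        posterior_prob = posterior_odds / (1 + posterior_odds)
        print(prior_odds, round(posterior_prob, 2))   # 0.44, 0.89, 0.99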

Replies from: private_messaging
comment by private_messaging · 2013-09-15T02:01:57.496Z · LW(p) · GW(p)

What do you think "p<0.05" means?

Replies from: lukeprog, wedrifid
comment by lukeprog · 2013-09-15T02:30:05.078Z · LW(p) · GW(p)

The p-value is "the probability of obtaining a test statistic at least as extreme as the one that was actually observed, assuming that the null hypothesis is true." It is often misinterpreted, e.g. by 68 out of 70 academic psychologists studied by Oakes (1986, pp. 79-82).

The p-value is not the same as the Bayes factor:

The Bayes factor differs in many ways from a P value. First, the Bayes factor is not a probability itself but a ratio of probabilities, and it can vary from zero to infinity. It requires two hypotheses, making it clear that for evidence to be against the null hypothesis, it must be for some alternative. Second, the Bayes factor depends on the probability of the observed data alone, not including unobserved “long run” results that are part of the P value calculation. Thus, factors unrelated to the data that affect the P value, such as why an experiment was stopped, do not affect the Bayes factor...
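
To make the contrast concrete, a toy binomial sketch (a hypothetical experiment of 60 heads in 100 flips; the point alternative p=0.6 is an arbitrary choice, not something from the quoted article):

    # p-value vs. Bayes factor on the same data.
    from scipy.stats import binom

    n, k = 100, 60

    # One-sided p-value: P(60 or more heads | fair coin). Sums over
    # unobserved, more extreme outcomes.
    print(binom.sf(k - 1, n, 0.5))                      # ~0.028

    # Bayes factor: probability of exactly the observed data under the
    # alternative vs. under the null. Uses only the observed data.
    print(binom.pmf(k, n, 0.6) / binom.pmf(k, n, 0.5))  # ~7.5 in favor of p=0.6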

Replies from: private_messaging
comment by private_messaging · 2013-09-15T07:16:45.769Z · LW(p) · GW(p)

I wasn't saying it was the same; my point is that reporting the data on which one can update in a Bayesian manner is the norm. (As is updating; e.g. if the null hypothesis is really plausible, then at p<0.05 nobody's really going to believe you anyway.)

With regards to the Bayes factor: the issue is that there is a whole continuum of alternative hypotheses. There's no single factor among them that you can report which could be used for combining evidence in favour of quantitatively different alternative "most supported" hypotheses. The case of the null hypothesis (versus all possible other hypotheses) is special in that regard, and so that is what a number is reported for.

With regards to the case of the ratio between evidence for two point hypotheses, as discussed in the article you link: Neyman-Pearson lemma is quite old.

With regards to the cause of experiment termination: you have to account somewhere for the fact that termination of the experiment has the potential to cherry-pick and thus bias the resulting data (if that is what he's talking about; it's not clear to me what his point is, and it seems to me that he misunderstood the issue).

Furthermore, the relevant mathematics probably originates from particle physics, where it serves a different role: a threshold on the p-value is there to quantify the worst-case likelihood that your experimental apparatus will send people on a wild goose chase. It has more to do with the value of the experiment than with probabilities, given that priors for hypotheses in physics would require a well-defined hypothesis space (which is absent). And given that working on the production of stronger evidence is a more effective way to spend your time there than any debating of the priors. And given that the p-value related issues can in any case be utterly dwarfed by systematic errors and problems with the experimental set-up, the probability of which changes after publication as other physicists do or do not point towards potential problems in the set-up.

A side note: there's a value-of-information issue here. I know that if I were to discuss Christian theology with you (not atheism, but the fine points of the life of Jesus, that sort of thing, which I never really had the time or inclination to look into), the expected value of the information to you would be quite low, because most of the time that I spent practising mathematics and such, you spent on the former. It would especially be the case if you entered some sort of very popular contest in any way requiring theological knowledge, and scored #10 of all time on a metric that someone else saw fit to choose in advance. The same goes for discussions of mathematics, but the other way around. This is also the case for any experts you are talking to. They're rather rational people; that's how they got to have impressive accomplishments, and a lot of practical rationality is about ignoring low-expected-value pursuits. The Einsteins and Fermis of this world do not get to accomplish so much on so many different occasions without great innate abilities for that kind of thing. They also hold teaching positions, and it is more productive for them to correct misconceptions in the eager students who are up to speed on the fundamental knowledge.

(with #10 I'm alluding to this result of mine).

Replies from: Vaniver
comment by Vaniver · 2013-09-30T15:00:55.428Z · LW(p) · GW(p)

With regards to the Bayes factor: the issue is that there is a whole continuum of alternative hypotheses. There's no single factor among them that you can report which could be used for combining evidence in favour of quantitatively different alternative "most supported" hypotheses. The case of the null hypothesis (versus all possible other hypotheses) is special in that regard, and so that is what a number is reported for.

Mmm. I've read a lot of dumb papers where they show that their model beats a totally stupid model, rather than that their model beats the best model in the literature. In algorithm design fields, you generally need to publish a use case where your implementation of your new algorithm beats your implementation of the best other algorithms for that problem in the field (which is still gameable, because you implement both algorithms, but harder).

Thinking about the academic controversy I learned about most recently, it seems like if authors had to say "this evidence is n:1 support for our hypothesis over the hypothesis proposed in X" instead of "the evidence is n:1 support for our hypothesis over there being nothing going on" they would have a much harder time writing papers that don't advance the literature, and you might see more scientists being convinced of other hypotheses because they have to implement them personally.

Replies from: private_messaging
comment by private_messaging · 2013-10-02T11:47:33.365Z · LW(p) · GW(p)

In physics a new theory has to be supported over the other theories, for example. What you're talking about would have to be something that happens in sciences that primarily find weak effects in the noise and confounders anyway, i.e. psychology, sociology, and the like.

I think you need to specifically mention what fields you are talking about, because not everyone knows that issues differ between fields.

With regards to the malemployment debate you link, there's a possibility that many of the college graduates have not actually learned anything that they could utilize in the first place, and consequently there exists nothing worth describing as 'malemployment'. Is that the alternate model you are thinking of?

Replies from: Vaniver
comment by Vaniver · 2013-10-02T17:03:56.344Z · LW(p) · GW(p)

What you're talking about would have to be something that happens in sciences that primarily find weak effects in the noise and confounders anyway, i.e. psychology, sociology, and the like.

Most of the examples I can think of come from those fields. There are a few papers in harder sciences which people in the field don't take seriously because they don't address the other prominent theories, but which people outside of the field think look serious because they're not aware that the paper ignores other theories.

With regards to the malemployment debate you link, there's a possibility that many of the college graduates have not actually learned anything that they could utilize in the first place, and consequently there exists nothing worth describing as 'malemployment'. Is that the alternate model you are thinking of?

I was thinking mostly that it looked like the two authors were talking past one another. Group A says "hey, there's heterogeneity in wages which is predicted by malemployment" whereas Group B says "but average wages are high, so there can't be malemployment," which ignores the heterogeneity. I do think that a signalling model of education (students have different levels of talent, and more talented students tend to go for more education, but education has little direct effect on talent) explains the heterogeneity and the wage differentials, and it would be nice to see both groups address that as well.

Replies from: private_messaging
comment by private_messaging · 2013-10-02T19:41:55.456Z · LW(p) · GW(p)

I do think that a signalling model of education

Once again, which education? Clearly, a training course for, say, a truck driver, is not signalling, but exactly what it says on the can: a training course for driving trucks. A language course, likewise. The same goes for mathematics, hard sciences, and engineering disciplines. Which may perhaps be likened to the necessity of training for a Formula 1 driver, irrespective of the level of innate talent (within the human range of ability).

Now, if that were within the realm of actual science, something like this "signalling model of education" would be immediately invalidated by the truck-driving example. No excuses. One can mend it into a "signalling model of some components of education in soft sciences". But there's a big problem for the "signalling" model there: a PhD in those fields in particular is a poorer indicator of ability, innate and learned, than in technical fields (lower average IQs, etc.), and signals very little.

edit: by the way, innate 'talent' does not in any way exclude the importance of learning; some recent research indicates that highly intelligent individuals retain neuroplasticity for a longer time, which lets them acquire more skills. Which would, by the way, explain why child prodigies fairly often become very mediocre adults, especially whenever lack of learning is involved.

Replies from: yli, Vaniver
comment by yli · 2013-11-16T19:50:35.522Z · LW(p) · GW(p)

Clearly, a training course for, say, a truck driver, is not signalling, but exactly what it says on the can

If there was a glut of trained truck drivers on the market and someone needed to recruit new crane operators, they could choose to recruit only truck drivers because having passed the truck driving course would signal that you can learn to operate heavy machinery reliably, even if nothing you learned in the truck driving course was of any value in operating cranes.

Replies from: private_messaging
comment by private_messaging · 2013-11-17T08:36:59.412Z · LW(p) · GW(p)

OSHA rules would still require that the crane operator passes the crane related training.

The term 'signalling' seems to have heavily drifted and mutated online to near meaninglessness.

If someone attends a truck-driving course with the intention of driving trucks - or a math course with the intention of (a) learning math and (b) improving their thinking skills - that's not signalling behaviour.

And conversely, if someone wants to demonstrate some innate or pre-existing quality (such as mathematical ability), they participate in a relevant contest, and this is signalling.

Now, there may well be a lot of people who start in an educated family and then sort of drift through life conforming to parental wishes, and end up obtaining, say, a physics PhD. And then they go into economics or something similar, where they do not utilize their training in much any way. One could deduce about these people that they are more innately intelligent than average, wealthier than average, etc., and that they learned some thinking skills. The former two things are much more reliably signalled with an IQ test and a statement from the IRS.

Replies from: Richard_Kennaway, yli, Jiro
comment by Richard_Kennaway · 2013-11-19T14:08:29.099Z · LW(p) · GW(p)

The term 'signalling' seems to have heavily drifted and mutated online to near meaninglessness.

I think it's more that the concept entered LessWrong via Robin Hanson's expansion of the concept into his "Homo Hypocritus" theory. For examples, see every post on Overcoming Bias with a title of the form "X is not about Y". This theory sees all communicative acts as signalling, that is to say, undertaken with the purpose of persuading someone that one possesses some desirable characteristic. To pass a mathematics test is just as much a signal of mathematical ability as to hang out with mathematicians and adopt their jargon.

There is something that distinguishes actual performance from other signals of ability: unforgeability. By doing something that only a mathematician could do, one sends a more effective signal -- that is, one more likely to be believed -- that one can do mathematics.

This is a radical re-understanding of communication. On this view, not one honest thing has ever been said, not one honest thing done, by anyone, ever. "Honesty" is not a part of how brains physically work. Whether we tell a truth or tell a lie, truth-telling is never part of our purpose, but no more than a means to the end of persuading people of our qualities. It is to be selected as a means only so far as it may happen to be more effective than the alternatives in a given situation. The concept of honesty as a virtue is merely part of such signalling.

The purpose of signalling desirable qualities is to acquire status, the universal currency of social interaction. Status is what determines your access to reproduction. Those who signal status best get to reproduce most. This is what our brains have evolved to do, for as long as they have been able to do this at all. Every creature bigger than an earthworm does it.

Furthermore, all acts are communicative acts, and therefore all acts are signalling. Everything we do in front of someone else is signalling. Even in solitude we are signalling to ourselves, for we can more effectively utter false signals if we believe them. Every thought that goes through your head is self-signalling, including whatever thoughts you have while reading this. It's all signalling.

Such, at least, is the theory.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-11-19T15:16:02.592Z · LW(p) · GW(p)

...which means that describing an act as "signalling" is basically meaningless, insofar as it fails to ascribe to that act a property that distinguishes it from other acts. It's like describing my lunch as "material". True, yes, but uninteresting except as a launching point to distinguish among expensive and cheap signals, forgeable and unforgeable signals, purely external signals and self-signalling, etc.

That said, in most contexts when a behavior is described as "signalling" without further qualification I generally understand the speaker to be referring more specifically to cheap signalling which is reliably treated as though it were a more expensive signal. Hanging out with mathematicians and adopting their jargon without really understanding it usually falls in this category; completing a PhD program in mathematics usually doesn't (though I could construct contrived exceptions in both cases).

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-11-19T16:26:55.089Z · LW(p) · GW(p)

...which means that describing an act as "signalling" is basically meaningless, insofar as it fails to ascribe to that act a property that distinguishes it from other acts.

That a proposition has the form "every X is a Y" does not make it uninteresting. For example: All matter is made of atoms. All humans are descended from apes. God made everything. Every prime number is a sum of three squares. Everyone requires oxygen to live. True or false, these are all meaningful, interesting statements. "All acts are acts of signalling" is similarly so.

That said, in most contexts when a behavior is described as "signalling" without further qualification I generally understand the speaker to be referring more specifically to cheap signalling which is reliably treated as though it were a more expensive signal.

Yes, this subtext is present whenever the concept of signalling is introduced (another example of an "all X is Y" which is nevertheless a meaningful observation).

Replies from: private_messaging, TheOtherDave
comment by private_messaging · 2013-11-20T12:02:03.506Z · LW(p) · GW(p)

"All acts are acts of signalling" is similarly so.

Not really comparable to matter being made of atoms, though, as "signalling" only establishes a tautology (like all communication is communication).

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-11-20T13:35:01.868Z · LW(p) · GW(p)

"Signalling", in this context, means "acts undertaken primarily for the purpose of gaining status by persuading others of one's status-worthy qualities". As such, "All communication is signalling" is not a tautology, but an empirical claim.

Replies from: private_messaging
comment by private_messaging · 2013-11-21T09:11:34.663Z · LW(p) · GW(p)

acts undertaken primarily for the purpose of gaining status by persuading others of one's status-worthy qualities

Many of those acts can be undertaken without having any such qualities, though.

I think Hanson's ideas are far more applicable to Hanson's own personal behaviour than to the world in general.

In particular, what he's trying to do with his "Signalling theory" is not to tell us anything about human behaviour, but instead to imply that he is neglecting the necessity of actual training, which would be consistent with him having some immense innate abilities but not trying hard.

Meanwhile out there in the real world, if you specifically want to get a job that requires you to speak Chinese, you are going to have to attend a course in Chinese, to actually learn Chinese. Unless you are actually native Chinese in which case you won't have to attend that course. Which applies to most disciplines, with perhaps other disciplines, for which the skill may not even exist, monkey-style imitating the rest.

Replies from: TheOtherDave, Richard_Kennaway
comment by TheOtherDave · 2013-11-21T14:18:43.529Z · LW(p) · GW(p)

Meanwhile out there in the real world, if you specifically want to get a job that requires you to speak Chinese, you are going to have to attend a course in Chinese, to actually learn Chinese. Unless you are actually native Chinese in which case you won't have to attend that course.

Though depending on the situation I might still find that it's useful to attend the course, so I can get certification as having gone through the course, which in the real world might be of more value than speaking Chinese without that certification.

And these sorts of certification-based (as opposed to skill-based) considerations apply to most disciplines as well.

And, of course, the fact that I'm applying for this job, which requires Chinese, is itself a choice I'm making, and we can ask why I'm making that choice, and to what extent my motives are status-seeking vs. truth-seeking vs. improvements-in-the-world-seeking vs. something else.

Conversely, if I am entirely uninterested in certification and I really am solely interested in learning Chinese for the intrinsic value of learning Chinese, I might find it's more useful not to attend a course, but instead study Chinese on my own (e.g. via online materials and spending my afternoons playing Mahjong in Chinatown).

Replies from: private_messaging
comment by private_messaging · 2013-11-21T19:07:03.275Z · LW(p) · GW(p)

If you already speak Chinese, you'd just need to pass an exam, no course attached, and if you are a native speaker, you'd be correctly presumed to speak it better than someone who spent many years on a course, lived in China, etc.

comment by Richard_Kennaway · 2013-11-21T11:28:49.286Z · LW(p) · GW(p)

Many of those acts can be undertaken without having any such qualities, though.

I agree. I'm not defending Hanson's theory, just saying what it is. Perhaps in more starkly extreme terms than he might, but I have never seen him put any limits on the concept. This, I am suggesting, is the origin of the broad application of the concept on LessWrong.

Meanwhile out there in the real world, if you specifically want to get a job that requires you to speak Chinese, you are going to have to attend a course in Chinese, to actually learn Chinese.

Quite so. But you are thinking like an engineer -- that is, you are thinking in terms of actually getting things done. This is the right way to think, but it is not the way of the Hansonian fundamentalist (an imaginary figure that appears in my head when I contemplate signalling theory, and should not be confused with Robin Hanson himself).

The Hansonian fundamentalist would respond that it's still all signalling. The only thing that he aims at getting done is the acquisition of status for himself. All else is means. The role that the actual ability to speak Chinese plays is that of an unforgeable signal, a concept which replaces that of truth, as far as what goes on inside our heads is concerned. Tarski's definition of truth stands, but the Litany of Tarski does not. It is replaced by, "If X is true, I desire whatever attitude to X will maximise my status; if X is false, I desire whatever attitude to X will maximise my status. Let me not become attached to anything but status."

If the job really cannot be done without good spoken Chinese, then to keep that job, you will need that ability. But if in the particular situation you correctly judged that you could get by with English and the help of a Chinese secretary, busk your way through the training course, and pull strings to keep your job if you run into difficulties, then that would be Homo Hypocritus' choice. Homo Hypocritus does whatever will work best to convince his boss of his worthy qualities, with what lawyers call reckless disregard for the truth. Truth is never a consideration, except as a contingent means to status.

ETA:

I think Hanson's ideas are far more applicable to Hanson's own personal behaviour than to the world in general.

In particular, what he's trying to do with his "Signalling theory" is not to tell us anything about human behaviour, but instead to imply that he is neglecting the necessity of actual training, which would be consistent with him having some immense innate abilities but not trying hard.

He does have tenure at a reputable American university, which I think is not a prize handed out cheaply. OTOH, I am reminded of a cartoon whose caption is "Mad? Of course I'm mad! But I have tenure!"

Replies from: private_messaging
comment by private_messaging · 2013-11-21T15:28:06.544Z · LW(p) · GW(p)

If the job really cannot be done without good spoken Chinese, then to keep that job, you will need that ability. But if in the particular situation you correctly judged that you could get by with English and the help of a Chinese secretary, busk your way through the training course, and pull strings to keep your job if you run into difficulties, then that would be Homo Hypocritus' choice. Homo Hypocritus does whatever will work best to convince his boss of his worthy qualities, with what lawyers call reckless disregard for the truth. Truth is never a consideration, except as a contingent means to status.

At that point we aren't really talking about signalling innate qualities; we're talking of forgeries and pretending. Those only work at all because there are people who are not pretending.

A fly that looks like a wasp is only scary because there are wasps with venom that actually works. And those wasps have venom so potent because they actually use it to defend the hives. They don't merely have venom to be worthy of having bright colours. Venom works directly, not through the bright colour.

One could of course forge the signals and then convince themselves that they are honestly signalling the ability to forge signals... but at the end of the day, this fly that looks like a wasp, it is just a regular fly, and it only gets an advantage from us not being fully certain that it is a regular fly. And the flies that look like wasps are not even close to displacing the other flies - there's an upper limit on those.

He does have tenure at a reputable American university, which I think is not a prize handed out cheaply. OTOH, I am reminded of a cartoon whose caption is "Mad? Of course I'm mad! But I have tenure!"

Well, tenure is an example of status... and in his current field there may not be as many equivalents of "venom actually working" as in other fields, so it looks like it is all about colours.

comment by TheOtherDave · 2013-11-19T16:29:19.177Z · LW(p) · GW(p)

Yup, that's true.

comment by yli · 2013-11-18T21:36:04.365Z · LW(p) · GW(p)

You can say that whether it's signaling is determined by the motivations of the person taking the course, or the motivations of the people offering the course, or the motivations of employers hiring graduates of the course. And you can define motivation as the conscious reasons people have in their minds, or as the answer to the question of whether the person would still have taken the course if it was otherwise identical but provided no signaling benefit. And there can be multiple motivations, so you can say that something is signaling if signaling is one of the motivations, or that it's signaling only if signaling is the only motivation.

If you make the right selections from the previous, you can argue of almost anything that it's not signaling, or that it is, for that matter.

if someone wants to demonstrate some innate or pre-existing quality (such as mathematical ability), they participate in a relevant contest and this is signalling.

If I wanted to defend competitions from accusations of signaling like you defended education, I could easily come up with lots of arguments: people doing them to challenge themselves, to experience teamwork, to test their limits and to meet like-minded people; the fact that lots of people participate in competitions even though they know they don't have a serious chance of coming out on top; etc.

OSHA rules would still require that the crane operator passes the crane related training.

(Sure, but I meant that only truck drivers would be accepted into the crane operator training in the first place, because they would be more likely to pass it and perform well afterward.)

comment by Jiro · 2013-11-18T04:46:30.043Z · LW(p) · GW(p)

And conversely, if someone wants to demonstrate some innate or pre-existing quality (such as mathematical ability), they participate in a relevant contest, and this is signalling.

Given the way the term is actually used, I wouldn't call that "signalling" because "signalling" normally refers to demonstrating that you have some trait by doing something other than performing the trait itself (if it's capable of being performed). You can signal your wealth by buying expensive jewels, but you can't signal your ability to buy expensive jewels by buying expensive jewels. And taking a math test to let people know that you're good at math is not signalling, but going to a mathematicians' club to let people know that you're good at math may be signalling.

Replies from: private_messaging
comment by private_messaging · 2013-11-19T12:36:11.122Z · LW(p) · GW(p)

Given the way the term is actually used, I wouldn't call that "signalling" because "signalling" normally refers to demonstrating that you have some trait by doing something other than performing the trait itself

This seem to be the meaning common on these boards, yes.

And taking a math test to let people know that you're good at math is not signalling, but going to a mathematicians' club to let people know that you're good at math may be signalling.

Going to a mathematicians' club (and the like) is something that you can do even if you aren't any good at math. And it only works as a "signal" of being good at math because most people go to that club for other reasons (reasons that depend on being good at math).

Signalling was supposed to be about credibly conveying information to another party whenever there is a motivation for you to lie.

It seems that instead "signalling" is used to refer to behaviours portrayed in the "Flowers for Charlie" episode of "It's Always Sunny in Philadelphia".

comment by Vaniver · 2013-10-02T19:50:15.226Z · LW(p) · GW(p)

Once again, which education?

Generally, the signalling model of education refers to the wage premium paid to holders of associate's, bachelor's, master's, and doctoral degrees, often averaged across all majors. (There might be research into signalling with regards to vocational degrees, but I think most people that look into that are more interested in licensing / scarcity effects.)

Replies from: private_messaging
comment by private_messaging · 2013-10-02T20:18:34.796Z · LW(p) · GW(p)

Well, in the hard science majors there's considerable training, which is necessary for a large fraction of occupations. Granted, a physics PhD who became an economist may have been signalling, but that is far from the norm. What is the norm is that the vast majority of individuals employed as physics PhDs would be unable to perform some parts of their work if they hadn't undergone the relevant training, just as you wouldn't have been able to speak a foreign language or drive a car without training.

comment by wedrifid · 2013-09-15T02:29:40.092Z · LW(p) · GW(p)

(Your point is well taken but...)

What do you think "p<0.05" means?

Approximately it means "I have a financial or prestige incentive to find a relationship and I work in a field that doesn't take its science seriously".

Replies from: EHeller
comment by EHeller · 2013-09-15T02:40:08.614Z · LW(p) · GW(p)

Or, for instance in the case of particle physics, it means the probability you are just looking at background. You are painting with an overly broad brush. Sure, p-values are overused, but there are situations where the p-value IS the right thing to look at.

Replies from: RobbBB, private_messaging, wedrifid
comment by Rob Bensinger (RobbBB) · 2013-09-15T02:48:43.313Z · LW(p) · GW(p)

Or, for instance in the case of particle physics, it means the probability you are just looking at background.

No, it's the probability that you'd see a result that extreme (or more extreme) conditioned on just looking at background. Frequentists can't evaluate unconditional probabilities, and 'probability that I see noise given that I see X' (if that's what you had in mind) is quite different from 'probability that I see X given that I see noise'.

(Incidentally, the fact that this kind of conflation is so common is one of the strongest arguments against defaulting to p-values.)
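
A toy base-rate calculation showing the two quantities coming apart (all numbers hypothetical):

    # P(data | noise) is not P(noise | data).
    p_data_given_noise  = 0.01   # p-value-like: chance of so extreme a result from background
    p_data_given_signal = 0.80   # chance of such a result if a real effect exists
    p_signal            = 0.001  # prior: real effects this large are rare

    p_data = (p_data_given_signal * p_signal
              + p_data_given_noise * (1 - p_signal))
    print(p_data_given_noise)                            # 0.01
    print(p_data_given_noise * (1 - p_signal) / p_data)  # ~0.93: probably still noise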

Replies from: private_messaging
comment by private_messaging · 2013-09-15T09:00:20.499Z · LW(p) · GW(p)

Keep in mind that he and other physicists do not generally consider "probability that it is noise, given an observation X" to even be a statement about the world (it's a statement about one's personal beliefs, after all, one's confidence in the engineering of an experimental apparatus, and so on and so forth), so they are perhaps conflating much less than it would appear under a very literal reading. This is why I like the idea of using the word "plausibility" to describe beliefs, and "probability" to describe things such as the probability of an event rigorously calculated using a specific model.

edit: note by the way that physicists can consider a very strong result - e.g. those superluminal neutrinos - extremely implausible on the basis of a prior, and correctly conclude that there is most likely a problem with their machinery, on the basis of the ratio between the likelihood of seeing that via noise and the likelihood of seeing it via a hardware fault. How is that even possible without actually performing Bayesian inference?

edit2: also note that there is a fundamental difference, in that with plausibilities you have to be careful to avoid vicious cycles in the collective reasoning. Plausibility, as needed for combining it with other plausibilities, is not just a real number; it is a real number with an attached description of how exactly it was arrived at, so that evidence will not be double-counted. The number by itself is of little use for communication, for this reason.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-09-18T21:57:24.428Z · LW(p) · GW(p)

Keep in mind that he and other physicists do not generally consider "probability that it is noise, given an observation X" to even be a statement about the world (it's a statement about one's personal beliefs, after all, one's confidence in the engineering of an experimental apparatus, and so on and so forth)

It's about the probability that there is an effect which will cause this deviation from background to become more and more supported by additional data rather than simply regress to the mean (or with your wording, the other way around). That seems fairly based-in-the-world to me.

Replies from: private_messaging
comment by private_messaging · 2013-09-30T13:32:43.453Z · LW(p) · GW(p)

The actual reality either has this effect, or it does not. You can quantify your uncertainty with a number, but that would require you to assign some a priori probability, which you'll have to choose arbitrarily.

You can contrast this with a die roll, which scrambles the initial phase space, mapping (approximately, but very close to) 1/6 of any physically small region of it to each number on the die, the 1/6 being an objective property of how symmetrical dice bounce.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-09-30T13:36:32.117Z · LW(p) · GW(p)

Such statements are about the world, in a framework of probability.

Replies from: private_messaging
comment by private_messaging · 2013-09-30T13:42:56.840Z · LW(p) · GW(p)

They are specific to your idiosyncratic choice of prior; I am not interested in hearing them (in the context of science), unlike statements about the world.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-09-30T16:35:06.979Z · LW(p) · GW(p)

That knowledge is subjective doesn't mean that such statements are not about the world. Furthermore, such statements can (and sometimes do) have arguments for the priors...

By this standard, any 'statement about the world' ignores all of the uncertainty that actually applies. Science doesn't require you to sweep your ignorance under the rug.

comment by private_messaging · 2013-09-15T08:29:31.994Z · LW(p) · GW(p)

Or, for instance in the case of particle physics, it means the probability you are just looking at background.

Well, technically, the probability that you will end up with such a result given that you are just looking at background. I.e. the probability that, after the experiment, you will end up looking at background thinking it is not background*, assuming it is all background.

* if it is used as a threshold for such thinking

It's really awkward to describe that in English, though, and I just assume that this is what you mean (while Bayesianists assume that you are conflating the two).

comment by wedrifid · 2013-09-15T02:50:27.307Z · LW(p) · GW(p)

Or, for instance in the case of particle physics, it means the probability you are just looking at background. You are painting with an overly broad brush. Sure, p-values are overused, but there are situations where the p-value IS the right thing to look at.

Note that the 'brush' I am using is essentially painting the picture "0.05 is for sissies", not a rejection of p-values (which I may do elsewhere but with less contempt). The physics reference was to illustrate the contrast of standards between fields and why physics papers can be trusted more than medical papers.

Replies from: Kawoomba
comment by Kawoomba · 2013-09-15T05:49:07.064Z · LW(p) · GW(p)

That's what multiple testing correction is for.

With the thresholds from physics, we'd still be figuring out if penicillin really, actually kills certain bacteria (somewhat hyperbolic, 5 sigma ~ 1 in 3.5 million).

0.05 is a practical tradeoff; for supposed Bayesians, it is still much too strict, not too lax.
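
(For reference, the 5-sigma discovery convention mentioned above corresponds to a one-sided Gaussian tail probability of about 1 in 3.5 million; a quick check:)

    # One-sided tail probability at the 5-sigma threshold.
    from scipy.stats import norm

    p = norm.sf(5)    # survival function: P(Z > 5)
    print(p)          # ~2.87e-7
    print(1 / p)      # ~3.5 million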

Replies from: private_messaging, somervta, wedrifid
comment by private_messaging · 2013-09-15T07:36:03.747Z · LW(p) · GW(p)

I for one think that 0.05 is way too lax (other than for the purposes of seeing whether it is worth conducting a bigger study, and other such value-of-information related uses), and 0.05 results require a rather carefully constructed meta-study to interpret correctly. Because a selection factor of 20 is well within the range attainable by dodgy practices that are almost impossible to prevent, and even in the absence of dodgy practices, there is selection due to you being more likely to hear of something interesting.

I can only imagine considering it too strict if I were unaware of those issues or their importance (Bayesianism or not).

This goes much more so for weaker forms of information, such as "Here's a plausible-looking speculation I came up with". To get anywhere with that kind of stuff, one would need to somehow account for the preference towards specific lines of speculation.

edit: plus, effective cures in medicine are the ones supported by very, very strong evidence, on par with particle physics (e.g. the same penicillin killing bacteria; you have really big sample sizes when you are dealing with bacteria). The weak stuff: antidepressants for which we don't know whether they lower or raise the risk of suicide, and for which we are uncertain whether the effect is an artefact of using, in any way whatsoever, a depression score that includes weight loss and insomnia as symptoms when testing a drug that causes weight gain and sleepiness.

I think it is mostly because the priors for finding a strongly effective drug are very low, so when large p-values are involved, you can only find low-effect, near-placebo drugs.

edit2: Another issue is that many studies are plagued by at least some un-blinding that can modulate the placebo effect. So I think a threshold on the strength of the effect (not just the p-value) is also necessary; things that are within the potential systematic-error margin from the placebo effect may mostly be a result of systematic error.

edit3: By the way, note that for a study of the same size, a stronger effect will result in a much lower p-value, and so a higher standard on p-values does not much interfere with the detection of strong effects. When you are testing an antibiotic, the chance probability of one bacterium dying in some short timespan may be 0.1 without treatment, and 0.9999999... with the antibiotic at a fairly high concentration. Needless to say, a dozen bacteria put you far beyond the standards from particle physics, and a whole poisoned petri dish makes the point moot, with all the remaining uncertainty coming from the possibility of having killed the bacteria in some other way.
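
A quick check of that arithmetic, using the comment's own hypothetical numbers:

    # With a strong effect, even a dozen observations beat the 5-sigma standard.
    from scipy.stats import norm

    p_death_null = 0.1            # chance a bacterium dies anyway (hypothetical)
    p_value = p_death_null ** 12  # P(all 12 die | antibiotic does nothing) = 1e-12

    print(p_value < norm.sf(5))   # True: 1e-12 is far below the ~2.9e-7 threshold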

comment by somervta · 2013-09-15T06:28:47.811Z · LW(p) · GW(p)

It probably is too lax. I'd settle for 0.01, but 0.005 or 0.001 would be better for most applications (i.e. where you can get it). We have the whole range of numbers between 1 in 20 and 1 in 3.5 million to choose from, and I'd like to see an actual argument before concluding that the number we picked mostly by historical accident was actually right all along. Still, a big part of the problem is the 'p-value' itself, not the number coming after it. Apart from the statistical issues, it's far too often mistaken for something else, as RobbBB has pointed out elsewhere in this thread.

comment by wedrifid · 2013-09-15T07:04:00.126Z · LW(p) · GW(p)

0.05 is a practical tradeoff; for supposed Bayesians, it is still much too strict, not too lax.

No, it isn't. In an environment where the incentive to find a positive result is huge and there are all sorts of flexibilities in what particular results to report and which studies to abandon entirely, 0.05 leaves far too many false positives. It really does begin to look like this. I don't advocate using the standards from physics, but p=0.01 would be preferable.

Mind you, there is no particularly good reason why there should be an arbitrary p-value to equate with 'significance' anyhow.

Replies from: Kawoomba
comment by Kawoomba · 2013-09-15T07:33:39.492Z · LW(p) · GW(p)

Well, I would find it really awkward for a Bayesian to condone a modus operandi such as "The p-value of 0.15 indicates it is much more likely that there is a correlation than that the result is due to chance; however, for all intents and purposes the scientific community will treat the correlation as non-existent, since we're not sufficiently certain of it (even though it likely exists)".

Similar to having a choice of two roads, one of which leads into the forbidden forest, and then saying: "While I have decent evidence about which way goes where, because I'm not yet really certain, I'll just toss a coin." How many false choices would you make in life, using an approach like that? Neglecting your duty to update, so to speak. A p-value of 0.15 is important evidence. A p-value of 0.05 is even more important evidence. It should not be disregarded, regardless of the perverse incentives in publishing and the false binary choice (if (p<=0.05) correlation=true, else correlation=false). However, for the medical community, a p-value of 0.15 might as well be 0.45, for practical purposes. Not published = not published.

This is especially pertinent given that many important chance discoveries may only barely reach significance initially, not because their effect size is so small, but because in medicine sample sizes often are, with the accompanying low power for discovering new effects. When you're just a grad student with samples from e.g. 10 patients (no economic incentive yet, no large trial yet), unless you've found magical ambrosia, p-values will tend to be "insignificant", even for potentially significant breakthrough drugs.
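The low-power point can be made concrete (a sketch assuming a two-arm comparison and a standardized effect of d = 0.8, which is large by clinical standards; the numbers are illustrative):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
# chance that 10 patients per arm reach p < 0.05 given a genuinely large effect
print(analysis.power(effect_size=0.8, nobs1=10, alpha=0.05))         # ~0.39
# patients per arm needed for the conventional 80% power at the same effect
print(analysis.solve_power(effect_size=0.8, power=0.8, alpha=0.05))  # ~26
```

So even a large real effect is missed more often than not at that sample size.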

Better to check out a few false candidates too many than to falsely dismiss important new discoveries. Falsely claiming a promising new substance to have no significant effect due to p-value shenanigans is much worse than not having tested it in the first place, since the "this avenue was fruitless" conclusion can steer research in the wrong direction (information spreads around somewhat even when unpublished, "group abc had no luck with testing substances xyz").

IOW, I'm more concerned with false negatives (which may never be discovered as such; a lost chance) than with false positives (which get discovered later on, in larger follow-up trials, as being false positives). A sliding p-value scale may make sense, with initial screening tests having a lax barrier signifying "should be investigated further", and a stricter standard for the follow-up investigations.
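A small simulation of such a two-stage scheme (entirely illustrative assumptions: 1000 candidate substances, 10% of which carry a real moderate effect of d = 0.5; a lax p < 0.15 pilot with 10 patients per arm; then a strict p < 0.01 confirmatory trial with 200 per arm):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def trial_pvalue(effect, n_per_arm):
    control = rng.normal(0.0, 1.0, n_per_arm)
    treated = rng.normal(effect, 1.0, n_per_arm)
    return stats.ttest_ind(control, treated).pvalue

# 1000 candidates; 10% have a real standardized effect of 0.5
effects = rng.choice([0.0, 0.5], size=1000, p=[0.9, 0.1])

flagged = [e for e in effects if trial_pvalue(e, 10) < 0.15]     # lax screen
confirmed = [e for e in flagged if trial_pvalue(e, 200) < 0.01]  # strict follow-up

real = np.mean(np.array(confirmed) > 0) if confirmed else 0.0
print(len(flagged), "flagged;", len(confirmed), "confirmed;",
      f"{real:.0%} of confirmed are real effects")
```

The lax screen lets many false candidates through, but the strict follow-up filters nearly all of them out, while far fewer real effects are lost than under a strict single-stage p < 0.01 screen.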

Replies from: private_messaging, wedrifid
comment by private_messaging · 2013-09-15T07:59:25.104Z · LW(p) · GW(p)

Well, I would find it really awkward for a Bayesian to condone a modus operandi such as "The p-value of 0.15 indicates it is much more likely that there is a correlation than that the result is due to chance, however for all intents and purposes the scientific community will treat the correlation as non-existent, since we're not sufficiently certain of it (even though it likely exists)".

And this is a really, really great reason not to identify yourself as "Bayesian". You end up not using effective methods when you can't derive them from Bayes theorem. (Which is to be expected absent very serious training in deriving things).

Better to check out a few false candidates too many than to falsely dismiss important new discoveries

Where do you think the funds for testing false candidates are going to come from? If you are checking too many false candidates, you are dismissing important new discoveries. You are also robbing time away from any exploration into the unexplored space.

edit: also, I think you overestimate the extent to which promising avenues of research are "closed" by a failure to confirm. It is understood that a failure can result from a multitude of causes. Keep in mind also that a stronger effect yields a far lower p-value at the same sample size (equivalently, it needs quadratically fewer samples to reach the same p-value), so you are at much less risk of dismissing strong results.
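A rough way to see that scaling (a sketch using the normal approximation for a two-sample comparison; this derivation is not from the comment itself): the test statistic grows linearly in the standardized effect size $d$,

$$z \approx d\sqrt{\frac{n}{2}}, \qquad\text{so}\qquad n_{\text{required}} \approx \frac{2\,z_{\alpha}^{2}}{d^{2}}.$$

Doubling the effect size cuts the required sample by a factor of four, and since the tail probability falls off like $e^{-z^{2}/2}$, a strong effect at a fixed sample size produces a drastically smaller p-value.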

comment by wedrifid · 2013-09-15T08:19:41.910Z · LW(p) · GW(p)

Well, I would find it really awkward for a Bayesian to condone a modus operandi such as "The p-value of 0.15 indicates it is much more likely that there is a correlation than that the result is due to chance, however for all intents and purposes the scientific community will treat the correlation as non-existent, since we're not sufficiently certain of it (even though it likely exists)".

The way statistically significant scientific studies are currently used is not like this. The meaning conveyed and the practical effect of official people declaring statistically significant findings is not a simple declaration of the Bayesian evidence implied by the particular statistical test returning less than 0.05. Because of this, I have no qualms with saying that I would prefer lower values than p<0.05 to be used in the place where that standard is currently used. No rejection of Bayesian epistemology is implied.

comment by Mayo · 2013-09-29T06:44:56.338Z · LW(p) · GW(p)

No, the multiple comparisons problem, like optional stopping and other selection effects that alter error probabilities, is a much greater problem in Bayesian statistics, because Bayesians regard error probabilities, and the sampling distributions on which they are based, as irrelevant to inference once the data are in hand. That is a consequence of the likelihood principle (which follows from inference by Bayes theorem). I find it interesting that this blog takes a great interest in human biases, but guess what methodology is relied upon to provide evidence of those biases? Frequentist methods.
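The optional-stopping point is easy to demonstrate in frequentist terms (a sketch with an arbitrary peeking schedule; the batch size and cap are made up): a researcher who tests after every batch of subjects and stops at the first p < 0.05 inflates the nominal 5% false positive rate several-fold, even though the null is true throughout.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, max_n, batch = 2000, 200, 10
rejections = 0

for _ in range(n_sims):
    data = rng.normal(0.0, 1.0, max_n)  # the null is true: the mean really is 0
    for n in range(batch, max_n + 1, batch):
        if stats.ttest_1samp(data[:n], 0.0).pvalue < 0.05:
            rejections += 1  # stop early and declare "significance"
            break

print(rejections / n_sims)  # well above 0.05 (roughly 0.2 with this schedule)
```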

Replies from: lukeprog
comment by lukeprog · 2013-09-29T07:48:44.302Z · LW(p) · GW(p)

Deborah, what do you think of jsteinhardt's Beyond Bayesians and Frequentists?

comment by lukeprog · 2013-09-05T21:01:49.378Z · LW(p) · GW(p)

Great quote.

Unfortunately, we find ourselves in a world where the world's policy-makers don't just profess that AGI safety isn't a pressing issue, they also aren't taking any action on AGI safety. Even generally sharp people like Bryan Caplan give disappointingly lame reasons for not caring. :(

Replies from: private_messaging, Stabilizer
comment by private_messaging · 2013-09-14T08:41:28.205Z · LW(p) · GW(p)

Why won't you update towards the possibility that they're right and you're wrong?

This model should rise up much sooner than some very-low-prior complex model where you're a better truth-finder about this topic but not about any topic where truth-finding can be tested reliably*, while they're better truth-finders about topics where truth-finding can be tested (which is what happens when they do their work), but not about this particular topic.

(*because if you expect that, then you should end up actually trying to do at least something that can be checked because it's the only indicator that you might possibly be right about the matters that can't be checked in any way)

Why are the updates always in one direction only? When they disagree, the reasons are "lame" according to yourself, which makes you more sure everyone's wrong. When they agree, they agree and that makes you more sure you are right.

Replies from: lukeprog
comment by lukeprog · 2013-09-14T15:19:49.194Z · LW(p) · GW(p)

This model should rise up much sooner than some very-low-prior complex model where you're a better truth-finder about this topic...

It's not so much that I'm a better truth finder, it's that I've had the privilege of thinking through the issues as a core component of my full time job for the past two years, and people like Caplan only raise points that have been accounted for in my model for a long time. Also, I think the most productive way to resolve these debates is not to argue the meta-level issues about social epistemology, but to have the object-level debates about the facts at issue. So if Caplan replies to Carl's comment and my own, then we can continue the object-level debate, otherwise... the ball's in his court.

Why are the updates always in one direction only? When they disagree, the reasons are "lame" according to yourself, which makes you more sure everyone's wrong. When they agree, they agree and that makes you more sure you are right.

This doesn't appear to be accurate. E.g. Carl & Paul changed my mind about the probability of hard takeoff. And when have I said that some public figure agreeing with me made me more sure I'm right? See also my comments here.

If I mention a public figure agreeing with me, it's generally not because this plays a significant role in my own estimates, it's because other people think there's a stronger correlation between social status and correctness than I do.

Replies from: private_messaging
comment by private_messaging · 2013-09-14T16:33:31.707Z · LW(p) · GW(p)

It's not so much that I'm a better truth finder, it's that I've had the privilege of thinking through the issues as a core component of my full time job for the past two years, and people like Caplan only raise points that have been accounted for in my model for a long time.

Yes, but why did Caplan not see fit to think about the issue for a significant time, while you did?

There are also the AI researchers who have had the privilege of thinking about the relevant subjects for a very long time, with education and accomplishments which verify that their thinking adds up over time, and who are largely the actual source of the opinions held by the policy makers.

By the way, note that the usual method of rejecting wrong ideas is not even coming up with the wrong ideas in the first place, plus general non-engagement with wrong ideas. This is because the space of wrong ideas is much larger than the space of correct ideas.

What I expect to see in the counterfactual world where AI risk is a big problem is that the proponents of AI risk in that hypothetical world have far more impressive and far more relevant accomplishments and credentials.

but to have the object-level debates about the facts at issue.

The first problem with highly speculative topics is that a great many arguments exist in favour of either opinion on a speculative topic. The second problem is that each such argument relies on a huge number of implicit or explicit assumptions that are likely to be violated due to their origin as random guesses. The third problem is that there is no expectation that the available arguments are a representative sample of the arguments in general.

This doesn't appear to be accurate. E.g. Carl & Paul changed my mind about the probability of hard takeoff.

Hmm, I was under the impression that you weren't a big supporter of the hard takeoff to begin with.

If I mention a public figure agreeing with me, it's generally not because this plays a significant role in my own estimates, it's because other people think there's a stronger correlation between social status and correctness than I do.

Well, your confidence should be increased by the agreement; there's nothing wrong with that. The problem is when it is not balanced by the expected decrease from disagreement.

Replies from: lukeprog
comment by lukeprog · 2013-09-14T17:01:19.103Z · LW(p) · GW(p)

What I expect to see in the counterfactual world where AI risk is a big problem is that the proponents of AI risk in that hypothetical world have far more impressive and far more relevant accomplishments and credentials.

There are a great many differences in our world model, and I can't talk through them all with you.

Maybe we could just make some predictions? E.g. do you expect Stephen Hawking to hook up with FHI/CSER, or not? I think... oops, we can't use that one: he just did. (Note that this has negligible impact on my own estimates, despite him being perhaps the most famous and prestigious scientist in the world.)

Okay, well... If somebody takes a decent survey of mainstream AI people (not AGI people) about AGI timelines, do you expect the median estimate to be earlier or later than 2100? (Just kidding; I have inside information about some forthcoming surveys of this type... the median is significantly sooner than 2100.)

Okay, so... do you expect more or fewer prestigious scientists to take AI risk seriously 10 years from now? Do you expect Scott Aaronson and Peter Norvig, within 25 years, to change their minds about AI timelines, and concede that AI is fairly likely within 100 years (from now) rather than thinking that it's probably centuries or millennia away? Or maybe you can think of other predictions to make. Though coming up with crisp predictions is time-consuming.

Replies from: private_messaging
comment by private_messaging · 2013-09-14T17:25:14.476Z · LW(p) · GW(p)

Well, I too expect some form of something that we would call "AI" before 2100. I can even buy into some form of accelerating progress, albeit with the progress accelerating before the "AI" due to the tools using the relevant technologies, and without that sharp a break. I do even agree that there is a certain level of risk involved in all future progress, including progress in software.

I have a sense you misunderstood me. I picture this parallel world where legitimate, rational inferences about AI risk exist, where this risk is worth working on in 2013 and stands out among the other risks, and where any other prerequisites for making MIRI worthwhile hold. And in this imaginary world, I expect massively larger support than "Stephen Hawking hooked up with FHI" or whatever you are outlining here.

You do frequently lament that AI risk is underfunded, under-supported, and that there's under-awareness of it. In the hypothetical world, this is not the case, and you can only lament that the rational spending should be 2 billion rather than 1 billion.

edit: and of course, my true rejection is that I do not actually see rational inferences leading there. The imaginary world stuff is just a side-note to explain how non-experts generally look at it.

edit2: and I have nothing against FHI's existence and their work. I don't think they are very useful, or that they address any actual safety issues which may arise, but I am fairly certain they aren't doing any harm either (or at least, the possible harm would be very small). Promoting the idea that AI is possible within 100 years, however, is something that increases funding for AI all across the board.

Replies from: lukeprog
comment by lukeprog · 2013-09-14T17:58:49.433Z · LW(p) · GW(p)

I have a sense you misunderstood me. I picture this parallel world where legitimate, rational inferences about AI risk exist, where this risk is worth working on in 2013 and stands out among the other risks, and where any other prerequisites for making MIRI worthwhile hold. And in this imaginary world, I expect massively larger support than "Stephen Hawking hooked up with FHI" or whatever you are outlining here.

Right, this just goes back to the same disagreement in our models I was trying to address earlier by making predictions. Let me try something else, then. Here are some relevant parts of my model:

  1. I expect most highly credentialed people to not be EAs in the first place.
  2. I expect most highly credentialed people to not be familiar with the arguments for caring about the far future.
  3. I expect most highly credentialed people to be mostly just aware of risks they happen to have heard about (e.g. climate change, asteroids, nuclear war), rather than attempting a systematic review of risks (e.g. by reading the GCR volume).
  4. I expect most highly credentialed people to respond fairly well when actuarial risk is easily calculated (e.g. asteroid risk), and not-so-well when it's more difficult to calculate (e.g. many insurance companies went bankrupt after 9/11).
  5. I expect most highly credentialed people to have spent little time on explicit calibration training.
  6. I expect most highly credentialed people to not systematically practice debiasing like some people practice piano.
  7. I expect most highly credentialed people to know very little about AI, and very little about AI risk.
  8. I expect that in general, even those highly credentialed people who intuitively think AI risk is a big deal will not even contact the people who think about AI risk for a living in order to ask about their views and their reasons for them, due to basic VoI failure.
  9. I expect most highly credentialed people to have fairly reasonable views within their own field, but to often have crazy views "outside the laboratory."
  10. I expect most highly credentialed people to not have a good understanding of Bayesian epistemology.
  11. I expect most highly credentialed people to continue working on, and caring about, whatever their career has been up to that point, rather than suddenly switching career paths on the basis of new information and an EV calculation.
  12. I expect most highly credentialed people to not understand lots of pieces of "black swan epistemology" like this one and this one.
  13. etc.
Replies from: ciphergoth, private_messaging
comment by Paul Crowley (ciphergoth) · 2013-09-15T08:43:02.123Z · LW(p) · GW(p)

Luke, why are you arguing with Dmytry?

comment by private_messaging · 2013-09-14T18:47:41.317Z · LW(p) · GW(p)

The question should not be about "highly credentialed" people alone, but about how they fare compared to people with very low credentials.

In particular, on your list, I expect people with fairly low credentials to fare much worse, especially at identification of the important issues as well as on rational thinking. Those combine multiplicatively, making it exceedingly unlikely - despite the greater numbers of the credential-less masses - that people who lead the work on an important issue would have low credentials.

I expect most highly credentialed people to not be EAs in the first place.

What's EA? Effective altruism? If it's an existential risk, it kills everyone; selfishness suffices just fine.

e.g. many insurance companies went bankrupt after 9/11

Ohh, come on. That is in no way a demonstration that insurance companies in general follow faulty strategies, and especially is not a demonstration that you could do better.

I expect most highly credentialed people to not systematically practice debiasing like some people practice piano.

Indeed.

Replies from: None, lukeprog
comment by [deleted] · 2013-09-14T22:26:15.211Z · LW(p) · GW(p)

If it's an existential risk, it kills everyone; selfishness suffices just fine.

A selfish person protecting against existential risk builds a bunker and stocks it with sixty years of foodstuffs. That doesn't exactly help much.

Replies from: jsteinhardt, private_messaging
comment by jsteinhardt · 2013-09-15T01:18:52.131Z · LW(p) · GW(p)

For what existential risks is this actually an effective strategy?

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2013-09-15T08:48:33.819Z · LW(p) · GW(p)

A global pandemic that kills everyone?

comment by private_messaging · 2013-09-15T01:23:23.932Z · LW(p) · GW(p)

The quality of life in a bunker is really damn low. Not to mention that you presumably won't survive this particular risk in a bunker.

comment by lukeprog · 2013-09-14T18:54:18.274Z · LW(p) · GW(p)

In particular, on your list, I expect people with fairly low credentials to fare much worse

No doubt! I wasn't comparing highly credentialed people to low-credentialed people in general. I was comparing highly credentialed people to Bostrom, Yudkowsky, Shulman, etc.

Replies from: private_messaging
comment by private_messaging · 2013-09-14T20:08:40.258Z · LW(p) · GW(p)

But why exactly would you expect conventional researchers in AI and related technologies (also including provable software, as used in the aerospace industry, and a bunch of other topics), with credentials and/or accomplishments in said fields, to fare worse on that list's score?

Furthermore, with regards to rationality, risks of mistake, and such... very little was done that can be checked for correctness in a clear-cut way; most is of such a nature that even when wrong, it would not be possible to conclusively demonstrate it wrong. The few things that can be checked... look, when you write an article like this, discussing the irrationality of Enrico Fermi, there's a substantial risk of appearing highly arrogant (and irrational) if you get the technical details wrong. It is a miniature version of the AI risk problem: you need to understand the subject, and if you don't, there are negative consequences. It is much, much easier not to goof up in things like that than in directing AI.

As you guys are researching actual AI technologies, the issue is that one should be able to deem your effort less of a risk. A mere "we are trying to avoid risk and we think they don't" won't do. The cost of a particularly bad friendly AI goof-up is a sadistic AI (to borrow the term from Omohundro). A sadistic AI can probably run far more tortured minds than a friendly AI can run minds, by a very huge factor, so the risk of a goof-up must be quite a lot lower than anyone has demonstrated.

Replies from: lukeprog
comment by lukeprog · 2013-09-14T20:56:46.692Z · LW(p) · GW(p)

BTW, I went back and numbered the items in my list so they're easier to refer to.

But why exactly would you expect conventional researchers in AI and related technologies... with credentials and/or accomplishments in said fields, to fare worse on that list's score?

Because very few people in general, including credentialed AI people, satisfy (1), (2), (3), (5), (6), (7)†, (8), (10), and (12), but Bostrom, Yudkowsky and Shulman rather uncontroversially do satisfy those items. I also expect B/Y/S to outperform most credentialed experts on (4), (9), and (11), but I understand that's a subjective judgment call and it would take a long time for me to communicate my reasons.

† The AI risk part of 7, anyway. Obviously, AI people specifically know a lot about AI.

Edit: Also, I'll briefly mention that I haven't downvoted any of your comments in this conversation.

Replies from: private_messaging
comment by private_messaging · 2013-09-14T22:09:55.268Z · LW(p) · GW(p)

Because very few people in general, including credentialed AI people, satisfy (1), (2), (3), (5), (6), (7), (8), (10), and (12)

OK, let's go over your list for the AI people.

1 I expect most highly credentialed people to not be EAs in the first place.

If EA is effective altruism, that's not relevant because one doesn't have to be an altruist to care about existential risks.

2 I expect most highly credentialed people to not be familiar with the arguments for caring about the far future.

I expect them to be able to come up with that independently if it is a good idea.

3 I expect most highly credentialed people to be mostly just aware of risks they happen to have heard about (e.g. climate change, asteroids, nuclear war), rather than attempting a systematic review of risks (e.g. by reading the GCR volume).

I expect intelligent people to be able to foresee risks, especially when prompted by the cultural baggage (modern variations on the theme of the Golem).

4 I expect most highly credentialed people to respond fairly well when actuarial risk is easily calculated (e.g. asteroid risk), and not-so-well when it's more difficult to calculate (e.g. many insurance companies went bankrupt after 9/11).

Well, that ought to imply some generally better ability to evaluate hard-to-calculate probabilities, which would imply that you guys should be able to make quite a bit of money.

5 I expect most highly credentialed people to have spent little time on explicit calibration training.

The question is how well they are calibrated, not how much time they spent. You guys see miscalibration of famous people everywhere, even in Enrico Fermi.

6 I expect most highly credentialed people to not systematically practice debiasing like some people practice piano.

Once again, what matters is how unbiased they are, not how much time they spent on one very specific way of acquiring an ability. I expect most accomplished people to have encountered far more feedback on being right or wrong through their education and experience.

7 I expect most highly credentialed people to know very little about AI, and very little about AI risk.

That doesn't apply to people in AI-related professions.

8 I expect that in general, even those highly credentialed people who intuitively think AI risk is a big deal will not even contact the people who think about AI risk for a living in order to ask about their views and their reasons for them, due to basic VoI failure.

The way to raise the VoI is a prior history of thinking about something else for a living, with impressive results.

9 I expect most highly credentialed people to have fairly reasonable views within their own field, but to often have crazy views "outside the laboratory."

Well, less credentialed people are just like this, except they don't have a laboratory inside of which they are sane; that's usually why they are less credentialed in the first place.

10 I expect most highly credentialed people to not have a good understanding of Bayesian epistemology.

Of your 3, I only weakly expect Bostrom to have learned the necessary fundamentals for actually applying Bayes theorem correctly in somewhat non-straightforward cases.

Yes, the basic formula is simple, but derivations are subtle and complex for non-independent evidence, or cases involving loops in the graph, or all those other things...

It's like arguing that you are better equipped for a job at Weta Digital than any employee there because you know quantum electrodynamics (the fundamentals of light propagation), and they're using geometrical optics.

I expect many AI researchers to understand the relevant mathematics a lot, lot better than the 3 on your list.

And I expect credentialed people in general to have a good understanding of the variety of derivative tricks that are used to obtain effective results under uncertainty when the Bayes theorem cannot be effectively applied.

11 I expect most highly credentialed people to continue working on, and caring about, whatever their career has been up to that point, rather than suddenly switching career paths on the basis of new information and an EV calculation.

Yeah, well, and I expect non-credentialed people to have too much to lose from backing out of it in the event that the studies return a negative.

12 I expect most highly credentialed people to not understand lots of pieces of "black swan epistemology" like this one and this one.

You lose me here.

I would make a different list, anyway. Here's my list:

  1. Relevant expertise as measured by educational credentials and/or accomplishments. Expertise is required for correctly recognizing risks (e.g. an astronomer is better equipped for recognizing risks from outer space, a physicist for recognizing faults in a nuclear power plant design, et cetera).

  2. Proven ability to make correct inferences (largely required for 1).

  3. Self-preservation (most of us have it)

Lack of 1 is an automatic disqualifier in my list. It doesn't matter how much you are into things that you think are important for identifying, say, faults in a nuclear power plant design. If you are not an engineer, a physicist, or the like, you aren't going to qualify for that job via some list you make yourself, which conveniently omits (1).

edit: list copy paste failed.

Replies from: lukeprog, RobbBB
comment by lukeprog · 2013-09-14T22:23:44.292Z · LW(p) · GW(p)

I disagree with many of your points, but I don't have time to reply to all that, so to avoid being logically rude I'll at least reply to what seems to be your central point, about "relevant expertise as measured by educational credentials and/or accomplishments."

Who has educational credentials and/or accomplishments relevant to future AGI designs or long-term tech forecasting? Also, do you particularly disagree with what I wrote in AGI Impact Experts and Friendly AI Experts?

Also, in general, I'll just remind everyone reading this that I don't think these meta-level debates about proper social epistemology are as productive as object-level debates about strategically relevant facts (e.g. facts relevant to the theses in this post). Argument screens off authority, and all that.

Edit: Also, my view of Holden Karnofsky might be illustrative. I take Holden Karnofsky more seriously than almost anyone on the cost-effectiveness of global health interventions, despite the fact that he has 0 relevant degrees, 0 papers published in relevant journals, 0 awards for global health work, etc. Degrees and papers and so on are only proxy variables for what we really care about, and are easily screened off by more relevant variables, both for the case of Karnofsky on global health and for the case of Bostrom, Yudkowsky, Shulman, etc. on AI risk.

Replies from: private_messaging, None, private_messaging
comment by private_messaging · 2013-09-15T01:37:52.885Z · LW(p) · GW(p)

For Karnofsky, and to some extent Bostrom, yes. Shulman is debatable. Yudkowsky tried to get screened (tried to write a programming language, for example; wrote a lot of articles on various topics, many of them wrong; tried to write technical papers (TDT), really badly), and failed to pass the screening by a very big margin. Entirely irrational arguments about his 10% counterfactual impact are also part of the failure. Omohundro passed with flying colours (his PhD is almost entirely irrelevant at that point, as it is screened off by his accomplishments in AI).

comment by [deleted] · 2013-09-14T22:34:08.257Z · LW(p) · GW(p)

I'll just remind everyone reading this that I don't think these meta-level debates about proper social epistemology are as productive as object-level debates about strategically relevant facts....

Exactly. All of this is wasted effort once either FAI or UFAI is developed.

comment by private_messaging · 2013-09-14T22:59:35.328Z · LW(p) · GW(p)

Who has educational credentials and/or accomplishments relevant to future AGI designs or long-term tech forecasting?

There are more relevant accomplishments, less relevant accomplishments, and lacks of accomplishment.

Also, in general, I'll just remind everyone reading this that I don't think these meta-level debates about proper social epistemology are as productive as object-level debates about strategically relevant facts

I agree that a discussion of strategically relevant facts would be much more productive. I don't see facts here. I see many speculations. I see a lot of making things up to fit the conclusion.

If I were to tell you that I can, for example, win a very high stakes programming contest (with a difficult, open problem that has many potential solutions that can be ranked in terms of quality), the discussion of my approach to the contest problem between you and me would be almost useless for your or my prediction of victory (provided that basic standards of competence are met), irrespective of whether my idea is good. Prior track record, on the other hand, would be a good predictor. This is how it is for a very well defined problem. It is not going to be better for a less well understood problem.

comment by Rob Bensinger (RobbBB) · 2013-09-14T22:50:25.588Z · LW(p) · GW(p)

If EA is effective altruism, that's not relevant because one doesn't have to be an altruist to care about existential risks.

'EA' here refers to the traits a specific community seems to exemplify (though those traits may occur outside the community). So more may be suggested than the words 'effective' and 'altruism' contain.

In terms of the terms, I think 'altruism' here is supposed to be an inclination to behave a certain way, not an other-privileging taste or ideology. Think 'reciprocal altruism'. You can be an egoist who's an EA, provided your selfish calculation has led you to the conclusion that you should devote yourself to efficiently funneling money to the world's poorest, efficiently reducing existential risks, etc. I'm guessing by 'EA' Luke has in mind a set of habits of looking at existential risks that 'Effective Altruists' tend to exemplify, e.g., quantifying uncertainty, quantifying benefit, strongly attending to quantitative differences, trying strongly to correct for a specific set of biases (absurdity bias, status quo bias, optimism biases, availability biases), relying heavily on published evidence, scrutinizing the methodology and interpretation of published evidence....

I expect them to be able to come up with that independently if it is a good idea.

My own experience is that I independently came up with a lot of arguments from the Sequences, but didn't take them sufficiently seriously, push them hard enough, or examine them in enough detail. There seems to be a big gap between coming up with an abstract argument for something while you're humming in the shower, and actually living your life in a way that's consistent with your believing the argument is sound.

Replies from: private_messaging
comment by private_messaging · 2013-09-14T23:52:38.108Z · LW(p) · GW(p)

My own experience is that I independently came up with a lot of arguments from the Sequences, but didn't take them sufficiently seriously, push them hard enough, or examine them in enough detail.

But we are speaking of credentialed people. They're fairly driven.

Furthermore, general non-acceptance of an idea is evidence that the idea is not good. You can't seriously be listing general non-acceptance of your ideas by the relevant experts as the reason why you are superior to those experts, because the same non-acceptance lowers the probability that those ideas are correct, proportionally to how much it raises how exceptional you are for holding those views. (The biggest problem with "Bayesianism" is unbalanced/selective updates.)

In particular, when it comes to the interview that he linked for reasons to value the future...

First off, if one can support existential risk mitigation for non-Pascal's-wager-type reasons, then the enormous utility of the future should not be relevant. If it is actually a requirement, then I don't think there's anything to discuss here.

Secondly, the most common norm of morality (assuming we ignore things like Sharia), as specified in the laws of progressive countries, or as an extrapolation of legal progress in less progressive ones, is to value future people (we disapprove of smoking while pregnant) but not to value the counterfactual creation of future people (we allow abortion, especially when the child would be disadvantaged and not have a fair chance). Rather than inferring the prevailing morality from the law and discussing it, various bad ideas are invented and discussed to make the argument appear stronger than it really is.

It is not that I am not exposed to this worldview. I am. It is that, choosing between A (hurt someone, but a large number of happy people will be created) and B (not hurt someone, but a large number of happy people will not be created), with the deliberate choice having the causal impact on the hurting and the creation, A is both illegal and immoral.

Replies from: RobbBB, Vaniver
comment by Rob Bensinger (RobbBB) · 2013-09-15T00:26:00.264Z · LW(p) · GW(p)

general non-acceptance of an idea is evidence that the idea is not good. You can't seriously be listing general non-acceptance of your ideas by the relevant experts as the reason why you are superior to those experts, because the same non-acceptance lowers the probability that those ideas are correct, proportionally to how much it raises how exceptional you are for holding those views.

When I hear that Joe has a new argument against a belief of mine, then my confidence in my belief lowers a bit, and my confidence in Joe's competence also lowers a bit. If I then go on to actually evaluate the argument in detail and discover that it's an extraordinarily poor one, this should generally increase my confidence to higher than it was before I heard that Joe had an argument, and it should further lower my confidence in Joe's competence.

I've spent enough time looking at the specific arguments for and against many of these propositions to have the contents of those arguments overwhelm my expertise priors in both directions, such that I just don't see a whole lot of value in discussing anything but the arguments themselves, when my goal (and yours) is to figure out the level of merit of the arguments.

if one can support existential risk mitigation for non-Pascal's-wager-type reasons, then the enormous utility of the future should not be relevant.

It sounds like you're committing the Pascal's Wager Fallacy Fallacy. If you aren't, then I'm not understanding your point. Large future utilities should count more than small future utilities, and multiplying by low probabilities is fine if the probabilities aren't vanishingly low.

Choosing between A: hurt someone, but a large number of happy people get created, and B: not hurt someone, but a large number of happy people do not get created, A is both illegal and immoral.

I think there's a quantitative tradeoff between the happiness of currently existent people and the happiness of possibly-created people. A strict rule 'Counterfactual People Have Absolutely No Value' leads to absurd conclusions, e.g., it's not worthwhile to create an infinite number of infinitely happy and well-off people if the cost is that your shoulder itches for a few seconds. It's at least a little worthwhile to create people with awesome lives, even if they should get weighted less than currently existent people.

Replies from: private_messaging, private_messaging
comment by private_messaging · 2013-09-15T00:54:40.421Z · LW(p) · GW(p)

I've spent enough time looking at the specific arguments for and against many of these propositions to have the contents of those arguments overwhelm my expertise priors in both directions, such that I just don't see a whole lot of value in discussing anything but the arguments themselves, when my goal (and yours) is to figure out the level of merit of the arguments.

You don't want the outcome to be biased by the availability of the arguments, right? Really, I think you do not account for the fact that the available arguments are merely samples from the space of possible arguments (which make different speculative assumptions, in a very large space of possible speculations). Picked non-uniformly, too, as arguments for one side may be more available, or their creation may maximize the personal present-day utility of more agents. Individual samples can't be particularly informative in such a situation.

It's at least a little worthwhile to create people with awesome lives, even if they should get weighted less than currently existent people.

The issue is that the number of people you can speculate you affect grows much faster than the prior for the speculation decreases. Constant factors do not help with that; they just push the problem a little further.

A strict rule 'Counterfactual People Have Absolutely No Value' leads to absurd conclusions, e.g., it's not worthwhile to create an infinite number of infinitely happy and well-off people if the cost is that your shoulder itches for a few seconds.

I don't see that as problematic. Ponder the alternative for a moment: you may be OK with a shoulder itch, but are you OK with 10,000 years of the absolutely worst torture imaginable, for the sake of the creation of 3^^^3 or 3^^^^^3 or however many really happy people? What about your death vs. their creation?

edit: also you might have the value of those people to yourself (as potential mates and whatnot) leaking in.

comment by private_messaging · 2013-09-15T01:12:52.620Z · LW(p) · GW(p)

forgot to address this:

It sounds like you're committing the Pascal's Wager Fallacy Fallacy. If you aren't, then I'm not understanding your point. Large future utilities should count more than small future utilities, and multiplying by low probabilities is fine if the probabilities aren't vanishingly low.

If the probabilities aren't vanishingly low, you reach basically the same conclusions without requiring extremely large utilities. 7 billion people dying is quite a lot, too. If you see extremely large utilities on a list of requirements for caring about the issue, when you already have at least 7 billion lives at stake, then it is a Pascal's wager.

Actually, I don't see vanishingly small probabilities as problematic; I see small probabilities where the bulk of the probability mass is unaccounted for as problematic. E.g. a response to a low risk from a specific asteroid is fine, because its alternative positions in space are accounted for (and you have assurance you won't put it on an even worse trajectory).

comment by Vaniver · 2013-09-15T00:17:06.335Z · LW(p) · GW(p)

Furthermore, general non-acceptance of an idea is evidence that the idea is not good. You can't seriously be listing general non-acceptance of your ideas by the relevant experts as the reason why you are superior to those experts, because the same non-acceptance lowers the probability that those ideas are correct, proportionally to how much it raises how exceptional you are for holding those views. (The biggest problem with "Bayesianism" is unbalanced/selective updates.)

Updating on someone else's decision to accept or reject a position should depend on their reason for their position. Information cascades is relevant.

Replies from: private_messaging
comment by private_messaging · 2013-09-15T00:33:09.668Z · LW(p) · GW(p)

Yes, of course. But also keep in mind that wrong positions are often rejected by the mechanism that generates positions, rather than the mechanism that checks the generated positions.

comment by Stabilizer · 2013-09-09T07:05:10.456Z · LW(p) · GW(p)

After reading Robin's exposition of Bryan's thesis, I would disagree that his reasons are disappointingly lame.

Replies from: wedrifid, lukeprog
comment by wedrifid · 2013-09-14T09:41:53.661Z · LW(p) · GW(p)

After reading Robin's exposition of Bryan's thesis, I would disagree that his reasons are disappointingly lame.

Which could either indicate that the reasons are good or that your standards are lower than Luke's and so trigger no disappointment.

comment by lukeprog · 2013-09-09T16:28:57.712Z · LW(p) · GW(p)

Bryan is expressing a "standard economic intuition" but... did you see Carl's comment reply on Caplan's post, and also mine?

Replies from: private_messaging
comment by private_messaging · 2013-09-13T14:27:30.729Z · LW(p) · GW(p)

I did see Eelco Hoogendoorn's, and it is absolutely spot on.

I'm hardly a fan of Caplan, but he gets some of the Bayesianism right:

  1. Based on how things like this asymptote or fail altogether, he has a low prior for foom.

  2. He has a low expectation of being able to identify in advance (without work equivalent to the creation of the AI) the exact mechanisms by which it is going to asymptote or fail, irrespective of whether it does or does not asymptote or fail, so not knowing such mechanisms does not bother him a whole lot.

  3. Even assuming he is correct, he expects plenty of possible arguments against this position (which are reliant on speculations), as well as expects to see some arguers, because the space of speculative arguments is huge. So such arguments are not going to move him anywhere.

People don't do that explicitly any more than someone who's playing football is doing Newtonian mechanics explicitly. Bayes theorem is no less fundamental than the laws of motion of the football.

Likewise for things like non-testability: nobody's doing anything explicitly, it is just the case that, due to something you guys call "conservation of expected evidence", when there is no possibility of evidence against a proposition, then a possibility of evidence in favour of the proposition would violate the Bayes theorem.

Replies from: Estarlio
comment by Estarlio · 2013-09-13T14:40:53.568Z · LW(p) · GW(p)

when there is no possibility of evidence against a proposition, then a possibility of evidence in favour of the proposition would violate the Bayes theorem.

I'm not sure how you could have such a situation, given that absence of expected evidence is evidence of absence. Do you have an example?

Replies from: private_messaging
comment by private_messaging · 2013-09-13T15:06:49.205Z · LW(p) · GW(p)

Well, the probabilities wouldn't be literally zero. What I mean is that the lack of any possibility of strong evidence against something, with only a possibility of very weak evidence against it (via absence of evidence), implies that strong evidence in favour of it must be highly unlikely. Worse, such evidence just gets lost among the more probable 'evidence that looks strong but is not'.
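The identity behind this (the "conservation of expected evidence" mentioned earlier in the thread) can be written out; the prior is the expectation of the posterior:

$$P(H) = P(H \mid E)\,P(E) + P(H \mid \lnot E)\,P(\lnot E),$$

so $P(E)\,\big[P(H \mid E) - P(H)\big] = P(\lnot E)\,\big[P(H) - P(H \mid \lnot E)\big]$. If observing $\lnot E$ (absence of evidence) can only lower $P(H)$ a little, then either $E$ can only raise it a little too, or $E$ itself must be correspondingly improbable.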

Replies from: Estarlio
comment by Estarlio · 2013-09-13T16:39:30.791Z · LW(p) · GW(p)

Ah, I think I follow you.

Absence of evidence isn't necessarily a weak kind of evidence.

If I tell you there's a dragon sitting on my head, and you don't see a dragon sitting on my head, then you can be fairly sure there's not a dragon on my head.

On the other hand, if I tell you I've buried a coin somewhere in my magical 1cm-deep garden - and you dig a random hole and don't find it - not finding the coin isn't strong evidence that I've not buried one. However, there's so much potential weak evidence against. If you've dug up all but a 1cm square of my garden - the coin's either in that 1cm or I'm telling porkies, and what are the odds that - digging randomly - you wouldn't have come across it by then? You can be fairly sure, even before digging up that square, that I'm fibbing.
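The arithmetic behind the garden scenario (a sketch with made-up numbers: a 50% prior that the coin exists, a garden of 100 one-centimetre squares, and digs at distinct squares):

```python
def posterior_coin(prior, squares_dug, total_squares=100):
    """P(coin exists | dug `squares_dug` distinct squares, found nothing),
    assuming a buried coin is equally likely to be under any square."""
    p_miss_given_coin = (total_squares - squares_dug) / total_squares
    return (prior * p_miss_given_coin
            / (prior * p_miss_given_coin + (1 - prior)))

for k in [1, 50, 99]:
    print(k, round(posterior_coin(0.5, k), 4))
# 1 hole:   0.4975  (a single dig is nearly no evidence)
# 50 holes: 0.3333
# 99 holes: 0.0099  (you can be fairly sure they're fibbing)
```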

Was what you meant analogous to one of those scenarios?

Replies from: private_messaging
comment by private_messaging · 2013-09-13T16:42:44.761Z · LW(p) · GW(p)

Yes, like the latter scenario. Note that the expected utility of digging is low when the evidence against it from any one dig is weak.

edit: Also, in the former case, not seeing a dragon sitting on your head is very strong evidence against there being a dragon, unless you invoke un-testable invisible dragons which may be transparent to x-rays, let dust pass through them unaffected, and so on. In which case, I should have a very low likelihood of being convinced that there is a dragon on your head, if I know that the evidence against it would be very weak.

edit2: Russell's teapot in the Kuiper belt is a better example still. When there can be only very weak evidence against it, the probability of encountering or discovering strong evidence in favour of it must be low also, making it not worthwhile to try to come up with evidence that there is a teapot in the Kuiper belt (due to the low probability of success), even when the prior probability for the teapot is not very low.

Replies from: Estarlio
comment by Estarlio · 2013-09-13T18:19:07.334Z · LW(p) · GW(p)

Then, to extend the analogy: Imagine that digging has potentially negative utility as well as positive. I claim to have buried both a large number of nukes and a magical wand in the garden.

In order to motivate you to dig, you probably want some evidence of magical wands. In this context that would probably be recursively improving systems where, occasionally, local variations rapidly acquire super-dominance over their contemporaries when they reach some critical value. Evolution probably qualifies there - other bipedal frames with fingers aren't particularly dominant over other creatures in the same way that we are, but at some point we got smart enough to make weapons (note that I'm not saying that was what intelligence was for though) and from then on, by comparison to all other macroscopic land-dwelling forms of life, we may as well have been god.

And since then that initial edge in dominance has only ever allowed us to become more dominant. Creatures afraid of wild animals are not able to create societies with guns and nuclear weapons - you'd never have the stability for long enough.

In order to motivate you not to dig, you probably want some evidence of nukes. In this context, that means recursive systems with a feedback state - I'm not sure 'improving' is the right word here - that create large amounts of negative value. Well, to a certain extent that's a matter of perspective - from the perspective of extinct species, the ascendancy of humanity would probably not be anything to cheer about, if they were in a position to appreciate it. But I suspect it can at least stand on its own that failure cascades are easier to make than success cascades. One little thing goes wrong on your rocket and then the situation multiplies; a small error in alignment rapidly becomes a bigger one; or the timer on your Patriot battery loses a fraction of a second, and over time your perception of where the missiles are is off significantly. It's only with significant effort that we create systems where errors don't multiply.

(This is analogous to altering your expected value of information - like if earlier you'd said you didn't want to dig and I'd said, 'well there's a million bucks there' instead - you'd probably want some evidence that I had a million bucks, but given such evidence the information you'd gain from digging would be worth more.)

This seems to be fairly closely analogous to Eliezer's claims about AI, at least if I've understood them correctly: that we have to hit an extremely small target, and that it's more likely we're going to blow ourselves to itty-bitty pieces/cover the universe in paperclips if we're just fooling around hoping to hit it by chance.

If you believe that such is the case, then the only people you're going to want looking for that magic wand - if you let anyone do it at all - are specialists with particle detectors - indeed if your garden is in the middle of a city you'll probably make it illegal for kids to play around anywhere near the potential bomb site.

Now, we may argue over quite how strongly we have to believe in the possible existence of magitech nukes to justify the cost of fencing off the garden - personally I think the statement:

if you take a thorough look at actually existing creatures, it's not clear that smarter creatures have any tendency to increase their intelligence.

Is to constrain what you'll accept for potential evidence pretty dramatically - we're talking about systems in general, not just individual people, and recursively improving systems with high asymptotes relative to their contemporaries have happened before.

It's not clear to me that the second claim he makes is even particularly meaningful:

In the real-world, self-reinforcing processes eventually asymptote. So even if smarter creatures were able to repeatedly increase their own intelligence, we should expect the incremental increases to get smaller and smaller over time, not skyrocket to infinity.

Sure, I think that they probably won't go to infinity - but I don't see any reason to suspect that they won't converge on a much higher value than our own native ability. Pretty much all of our systems do, from calculators to cars.

We can even argue over how you separate the claims that something's going to foom from the false claims of such. (I'd suggest, initially, just seeing how many claims that something was going to foom have actually been made within the domain of technological artefacts; it may be that the base-line credibility is higher than we think.) But that's a body of research that Caplan, as far as I'm aware, hasn't forwarded. It's not clear to me that it's a body of research with the same order of difficulty as creating an actual AI, either. And, in its absence, it's not clear to me that answering, in effect, "I'll believe it when I see the mushroom cloud" is a particularly rational response.

Replies from: private_messaging
comment by private_messaging · 2013-09-13T19:13:08.155Z · LW(p) · GW(p)

I was mostly referring to the general lack of interest in the discussion of un-falsifiable propositions by the scientific community. The issue is that un-falsifiable propositions are also the ones for which it is unlikely that in the discussion you will be presented with evidence in favour of them.

The space of propositions is the garden I am speaking of. And digging up false propositions is not harmless.


With regards to your argument, I think you vastly under-estimate the size of the high-dimensional space of possible software, and how distant in this space are the little islands of software that actually does something interesting, as distant from each other as Boltzmann minds are within our universe (albeit, of course, depending on the basis, possible software is better clustered).

Those spatial analogies are a great fallacy generator, a machine for getting quantities off by mind-bogglingly huge factors. In your mental image, you have someone create those nukes and put them in the sand for the hapless individuals to find. In reality, that's not how you find a nuke. You venture into this enormous space of possible designs, as vast as the distance from here to the closest exact replica of The Gadget which spontaneously formed from a supernova by the random movement of uranium atoms. When you have to look in a space this big, you don't find this replica of The Gadget without knowing quite well what you're looking for.

With regards to listing biases to help arguments: given that I have no expectation that one could not handwave up a fairly plausible bias that would work in the direction of a specific argument, the direct evidential value of listing biases in such a manner, on the proposition, is zero (or an epsilon). You could just as well have argued that the individuals who are not afraid of cave bears get killed by the cave bears; there's too much "give" in your argument for it to have any evidential value. I can freely ignore it without having to bother to come up with a balancing bias (as people like Caplan rightfully do, without really bothering to outline why).

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-09-03T19:58:40.901Z · LW(p) · GW(p)

Hm. A generalized phenomenon of overwhelming physicist underconfidence could account for a reasonable amount of the QM affair.

comment by philh · 2013-09-08T01:53:01.074Z · LW(p) · GW(p)

Fran: A million billion pounds says you’ll have nothing to show me.

Bernard: Oh, the old million billion. Why don’t we make it interesting, why don’t we say 50?

Black Books, Elephants and Hens. H/t /u/mrjack2 on /r/hpmor.

comment by Eugine_Nier · 2013-09-02T04:46:08.086Z · LW(p) · GW(p)

Opportunity is missed by most people because it is dressed in overalls and looks like work.

Thomas Edison

comment by Dentin · 2013-09-04T16:09:34.319Z · LW(p) · GW(p)

There is no glory, no beauty in death. Only loss. It does not have meaning. I will never see my loved ones again. They are permanently lost to the void. If this is the natural order of things, then I reject that order. I burn here my hopelessness, I burn here my constraints. By my hand, death shall fall. And if I fail, another shall take my place ... and another, and another, until this wound in the world is healed at last.

Anonymous, found written in the Temple at 2013 Burning Man

Replies from: Pavitra
comment by Pavitra · 2013-09-04T17:57:17.659Z · LW(p) · GW(p)

Part of that seems to be from HPMOR. I'm not sure where the rest comes from.

Replies from: Dentin
comment by Dentin · 2013-09-04T18:42:50.446Z · LW(p) · GW(p)

Yeah, almost certainly HPMOR inspired. Eliezer's work has spread far.

comment by Pablo (Pablo_Stafforini) · 2013-09-02T12:56:06.401Z · LW(p) · GW(p)

You should work to reduce your biases, but to say you have none is a sign that you have many.

Nate Silver, The Signal and the Noise: Why So Many Predictions Fail — But Some Don’t, New York, 2012, p. 451

comment by XerxesPraelor · 2013-09-20T17:07:43.859Z · LW(p) · GW(p)

There is one very valid test by which we may separate genuine, if perverse and unbalanced, originality and revolt from mere impudent innovation and bluff. The man who really thinks he has an idea will always try to explain that idea. The charlatan who has no idea will always confine himself to explaining that it is much too subtle to be explained. The first idea may be really outré or specialist; it may be really difficult to express to ordinary people. But because the man is trying to express it, it is most probable that there is something in it, after all. The honest man is he who is always trying to utter the unutterable, to describe the indescribable; but the quack lives not by plunging into mystery, but by refusing to come out of it.

G K Chesterton

Replies from: ChristianKl
comment by ChristianKl · 2013-09-22T11:34:30.688Z · LW(p) · GW(p)

The man who really thinks he has an idea will always try to explain that idea.

I don't think that's the case. There are plenty of shy intellectuals who don't push their ideas on other people. Darwin sat more than a decade on his big idea.

There are ideas that are about qualia. It doesn't make much sense to try to explain to a blind person what red looks like, and the same goes for other ideas that rest on observed qualia instead of resting on theory. If I believe in a certain idea because I experienced certain qualia, and I have no way of giving you the experience of the same qualia, I can't explain the idea to you. In some instances I might still try to explain to the blind what red looks like, but there are also instances where I see it as futile.

One way of teaching certain lessons in buddhism is to give a student a koan that illustrates the lesson and let him meditate over the koan for hours. I don't see anything dishonest about teaching certain ideas that way.

If someone thinks about a topic in terms of black and white it just takes time to teach him to see various shades of grey.

comment by Turgurth · 2013-09-02T00:11:39.863Z · LW(p) · GW(p)

"Not being able to get the future exactly right doesn’t mean you don’t have to think about it."

--Peter Thiel

comment by SatvikBeri · 2013-09-03T21:45:33.993Z · LW(p) · GW(p)

I discovered as a child that the user interface for reprogramming my own brain is my imagination. For example, if I want to reprogram myself to be in a happy mood, I imagine succeeding at a difficult challenge, or flying under my own power, or perhaps being able to levitate objects with my mind. If I want to perform better at a specific task, such as tennis, I imagine the perfect strokes before going on court. If I want to fall asleep, I imagine myself in pleasant situations that are unrelated to whatever is going on with my real life.

My most useful mental trick involves imagining myself to be far more capable than I am. I do this to reduce the risk that I turn down an opportunity just because I am clearly unqualified[...] As my career with Dilbert took off, reporters asked me if I ever imagined I would reach this level of success. The question embarrasses me because the truth is that I imagined a far greater level of success. That's my process. I imagine big.

Scott Adams

comment by JonMcGuire · 2013-09-04T16:03:52.293Z · LW(p) · GW(p)

But, of course, the usual response to any new perspective is "That can't be right, because I don't already believe it."

Eugene McCarthy, Human Origins: Are We Hybrids?

Replies from: army1987
comment by A1987dM (army1987) · 2013-09-11T14:45:52.882Z · LW(p) · GW(p)

As a non-biologist, I kind-of suspect that article is supposed to be some kind of elaborate joke. It sounds convincing to me, but then again, so did Sokal (1996) to non-physicists; my gut feelings' prior probability for that claim is tiny (but probably tinier than rationally warranted; possibly, because it kind-of sounds like a parody of ancient astronaut hypotheses); and I can't find any mention of any mammal inter-order hybrids on Wikipedia.

Replies from: gattsuru, Ishaan, ChristianKl, Manfred
comment by gattsuru · 2013-09-11T16:46:13.542Z · LW(p) · GW(p)

Sokal's paper brought up the possibility of a morphogenetic field affecting quantum mechanics, which sounds slightly less rigorous than a Discworld joke -- Sir Terry can at least get the general aspects of quantum physics correct. Likewise, Mrs. Jenna Moran's RPGs have more meaningful statements on set theory than Sokal's joking conflation of the axiom of equality and feminist/racial equality. I'd expect a lot of non-physicists would consider it unconvincing, especially if you allow them the answer "this paper makes no sense".

((I'd honestly expect false positives, more than false negatives, when asking average persons to /skeptically/ test papers on quantum mechanics for fraud. Thirty pages of math showing a subatomic particle to be charming has language barrier problems.))

The greater concern here is that the evidence Mr. McCarthy uses to support his assertions is incredibly weak. The vast majority of his list of interspecies hybrids, for example, are either intra-family or completely untrustworthy (some are simply appeals to legends or internet hoaxes, like the cabbit or dog-bear hybrids). The only remotely trustworthy example of variation on the scale of a chimpanzee-pig hybrid is an alleged rabbit-rat cross, but chasing the citation shows that the claimed evidence likely had a different (and at the time of the original experiment, unknown) cause and that the fertilization never occurred. Other cases conflate mating behavior and fertility, by which definition humans should be capable of hybridizing with rubber and glass. The sheer number of untrustworthy citations -- and, more importantly, the fact that they're mixed together with the verifiable and known-good ones -- is a huge red flag.

The quote's interesting -- and correct! as anyone who's seen the double-slit experiment demonstrated can attest -- but there are probably better ways to say it and better theories to associate it with.

Replies from: ChristianKl
comment by ChristianKl · 2013-09-13T19:19:31.066Z · LW(p) · GW(p)

Sokal's paper brought up the possibility of a morphogenetic field affecting quantum mechanics, which sounds slightly less rigorous than a Discworld joke

The concept doesn't come from Sokal but from Rupert Sheldrake who used the term in his 1995 book (http://www.co-intelligence.org/P-morphogeneticfields.html).

There are plenty of New Age people who seriously believe that the world works that way.

Replies from: BIbster
comment by BIbster · 2013-09-24T09:53:11.871Z · LW(p) · GW(p)

There are plenty of New Age people who seriously believe that the world works that way.

Or find it a reasonable / plausible theory... I'm married to someone who evolved into reading that pseudo-science instead of the Stephen Hawking she used to read 20 years ago...

comment by Ishaan · 2013-09-14T23:27:59.539Z · LW(p) · GW(p)

This is a blatant parody. The probability that pig-chimp hybrids were involved in human origins is at Pascal-low levels.

It sounds convincing to me

This is worthy of notice. It really shouldn't have been remotely convincing.

Can you identify the factors which caused you to give the statements in this article more credibility than you would have given to any random internet source of an unlikely-sounding claim? Information about what went wrong here might be useful from a rationality-increasing perspective.

Replies from: army1987
comment by A1987dM (army1987) · 2013-09-21T09:46:48.474Z · LW(p) · GW(p)

Can you identify the factors which caused you to give the statements in this article more credibility than you would have given to any random internet source of an unlikely-sounding claim?

Mostly, the fact that I don't know shit about biology, and the writer uses full, grammatical sentences, cites a few references, anticipates possible counterarguments and responds to them, and more generally doesn't show many of the obvious signs of crackpottery.

Replies from: BIbster
comment by BIbster · 2013-09-24T09:48:22.606Z · LW(p) · GW(p)

This is exactly why I (amongst many?) find it so hard to separate the good stuff from the bad stuff. It's the way the matter is brought to you, not the matter itself: a very thoughtful way of presenting it, as army1987 says, with references, anticipation of counterarguments, etc.

comment by ChristianKl · 2013-09-13T19:34:16.508Z · LW(p) · GW(p)

I would also be very wary of McCarthy's argument. Having studied bioinformatics myself, I would say:

Show me the human genes that you think come from pigs. If you name specific genes, we can run our algorithms. Don't talk about stuff like the form of the vertebrae when we have sequenced the genomes.

comment by Manfred · 2013-09-11T15:17:09.544Z · LW(p) · GW(p)

Yeah, it's a good quote promoting open-mindedness, but of course that's because crackpots spend a lot of time trying to hide their theories from any criticism in the name of open-mindedness.

comment by satt · 2013-09-02T01:36:41.939Z · LW(p) · GW(p)

I realize that if you ask people to account for 'facts,' they usually spend more time finding reasons for them than finding out whether they are true. [...] They skip over the facts but carefully deduce inferences. They normally begin thus: 'How does this come about?' But does it do so? That is what they ought to be asking.

— Montaigne, Essays, M. Screech's 1971 translation

comment by lukeprog · 2013-09-02T09:57:55.137Z · LW(p) · GW(p)

You cannot have... only benevolent knowledge; the scientific method doesn't filter for benevolence.

Richard Rhodes

Replies from: DanielLC
comment by DanielLC · 2013-09-04T06:29:43.315Z · LW(p) · GW(p)

That only tells you that if you just rely on the scientific method, it won't result in only benevolent knowledge. You could use another method to filter for benevolence.

Replies from: fubarobfusco, AlexanderD, loup-vaillant
comment by fubarobfusco · 2013-09-04T09:06:04.363Z · LW(p) · GW(p)

The same techniques of starting fire can be used to keep your neighbor warm in the winter, or to burn your neighbor's house down.

The same techniques of chemistry can be used to create remedies for diseases, or to create poisons.

The same techniques of business can be used to create mutual benefit (positive-sum exchanges; beneficial trade) or parasitism (negative-sum exchanges; rent-seeking).

The same techniques of rhetorical appeal to fear of contamination can be used to teach personal hygiene and save lives, or to teach racial purity and end them.

It isn't the knowledge that is benevolent or malevolent.

Replies from: Desrtopa, DanielLC
comment by Desrtopa · 2013-09-10T15:36:32.365Z · LW(p) · GW(p)

The same techniques of chemistry can be used to create remedies for diseases, or to create poisons.

Indeed, one fact I am rather fond of is that some deadly poisons are themselves antidotes to other deadly poisons, such as curare to strychnine, and atropine to nerve gas.

comment by DanielLC · 2013-09-04T16:16:30.623Z · LW(p) · GW(p)

That is a completely different reason than presented in the quote.

comment by AlexanderD · 2013-09-07T01:10:13.903Z · LW(p) · GW(p)

That would be wonderful, world-changing, and unlikely. I hope but do not expect to see it happen.

comment by loup-vaillant · 2013-09-04T23:06:52.342Z · LW(p) · GW(p)

Good luck finding one that doesn't also bias you into a corner.

comment by jsbennett86 · 2013-09-11T06:03:45.642Z · LW(p) · GW(p)

If you cannot examine your thoughts, you have no choice but to think them, however silly they may be.

Richard Mitchell - Less Than Words Can Say

comment by johnlawrenceaspden · 2013-09-08T16:53:03.990Z · LW(p) · GW(p)

When you know a thing, to hold that you know it, and when you do not know a thing, to allow that you do not know it. This is knowledge.

Confucius, Analects

comment by arundelo · 2013-09-02T02:44:29.748Z · LW(p) · GW(p)

You argue that it would be wrong to stab my neighbor and take all their stuff. I reply that you have an ugly face. I commit the "ad hominem" fallacy because I'm attacking you, not your argument. So one thing you could do is yell "OI, AD HOMINEM, NOT COOL."

[...] What you need to do is go one step more and say "the ugliness of my face has no bearing on moral judgments about whether it is okay to stab your neighbor."

But notice you could've just said that without yelling "ad hominem" first! In fact, that's how all fallacies work. If someone has actually committed a fallacy, you can just point out their mistake directly without being a pedant and finding a pat little name for all of their logical reasoning problems.

-- TychoCelchuuu on Reddit

Replies from: gwern, snafoo, SaidAchmiz
comment by gwern · 2013-09-02T02:57:16.498Z · LW(p) · GW(p)

Fallacy names are useful for the same reason any term or technical vocab is useful.

'But notice how you could've just said you meant the quantity 1+1+1+1 without yelling "four" first! In fact, that's how all 'numbers' work. If someone is actually using a quantity, you can just give that quantity directly without being a mathematician and finding a pat little name for all of their quantities used.'

Replies from: RobbBB, Torello
comment by Rob Bensinger (RobbBB) · 2013-09-02T04:45:31.861Z · LW(p) · GW(p)

Fallacy names are great for chunking something already understood. The problem is that most people who appeal to them don't understand them, and therefore mis-use them. If they spoke in descriptive phrases rather than in jargon, there would be less of an illusion of transparency and people would be more likely to notice that there are discrepancies in usage.

For instance, most people don't understand that not all personal attacks are ad hominem fallacies. The quotation encourages that particular mistake, inadvertently. So it indirectly provides evidence for its own thesis.

Replies from: private_messaging
comment by private_messaging · 2013-09-03T17:14:07.391Z · LW(p) · GW(p)

Yeah, suppose someone argued instead that it should be OK to kill the other person and take their stuff. And were a convicted murderer.

Replies from: DanielLC
comment by DanielLC · 2013-09-04T06:40:55.521Z · LW(p) · GW(p)

If you're assuming that they won't be punished if they convinced the other person, then that's true. That would be a conflict of interest and hint at them starting with the bottom line.

If you don't assume that, then it sounds like ad hominem combined with circular logic. Them being a murderer doesn't mean their argument is wrong. In fact, since they're living the conclusion, it's evidence that they actually believe it, and thus that it's right. Furthermore, them being a murderer is only bad if you already accept the conclusion that it's not OK to kill the other person and take their stuff.

Replies from: private_messaging
comment by private_messaging · 2013-09-04T08:25:12.228Z · LW(p) · GW(p)

You can't say that whether they are a murderer or not has no relation to the argument they're making, while you can say that for the face being ugly.

comment by Torello · 2013-09-02T03:51:31.425Z · LW(p) · GW(p)

I voted your comment up because I agree that the vocabulary is useful for both the person committing the fallacy and (I think this is overlooked) for the person recognizing the fallacy.

However, I think the point of the original quote is probably that when someone points out a fallacy they are probably feeling angry and want to insult their interlocutor.

comment by snafoo · 2013-09-05T14:22:22.804Z · LW(p) · GW(p)

Yeah.

It's like when those stupid car buffs say "Hmmm...yeah, transmission fluid" when telling each other what they think is wrong rather than "It sounds like the part that changes the speed and torque with which the wheels turn with respect to the engine isn't properly lubricated and able to have the right hydraulic pressure, so you should add some green oil product."

-rekam

comment by Said Achmiz (SaidAchmiz) · 2013-09-02T15:08:22.347Z · LW(p) · GW(p)

That's not even an example of the ad hominem fallacy.

"You have an ugly face, so you're wrong" is ad hominem. "You have an ugly face" is not. It's just a statement. Did the speaker imply the second part? Maybe... but probably not. It was probably just an insulting rejoinder.

Insults, i.e. "Attacking you, not your argument", is not what ad hominem is. It's a fallacy, remember? It's no error in reasoning to call a person ugly. Only when you conclude from this that they are wrong do you commit the fallacy.

So:

A: It's wrong to stab your neighbor and take their stuff.
B: Your face is ugly.
A: The ugliness of my face has no bearing on moral...
B, interrupting: Didn't say it does! Your face is still ugly!

Replies from: army1987, wedrifid, Transfuturist
comment by A1987dM (army1987) · 2013-09-06T08:53:20.942Z · LW(p) · GW(p)

Did the speaker imply the second part? Maybe... but probably not.

They did not logically entail it but they did conversationally implicate it (see CGEL, p. 33 and following, for the difference). As per Grice's maxim of relation, people don't normally bring up irrelevant information.

B, interrupting: Didn't say it does!

At which point A would be justified in asking, “Why did you bring it up then?” And even if B had (tried to) explicitly cancel the pragmatic implicature (“It's wrong to stab your neighbor and take their stuff” -- ”I won't comment on that; on a totally unrelated note, your face is ugly”), A would still be justified in asking “Why did you change the topic?”

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-09-06T17:20:11.506Z · LW(p) · GW(p)

B here is violating Grice's maxims. That's the point. He's not following the cooperative principle. He's trying to insult A (perhaps because he is frustrated with the conversation). So applying Gricean reasoning to deduce B's intended meaning is incorrect.

If A asks "why are you changing the subject?", B's answer would likely be something along the lines of "And your mother's face is ugly too!".

Replies from: army1987
comment by A1987dM (army1987) · 2013-09-14T14:14:07.113Z · LW(p) · GW(p)

B here is violating Grice's maxims. That's the point. He's not following the cooperative principle.

Then he doesn't get to complain when people mis-get his point.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-09-14T19:34:02.684Z · LW(p) · GW(p)

My point is that Grice's maxims, useful though they are, do not fully capture how human conversation goes — most notably, those cases in which at least one party has a hostile or uncooperative attitude toward the other. People in such cases do get that they're being insulted or whatever; A, as you portray him, comes off as simply bad at understanding non-literal meaning (or he is being intentionally obstructive/pedantic).

comment by wedrifid · 2013-09-05T14:35:34.621Z · LW(p) · GW(p)

"You have an ugly face, so you're wrong" is ad hominem. "You have an ugly face" is not. It's just a statement. Did the speaker imply the second part? Maybe... but probably not.

I contest the empirical claim you are making about human behaviour. That reply in that context very nearly always constitutes arguing against the point the other is making. In particular, the example to which you are replying most definitely is an example of a fallacious ad hominem.

A: The ugliness of my face has no bearing on moral...

In common practice it does. The rules do change based on attractiveness. (Tangential.)

Replies from: army1987
comment by A1987dM (army1987) · 2013-09-06T08:54:57.374Z · LW(p) · GW(p)

In common practice it does. The rules do change based on attractiveness. (Tangential.)

But A hadn't specified who the stabber is or who the stabbee is.

comment by Transfuturist · 2013-09-03T21:52:49.571Z · LW(p) · GW(p)

The effect of the fallacy can be implied, can't it?

Replies from: wedrifid
comment by wedrifid · 2013-09-05T14:40:21.690Z · LW(p) · GW(p)

The effect of the fallacy can be implicated, can't it?

Can be and usually is (implied).

comment by Nomad · 2013-09-03T17:28:00.573Z · LW(p) · GW(p)

The “I blundered and lost, but the refutation was lovely!” scenario is something lovers of truth and beauty can appreciate.

Jeremy Silman

comment by Eugine_Nier · 2013-09-02T04:15:32.310Z · LW(p) · GW(p)

If you don’t study philosophy you’ll absorb it anyway, but you won’t know why or be able to be selective.

idontknowbut@gmail.com

Replies from: Viliam_Bur, DanArmak, wedrifid, somervta
comment by Viliam_Bur · 2013-09-07T15:06:43.031Z · LW(p) · GW(p)

It works similarly for psychology. People who study psychology learn a dozen different explanations of human thinking and behavior, so the smarter among them know these things are far from settled, and that perhaps there is no simple answer that explains everything. On the other hand, some people just read a random book on psychology, and they believe they understand everything completely.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-09-09T00:49:29.721Z · LW(p) · GW(p)

Or don't read any books and simply pick it up by osmosis.

comment by DanArmak · 2013-09-02T18:32:02.946Z · LW(p) · GW(p)

The same is broadly true of e.g. pop music or politics: you can't really escape them. It's not necessarily a reason to study them, though.

comment by wedrifid · 2013-09-09T08:02:36.679Z · LW(p) · GW(p)

If you don’t study philosophy you’ll absorb it anyway, but you won’t know why or be able to be selective.

This seems true. What I am curious about is whether it remains true if you substitute "don't" with "do". Those that do study philosophy have not on average impressed me with their ability to discriminate among the bullshit.

Replies from: somervta
comment by somervta · 2013-09-09T09:28:55.794Z · LW(p) · GW(p)

it seems to me that you are identifying 'study philosophy' as 'take philosophy courses/study academic philosophy/etc', which may not have been the intent of the OP

comment by somervta · 2013-09-02T07:01:47.914Z · LW(p) · GW(p)

Won't be as able to be selective, maybe, although many here would argue that studying philosophy will decrease the quality of your bullshit meter rather than improve it.

Replies from: private_messaging
comment by private_messaging · 2013-09-03T18:23:15.828Z · LW(p) · GW(p)

I think that is most definitely false, because many of the ideas in philosophy contradict each other, and you get good exposure to contradictory good-looking arguments, which teaches you to question such arguments in general.

Popular science books, on the other hand, often tend to explain true conclusions using fallacious arguments.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-09-04T23:27:18.654Z · LW(p) · GW(p)

To steel-man somervta's point, it might be that philosophy decreases the quality of your bullshit meter by making it overactive. I don't find it plausible that philosophy generally makes people hyper-credulous, but I could buy that it generally makes people hyperskeptical, quibbling, self-undermining, and/or directionless.

comment by Salemicus · 2013-09-03T19:20:38.967Z · LW(p) · GW(p)

[This claim] is like the thirteenth stroke of a crazy clock, which not only is itself discredited but casts a shade of doubt over all previous assertions.

A. P. Herbert, Uncommon Law.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-09-04T05:03:45.269Z · LW(p) · GW(p)

Caution in applying such a principle seems appropriate. I say this because I've long since lost track of how often I've seen on the Internet, "I lost all respect for X when they said [perfectly correct thing]."

Replies from: Salemicus, Zvi, wedrifid
comment by Salemicus · 2013-09-04T13:36:55.226Z · LW(p) · GW(p)

I agree. It strengthens your point to note that, although the quote is normally used seriously, the author intended it mischievously. In context, the "thirteenth stroke" is a defendant, who has successfully rebutted all the charges against him, making the additional claim that "this [is] a free country and a man can do what he likes if he does nobody any harm."

This "crazy" claim convinces the judge to convict him anyway.

comment by Zvi · 2013-09-04T11:57:17.157Z · LW(p) · GW(p)

For most people, is it necessarily wrong to lose all respect for someone in response to a true statement? Most people are respecting things other than truth, and the point "anyone respectable would have known not to say that" can remain perfectly valid.

comment by wedrifid · 2013-09-13T00:07:39.604Z · LW(p) · GW(p)

Caution in applying such a principle seems appropriate. I say this because I've long since lost track of how often I've seen on the Internet, "I lost all respect for X when they said [perfectly correct thing]."

I don't lose all respect for X based on one thing they say, but I do increase my respect for them if the controversial or difficult things they say are correct, and I conserve expected evidence.
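
For reference, "conserving expected evidence" is the standard constraint that your current credence equals the expectation of your future credence:

\[
P(H) = P(H \mid E)\,P(E) + P(H \mid \neg E)\,P(\neg E)
\]

So if a controversial claim of X's turning out correct raises my respect for X, then the same claim turning out wrong must lower it by a corresponding amount.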

comment by arundelo · 2013-09-26T17:17:23.045Z · LW(p) · GW(p)

We have this shared concept that there's some baseline level of effort, at which point you've absolved yourself of finger-pointing for things going badly. [.... But t]here are exceptional situations where the outcome is more important than what you feel is reasonable to do.

-- Tim Evans-Ariyeh

Replies from: Nectanebo
comment by Nectanebo · 2013-09-27T14:29:44.221Z · LW(p) · GW(p)

Ideally, it would be nice if the world could move towards caring about the full outcome over factors like the satisfaction of baseline levels of effort in more and more situations, not just exceptional ones.

comment by NancyLebovitz · 2013-09-13T12:49:38.372Z · LW(p) · GW(p)

Personally, a huge breakthrough for me was realizing I could view social situations as information-gathering opportunities (as opposed to pass-fail tests). If something didn't work - that wasn't a fail, it was DATA. If something did work... also data. I could experiment! People's reactions weren't eternal judgments about my worth, but interesting feedback on the approach I had chosen that day.

parodie

comment by gwern · 2013-09-10T16:18:02.454Z · LW(p) · GW(p)

"...By the end of August, I was mentally drained, more drained, I think, than I had ever been. The creative potential, the capacity to solve problems, changes in a man in ebbs and flows, and over this he has little control. I had learned to apply a kind of test. I would read my own articles, those I considered the best. If I noticed in them lapses, gaps, if I saw that the thing could have been done better, my experiment was successful. If, however, I found myself reading with admiration, that meant I was in trouble."

His Master's Voice, Stanislaw Lem; p. 106 from the Northwestern University Press 3rd edition, 1999

Replies from: Kawoomba
comment by Kawoomba · 2013-09-10T16:35:07.312Z · LW(p) · GW(p)

over this he has little control

I like the self-test idea, but this sort of defeatism is kind of, well, self-defeating.

Replies from: gwern
comment by gwern · 2013-10-02T21:17:21.931Z · LW(p) · GW(p)

I think it's true. Short of crude measures like stimulants, it does seem to ebb and flow for no obvious reasons. And it's useful to know if you're currently in a doldrum - you can give up forcing yourself to try to work on creative material, and turn to all the usual chores and small tasks that build up.

comment by Eugine_Nier · 2013-09-02T20:16:25.146Z · LW(p) · GW(p)

Another bad indication is when we feel sorry for people applying for the program. We used to fund people because they seemed so well meaning. We figured they would be so happy if we accepted them, and that they would get their asses kicked by the world if we didn't. We eventually realized that we're not doing those people a favor. They get their asses kicked by the world anyway.

Paul Graham

comment by lavalamp · 2013-09-23T21:42:20.885Z · LW(p) · GW(p)

You asked us to make them safe, not happy!

--"Adventure Time" episode "The Businessmen": the zombie businessmen are explaining why they are imprisoning soft furry creatures in a glass bowl.

comment by JQuinton · 2013-09-04T15:47:11.585Z · LW(p) · GW(p)

Somebody could give me this glass of water and tell me that it’s water. But there’s a lot of clear liquids out there and I might actually have a real case that this might not be water. Now most cases when something like a liquid is in a cup it’s water.

A good way to find out if it’s water is to test if it has two hydrogens per oxygen in each molecule in the glass and you can test that. If it evaporates like water, if it tastes like water, freezes like water… the more tests we apply, the more sure we can be that it’s water.

However, if it were some kind of acid and we started to test and we found that the hydrogen count is off, the oxygen count is off, it doesn’t taste like water, it doesn’t behave like water, it doesn’t freeze like water, it just looks like water. If we start to do these tests, the more we will know the true nature of the liquid in this glass. That is how we find truth. We can test it any number of ways; the more we test it, the more we know the truth of what it is that we’re dealing with.

  • An ex-Mormon implicitly describing Bayesian updates
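
The analogy maps cleanly onto sequential Bayesian updating: each passed test multiplies the odds in favor of "water". A minimal sketch in Python, where the prior and the two likelihoods are made-up numbers purely for illustration:

```python
def update(prior, p_pass_if_water, p_pass_if_not_water):
    """Bayes' rule: posterior probability of "water" after one passed test."""
    joint_water = prior * p_pass_if_water
    joint_not_water = (1 - prior) * p_pass_if_not_water
    return joint_water / (joint_water + joint_not_water)

belief = 0.90  # prior: "most cases when ... a liquid is in a cup it's water"
for n in range(1, 6):  # five independent tests, each one passed
    belief = update(belief, p_pass_if_water=0.95, p_pass_if_not_water=0.30)
    print(f"after test {n}: P(water) = {belief:.4f}")
```

A failed test updates in the other direction, using the complements of the same likelihoods; that is the acid scenario in the quote, where each additional test drives the posterior away from "water" instead of towards it.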
Replies from: arborealhominid
comment by arborealhominid · 2013-09-05T00:08:47.903Z · LW(p) · GW(p)

Another good one from the same source:

Truth can be sliced and analyzed in 100 different ways and it will always remain true.

Falsehood on the other hand can only be sliced a few different ways before it becomes increasingly obvious that it is false.

comment by jimmy · 2013-09-03T01:47:02.925Z · LW(p) · GW(p)

"To know thoroughly what has caused a man to say something is to understand the significance of what he has said in its very deepest sense." -Willard F. Day

Replies from: Pavitra
comment by Pavitra · 2013-09-04T17:41:30.727Z · LW(p) · GW(p)

On the other hand, one should consider not only what was said, but also what should have been said.

comment by shminux · 2013-09-16T16:54:16.123Z · LW(p) · GW(p)

Rationality wakes up last:

In those first seconds, I'm always thinking some version of this: "Oh, no!!! This time is different. Now my arm is dead and it's never getting better. I'm a one-armed guy now. I'll have to start drawing left-handed. I wonder if anyone will notice my dead arm. Should I keep it in a sling so people know it doesn't work or should I ask my doctor to lop it off? If only I had rolled over even once during the night. But nooo, I have to sleep on my arm until it dies. That is so like me. What happens if I sleep on the other one tomorrow night? Can I learn to use a fork with my feet?"

Then at about the fifth second, some feeling returns to my arm and I experience hope. I also realize that if people could lose their arms after sleeping on them there wouldn't be many people left on earth with two good arms. Apparently the rational part of my mind wakes up last.

Scott Adams on waking up with a numb arm.

Replies from: ViEtArmis, MugaSofer, Desrtopa
comment by ViEtArmis · 2013-09-17T21:16:38.161Z · LW(p) · GW(p)

I woke up one time with both arms completely numb. I tried to turn the light on and instead fell out of bed. I felt certain that I was going to die right then.

comment by MugaSofer · 2013-09-17T08:10:43.509Z · LW(p) · GW(p)

Never had this exact experience - I don't sleep on my arm - but waking up stupid? Definitely.

comment by Desrtopa · 2013-09-16T23:36:55.710Z · LW(p) · GW(p)

Odd, this has never happened to me. Not the experience of waking up with a numb arm, but the experience of being at all worried about it.

I was quite worried the first time I experienced a numb arm which was both completely dead to sensation and totally immobile for multiple minutes, but after that had happened before, successive occurrences were no longer particularly worrying.

Replies from: bbleeker
comment by Sabiola (bbleeker) · 2013-09-17T07:59:46.309Z · LW(p) · GW(p)

I've experienced 'pins and needles' many times, but a totally 'dead' arm only once. I didn't have any control over it, and when I tried to move it I hit myself in the nose. Quite hard, too!

Replies from: Desrtopa
comment by Desrtopa · 2013-09-17T14:12:33.167Z · LW(p) · GW(p)

When I experienced a "totally dead arm," I didn't just not have control over it, I couldn't even wiggle my fingers. It was pretty frightening, since as far as I knew the arm might have experienced extensive cell death from blood deprivation; after all, I had no sign of it being operational at all. My circulation was poor enough that I couldn't even tell if it was still warm, beyond residual heat from my lying on it.

It's happened twice again since then though, and the successive occasions were not particularly distressing.

Replies from: ShardPhoenix, bbleeker
comment by ShardPhoenix · 2013-10-05T04:33:18.075Z · LW(p) · GW(p)

IIRC the numbness is caused by nerve compression, not blood-flow cutoff.

edit: Apparently it can be either way: http://www.wisegeek.org/what-are-the-most-common-causes-of-numbness-while-sleeping.htm

edit2: And another source claims it's due to nerves, so I dunno. I do find the nerve explanation more plausible than the blood-flow one.

comment by Sabiola (bbleeker) · 2013-09-18T07:49:33.334Z · LW(p) · GW(p)

I couldn't do anything with the arm either, it just felt as if it wasn't there. It was decades ago, but I think I used my shoulder muscles to try and move it. I was probably scared too, but that part of the memory is quite vague.

comment by Estarlio · 2013-09-04T23:10:25.570Z · LW(p) · GW(p)

Foundations matter. Always and forever. Regardless of domain. Even if you meticulously plug all abstraction leaks, the lowest-level concepts on which a system is built will mercilessly limit the heights to which its high-level “payload” can rise. For it is the bedrock abstractions of a system which create its overall flavor. They are the ultimate constraints on the range of thinkable thoughts for designer and user alike. Ideas which flow naturally out of the bedrock abstractions will be thought of as trivial, and will be deemed useful and necessary. Those which do not will be dismissed as impractical frills — or will vanish from the intellectual landscape entirely. Line by line, the electronic shanty town grows. Mere difficulties harden into hard limits. The merely arduous turns into the impossible, and then finally into the unthinkable.

[...]

The ancient Romans could not know that their number system got in the way of developing reasonably efficient methods of arithmetic calculation, and they knew nothing of the kind of technological paths (i.e. deep-water navigation) which were thus closed to them.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-09-06T07:05:03.165Z · LW(p) · GW(p)

This is why it always bothers me how uninterested most LWers appear to be in the shakiness of the foundations of the basic LW belief set.

Replies from: linkhyrule5
comment by linkhyrule5 · 2013-09-06T14:46:32.344Z · LW(p) · GW(p)

Elaborate?

comment by Salemicus · 2013-09-03T19:11:37.058Z · LW(p) · GW(p)

A man who has made up his mind on a given subject twenty-five years ago and continues to hold his political opinions after he has been proved to be wrong is a man of principle; while he who from time to time adapts his opinions to the changing circumstances of life is an opportunist.

A. P. Herbert, Uncommon Law.

comment by Yahooey · 2013-09-04T19:55:15.070Z · LW(p) · GW(p)

There are no absolute certainties in this universe. A man must try to whip order into a yelping pack of probabilities, and uniform success is impossible.

— Jack Vance, The Languages of Pao

Replies from: wedrifid
comment by wedrifid · 2013-09-12T23:59:53.508Z · LW(p) · GW(p)

There are no absolute certainties in this universe [..] is impossible.

Improbable would seem more appropriate.

comment by Eugine_Nier · 2013-09-02T04:44:42.462Z · LW(p) · GW(p)

One of the penalties for refusing to participate in politics is that you end up being governed by your inferiors.

Plato

Replies from: Jayson_Virissimo, Risto_Saarelma, Martin-2
comment by Jayson_Virissimo · 2013-09-02T04:54:21.642Z · LW(p) · GW(p)

In a democratic republic of over 300 million people, whether or not you "participate in politics" has virtually no effect on whether your rulers are inferior or superior to yourself (unless "participate in politics" is a euphemism for coup d'état).

Replies from: Eugine_Nier, DanArmak
comment by Eugine_Nier · 2013-09-02T20:14:29.087Z · LW(p) · GW(p)

Another case of rationalists failing at collective action.

Replies from: DanArmak
comment by DanArmak · 2013-09-02T21:12:33.504Z · LW(p) · GW(p)

It's not a nation of 300 million rationalists, however.

Replies from: scav
comment by scav · 2013-09-04T11:00:18.462Z · LW(p) · GW(p)

Yet.

And you don't even need a majority of rationalists by headcount. You just need to find and hack the vulnerable parts of your culture and politics where you have a chance of raising people's expectations for rational decision making. Actual widespread ability in rationality skills comes later.

Whenever you feel pessimistic about moving the mean of the sanity distribution, try reading the Bible or the Iliad and see how far we've come already.

Replies from: DanArmak
comment by DanArmak · 2013-09-04T11:43:12.485Z · LW(p) · GW(p)

You just need to find and hack the vulnerable parts of your culture and politics where you have a chance of raising people's expectations for rational decision making.

People don't expect rational decision making from politics, because that's not what politics is for. Politics exists for the sake of power (politics), coordination and control, and of tribalism, not for any sort of decision making. When politicians make decisions, they optimize for political purposes, not for anything external such as economic, scientific, or cultural outcomes.

When people try to make decisions to optimize something external like that, we don't call them politicians; we call them bureaucrats.

If you tried to do what you suggest, you would end up trying not to improve or reform politics, but to destroy it. Good luck with that.

Whenever you feel pessimistic about moving the mean of the sanity distribution, try reading the Bible or the Iliad and see how far we've come already.

Depends on who "we" are. A great many people still believe in the Bible and try to emulate it, or other comparable texts.

Replies from: scav
comment by scav · 2013-09-04T21:33:20.478Z · LW(p) · GW(p)

A little cynical maybe? Politicians don't spend 100% of the time making decisions for purely political reasons. Sometimes they are trying to achieve something, even if broadly speaking the purposes of politics are as you imply.

But of course, most of the people we would prefer to be more rational don't know that's what politics is for, so they aren't hampered by that particular excuse to give up on it. Anyway, they could quite reasonably expect more rational decision making from co-workers, doctors, teachers and others.

I don't think the people making decisions to optimise an outcome are well exemplified by bureaucrats. Try engineers.

Knowing that politics is part of what people do, and that destroying it is impossible, yes I would be trying to improve it, and hope for a more-rational population of participants to reform it. I would treat a claim that the way it is now is eternal and unchangeable as an extraordinary one that's never been true so far. So, good luck with that :)

You aren't seriously suggesting the mean of the sanity distribution hasn't moved a huge amount since the Bible was written? Or even in the last 100 years? I know I'm referring to a "sanity distribution" in an unquantifiable, hand-wavy way, but do you doubt that those people who believe in a literalist interpretation of the Bible are now outliers, rather than the huge majority they used to be?

Replies from: DanArmak
comment by DanArmak · 2013-09-04T21:51:26.523Z · LW(p) · GW(p)

Politicians don't spend 100% of the time making decisions for purely political reasons. Sometimes they are trying to achieve something, even if broadly speaking the purposes of politics are as you imply.

Certainly, they're often trying to achieve something outside of politics in order to gain something within politics. We should strive to give them good incentives so the things they do outside of politics are net benefits to non-politicians.

most of the people we would prefer to be more rational don't know that's what politics is for, so they aren't hampered by that particular excuse to give up on it

So teaching them to be more rational would cause them to be less interested in politics, instead of demanding that politicians be more rational-for-the-good-of-all. I'm not sure if that's a good or bad thing in itself, but at least they wouldn't waste so much time obsessing over politics. Being apolitical also enhances cooperation.

they could quite reasonably expect more rational decision making from co-workers, doctors, teachers and others.

That's very true, it just has nothing to do with politics. I'm all for making people more rational in general.

Knowing that politics is part of what people do, and that destroying it is impossible, yes I would be trying to improve it, and hope for a more-rational population of participants to reform it

Politicians can be rational. It's just that they would still be rational politicians - they would use their skills of rationality to do more of the same things we dislike them for doing today. The problem isn't irrationally practiced politics, it's politics itself.

I would treat a claim that the way it is now is eternal and unchangeable as an extraordinary one that's never been true so far.

It's changed a lot over the past, but not in this respect: I think no society on the scale of millions of people has ever existed that wasn't dominated by one or another form of politics harmful to most of its residents.

You aren't seriously suggesting the mean of the sanity distribution hasn't moved a huge amount since the Bible was written? Or even in the last 100 years? I know I'm referring to a "sanity distribution" in an unquantifiable hand-wavy way, but do you doubt that those people who believe in a literalist interpretation of the Bible are now outliers, rather that the huge majority they used to be?

Indeed, it depends on how you measure sanity. On the object level of the rules people follow, things have gotten much better. But on the more meta level of how people arrive at beliefs, judge them, and discard them, the vast majority of humanity is still firmly in the camp of "profess to believe whatever you're taught as a child, go with the majority, compartmentalize like hell, and be offended if anyone questions your premises".

comment by DanArmak · 2013-09-02T18:30:01.127Z · LW(p) · GW(p)

A democratic republic is not necessary. In any kind of political regime encompassing 300 million people, your participation in politics has very small expected effect on whether your rulers are inferior to you.

comment by Risto_Saarelma · 2013-09-02T15:58:19.422Z · LW(p) · GW(p)

This seems a bit mangled. The original in The Republic talks about refusing to rule, not refusing to go into politics. Makes it a bit less of a snappy exhortation for your fellow monkeys to gang up on the other monkeys for the price of actually making more sense.

Replies from: MugaSofer
comment by MugaSofer · 2013-09-02T20:44:06.446Z · LW(p) · GW(p)

"One of the penalties for not ruling the world is that it gets ruled by other people." - clearly superior quote

comment by Martin-2 · 2013-09-06T05:13:16.778Z · LW(p) · GW(p)

One of the penalties for participating in politics is that your superiors end up being governed by their inferiors.

comment by iDante · 2013-09-02T03:08:53.824Z · LW(p) · GW(p)

At which point, Polly decided that she knew enough of the truth to be going on with. The enemy wasn't men, or women, or the old, or even the dead. It was just bleedin' stupid people, who came in all varieties. And no one had the right to be stupid.

  • Terry Pratchett, Monstrous Regiment
Replies from: DanArmak, sketerpot
comment by DanArmak · 2013-09-02T18:37:41.158Z · LW(p) · GW(p)

However, to set yourself against all the stupidity in the world is an insurmountable task.

"Professor, I have to ask, when you see something all dark and gloomy, doesn't it ever occur to you to try and improve it somehow? Like, yes, something goes terribly wrong in people's heads that makes them think it's great to torture criminals, but that doesn't mean they're truly evil inside; and maybe if you taught them the right things, showed them what they were doing wrong, you could change -"

Professor Quirrell laughed, then, and not with the emptiness of before. "Ah, Mr. Potter, sometimes I do forget how very young you are. Sooner you could change the color of the sky."

Replies from: somervta
comment by somervta · 2013-09-04T10:18:38.779Z · LW(p) · GW(p)

Sooner you could change the color of the sky."

You know, that's really not so implausible...

Replies from: CronoDAS, DanArmak
comment by CronoDAS · 2013-09-20T21:59:03.983Z · LW(p) · GW(p)

Apparently, both particulate air pollution and streetlights are capable of this.

http://blogs.discovermagazine.com/crux/2012/08/23/why-is-the-night-sky-turning-red/

comment by DanArmak · 2013-09-04T11:45:25.447Z · LW(p) · GW(p)

Professor Quirrell was not being ironic.

comment by sketerpot · 2013-09-04T08:31:22.191Z · LW(p) · GW(p)

That's a surprisingly forgiving thing to say. She lives in a place where eating legs to prevent starvation is a venerable military tradition, and a non-zero number of people end up in the Girls' Working School.

comment by Kawoomba · 2013-09-25T19:56:08.422Z · LW(p) · GW(p)

I believe that the final words man utters on this Earth will be "It worked!"; it'll be an experiment that isn't misused, but will be a rolling catastrophe. (...) Curiosity killed the cat, and the cat never saw it coming.

Jon Stewart, talking to Richard Dawkins (S18, E156)

Replies from: snafoo
comment by snafoo · 2013-10-01T00:27:16.467Z · LW(p) · GW(p)

Let's get one thing straight: ignorance killed the cat.

Curiosity was framed.

comment by jsbennett86 · 2013-09-14T21:41:19.489Z · LW(p) · GW(p)

Reality is one honey badger. It don’t care. About you, about your thoughts, about your needs, about your beliefs. You can reject reality and substitute your own, but reality will roll on, eventually crushing you even as you refuse to dodge it. The best you can hope for is to play by reality’s rules and use them to your benefit.

Mark Crislip - Science-Based Medicine

Replies from: ChristianKl
comment by ChristianKl · 2013-09-20T19:43:57.926Z · LW(p) · GW(p)

Reality is one honey badger. It don’t care. About you, about your thoughts, about your needs, about your beliefs.

Reality cares about your beliefs.

People who don't believe in ego depletion don't get as ego-depleted as people who do believe in it.
People who believe that stress is unhealthy have higher mortality when they are under high stress than people who don't hold that belief.

Replies from: DanielLC, Benito
comment by DanielLC · 2013-09-23T01:32:36.800Z · LW(p) · GW(p)

I would expect that if you have more ego depletion than other people it would result in you being more likely to believe in ego depletion. Similarly, if you're suffering health problems due to stress, it would make you think stress is unhealthy.

Your point still stands. Reality does care about your beliefs when the relevant part of reality is you.

Replies from: army1987, ChristianKl
comment by A1987dM (army1987) · 2013-09-24T14:11:47.217Z · LW(p) · GW(p)

I'd guess that there is causation in both directions to some extent, leading to a positive feedback loop.

comment by ChristianKl · 2013-09-23T10:53:07.359Z · LW(p) · GW(p)

I would expect that if you have more ego depletion than other people it would result in you being more likely to believe in ego depletion.

How high is your confidence that the effect can be completely explained that way?

Replies from: DanielLC
comment by DanielLC · 2013-09-24T00:11:01.838Z · LW(p) · GW(p)

Not that high, but it does throw into question any studies showing a correlation, and it seems strange to cite an example there's no evidence for.

Replies from: ChristianKl
comment by ChristianKl · 2013-09-24T13:30:50.441Z · LW(p) · GW(p)

Not that high

What does that mean in numbers?

comment by Ben Pace (Benito) · 2013-09-21T10:00:44.240Z · LW(p) · GW(p)

The placebo effect has little relevant effect. People who believe they can fly don't fare better when pushed off cliffs. A world where you believe x is different from a world where you believe not-x, and that has slight physical effects given that we are embodied, but to say 'Reality cares about your beliefs' sounds far too much like a defence of idealism, or the idea that 'everyone has their own truths'.

Replies from: ChristianKl
comment by ChristianKl · 2013-09-21T12:30:55.325Z · LW(p) · GW(p)

The placebo effect has little relevant effect.

I'm not sure whether that's true; the last time I investigated that claim I didn't find the evidence compelling. Placebos are also a relatively clumsy way of changing beliefs intentionally.

People who believe they can fly don't fare better when pushed off cliffs.

How do you know? If you pick a height that kills 50% of the people who don't believe they can fly, I'm not sure that the number of people killed would be the same for those who hold that belief. The belief is likely to make people more relaxed when they are pushed over the cliff, which is helpful for surviving the experience.

I doubt that you'll find many people who hold that belief with the same certainty with which they believe the sun will rise tomorrow. If you don't like idealism, argue based on the beliefs that people actually hold in reality instead of escaping into thought experiments.

A world where you believe x is different from a world where you believe not-x, and that has slight physical effects given that we are embodied,

I would call 20,000 dead Americans per year from the belief that stress is unhealthy more than a slight physical effect.

'Reality cares about your beliefs' sounds far to much like a defence of idealism

I don't think the fact that you pattern-match it that way speaks against the idea. I think the original quote comes from a place of Descartes-inspired mind-body dualism. We are embodied, and the contents of our minds have effects.

Replies from: army1987
comment by A1987dM (army1987) · 2013-09-22T10:17:19.504Z · LW(p) · GW(p)

I doubt that you find many people who hold that belief with the same certainity that they believe the sun rises tomorrow. If you don't like idealism, argue based on the beliefs that people actually hold in reality instead of escaping into thought experiments.

The original quote is taken from an article about the vaccine controversy. People who don't vaccinate because they believe that God will protect them or whatever actually exist, and they may be slightly less likely to fall ill than people who don't vaccinate but don't hold that belief, but a lot more likely to fall ill than people who do vaccinate.

Replies from: ChristianKl
comment by ChristianKl · 2013-09-22T10:41:25.166Z · LW(p) · GW(p)

I think that there are few Christians who believe that no Christian who doesn't vaccinate will get measles.

Many Christians do believe that there's evil in the world. They believe that, for some complicated reason they don't understand, sometimes God will allow evil to exist. God is supposed to be an agent whose actions a mere human can't predict with certainty.

According to that frame, if someone gets measles it's because God wanted that to happen. If on the other hand a child dies because of an adverse reaction to a vaccine that the doctor gave the child, then the parent shares responsibility for that harm because he allowed the vaccination.

I also don't know how the example of Japan is supposed to convince any Christian that his supposed belief that God prevents measles in believing Christians is wrong.

While we are on the topic of the effect of beliefs, I don't think there is good research on how the beliefs people hold affect whether they get illnesses. Part of the reason is that most doctors who do studies about the immune system don't think that beliefs are in their domain, because they study the body and not the mind.

comment by jsbennett86 · 2013-09-15T22:47:47.584Z · LW(p) · GW(p)

A term that means almost anything means almost nothing. Such a term is a convenient device for those who have almost nothing to say.

Richard Mitchell - Less Than Words Can Say

Replies from: Mestroyer
comment by Mestroyer · 2013-09-16T16:30:36.218Z · LW(p) · GW(p)

Counterexample: "it".

Replies from: Vaniver
comment by Vaniver · 2013-09-16T16:34:29.873Z · LW(p) · GW(p)

That seems to support the quote, actually: "it" typically has a single antecedent, or a small enough set that the correct antecedent can be easily identified by context. When it cannot be identified by context, this is seen as a writing error (such as here, here, or here).

Replies from: Document
comment by Document · 2013-09-16T17:59:03.971Z · LW(p) · GW(p)

Possible counterexamples (there are probably better ones):

Replies from: Vaniver, LM7805
comment by Vaniver · 2013-09-16T19:02:29.041Z · LW(p) · GW(p)

For each of those, the meaning of "it" is clear from context. If it weren't, then it would be uncommunicative writing.

comment by LM7805 · 2013-09-23T21:45:46.108Z · LW(p) · GW(p)

All of these are dummy subjects. English does not allow a null anaphor in subject position; there are other languages that do. ("There", in that last clause, was also a dummy pronoun.)

comment by aausch · 2013-09-07T19:13:02.708Z · LW(p) · GW(p)

“The first magical step you can do after a flood,” he said, “is get a pump and try to redirect water.”

-- Richard James, founding priest of a Toronto based Wicca church, quoted in a thegridto article

comment by NancyLebovitz · 2013-09-04T14:43:55.972Z · LW(p) · GW(p)

The merit of The Spy Who Came in from the Cold, then – or its offence, depending where you stood – was not that it was authentic, but that it was credible.

John le Carré, explaining that he didn't have insider information about the intelligence community, and that if he had, he would not have been allowed to publish The Spy Who Came in from the Cold, but that a great many people who thought James Bond was too implausible wanted to believe that le Carré's book was the real deal.

comment by Eugine_Nier · 2013-09-02T04:48:56.983Z · LW(p) · GW(p)

When you have to talk yourself into something, it’s a bad sign

Paul Graham

Replies from: AndHisHorse, MugaSofer, Stabilizer
comment by AndHisHorse · 2013-09-02T05:03:34.603Z · LW(p) · GW(p)

Yes, but it can be a bad sign either about what you're trying to talk yourself into, or about your state of mind. It simply means that your previous position was held strongly - and not because of strong rational evidence alone, since stronger evidence could override that: assimilating the new information would preclude having to talk yourself into it. If you have to talk yourself into something, it probably means that there is an irrational aspect to your attachment to the alternative.

And that irrational, often emotional attachment can be either right or wrong; were this not true, gut feeling would answer every question truthfully, and the first plausible explanation one could think of would always be correct.

Replies from: None
comment by [deleted] · 2013-09-02T10:52:49.548Z · LW(p) · GW(p)

I interpreted the quote as saying that if you are not readily enthusiastic about something but have to beat yourself into doing it, then it is a sign that you should not direct (any more) resources to it.

Replies from: AndHisHorse
comment by AndHisHorse · 2013-09-02T13:54:46.156Z · LW(p) · GW(p)

As did I, but I disagreed with the point that enthusiasm is a necessary indicator of a good idea. Consider the act of eating one's vegetables (assuming that one is a small, stereotypical child) - intuitively repulsive, but ultimately beneficial, the sort of thing which one might have to talk oneself into.

comment by MugaSofer · 2013-09-02T20:39:13.852Z · LW(p) · GW(p)

Y'know, there are all sorts of counterexamples to this... but I think it's still a bad sign, if not a definitive one, on the basis that if I had been more suspicious of things I was talking myself into, I would have had a definite net benefit to my life. (Not counting times I was neurohacking myself, admittedly, but that's not really the same.)

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-09-02T20:50:48.250Z · LW(p) · GW(p)

Yes, there's an unfortunate tendency among some "rationalist" types to dismiss heuristics because they don't apply in every situation.

comment by Stabilizer · 2013-09-02T20:31:12.314Z · LW(p) · GW(p)

I've had to talk myself into going on some crazy roller-coasters. After the experience though, I'm extremely glad that I did.

comment by Polina · 2013-09-25T16:02:57.801Z · LW(p) · GW(p)

Satisfy the need to belong in balance with two other human needs—to feel autonomy and competence—and the typical result is a deep sense of well-being.

Myers, D. G. (2012). Exploring social psychology (6th ed.). New York: McGraw-Hill, P.334.

Replies from: simplicio
comment by simplicio · 2013-09-27T14:16:20.840Z · LW(p) · GW(p)

So basically: be close to friends and family, save some money, find a job you're good at.

Replies from: Polina
comment by Polina · 2013-09-27T16:30:43.861Z · LW(p) · GW(p)

That's close to my understanding of the quote. I suppose, "autonomy" means not just financial independence, but the sense of inner self, something beyond social roles.

comment by Ben Pace (Benito) · 2013-09-07T09:57:19.723Z · LW(p) · GW(p)

Secondly, you might have the nagging feeling that not much has happened, really. We wanted an answer to the question "What is truth?", and all we got was trivial truth-equivalences, and a definition of truth for sentences with certain expressions, that showed up again on the right-hand side of that very definition. If that is on your mind, then you should go back to the beginning of this lecture, and ask yourself, "What kind of answer did I expect?" to our initial question. Reconsider, "What is 'grandfather-hood'?". Well, define it in familiar terms. What is 'truth'? Well, define it in familiar terms. That's what we did. If that's not good enough, why?

by Hannes Leitgeb, from his joint teaching course with Stephan Hartmann (author of Bayesian Epistemology) on Coursera, entitled 'An Introduction to Mathematical Philosophy'.

The course topics are "Infinity, Truth, Rational Belief, If-Then, Confirmation, Decision, Voting, and Quantum Logic and Probability". In many ways, a very LW-friendly course, with many mentions and discussions of people like Tarski, Gödel etc.
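For readers unfamiliar with the reference, here is a minimal sketch of the standard Tarskian material the lecture alludes to, reconstructed from textbook convention rather than taken from the course itself: the T-schema, plus a recursive clause in which truth reappears on the right-hand side of its own definition, exactly as the quote describes.

```latex
% T-schema (Tarski's material adequacy condition), e.g.
%   "snow is white" is true  iff  snow is white:
\mathrm{True}(\ulcorner \varphi \urcorner) \leftrightarrow \varphi

% A recursive clause for a connective -- note that truth shows up
% again on the right-hand side of the definition:
\mathrm{True}(\ulcorner \varphi \wedge \psi \urcorner) \leftrightarrow
  \mathrm{True}(\ulcorner \varphi \urcorner) \wedge \mathrm{True}(\ulcorner \psi \urcorner)
```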

comment by amitpamin · 2013-09-05T19:39:35.414Z · LW(p) · GW(p)

Professor Zueblin is right when he says that thinking is the hardest work many people ever have to do, and they don't like to do any more of it than they can help. They look for a royal road through some short cut in the form of a clever scheme or stunt, which they call the obvious thing to do; but calling it doesn't make it so. They don't gather all the facts and then analyze them before deciding what really is the obvious thing.

From Obvious Adams, a business book published in 1916.

comment by AlanCrowe · 2013-09-02T11:57:10.435Z · LW(p) · GW(p)

For the most part the objects which approve themselves to us are not so much the award of well-deserved certificates --- which is supposed by the mass of unthinking people to be the main object --- but to give people something definite to work for; to counteract the tendency to sipping and sampling which so often defeats the aspirations of gifted beings,...

--- Sir Hubert Parry, speaking to The Royal College of Music about the purpose of music examinations

Initially I thought this a wonderful quote because, looking back at my life, I could see several defeats (not all in music) attributable to sipping and sampling. But Sir Hubert is speaking specifically about music. The context tells you Sir Hubert's proposed counter to sipping and sampling: individual tuition aiming towards an examination in the form of a viva.

The general message is "counter the tendency to sipping and sampling by finding something definite to work for, analogous to working one's way up the Royal College of Music grade system". But working out the analogy is left as an exercise for the reader, so the general message, if Sir Hubert intended it at all, is rather feeble.

comment by Richard_Kennaway · 2013-09-06T16:48:15.884Z · LW(p) · GW(p)

Do not deceive yourself with idle hopes
That in the world to come you will find life
If you have not tried to find it in this present world.

Theophanis the Monk, "The Ladder of Divine Grace"

comment by Jay_Schweikert · 2013-09-22T17:45:50.652Z · LW(p) · GW(p)

Zortran, do you ever wonder if it's all just meaningless?

What's "meaningless?"

It's like... wait, really? You don't have that word? It's a big deal over here.

No. Is it a good word? What does it do?

It's sort of like... what if you aren't important? You know... to the universe.

Wait... so humans have a word to refer to the idea that it'd be really sad if all of reality weren't focused on them individually?

Kinda, yeah.

We call that "megalomania."

Well, you don't have to be a jerk about it.

Saturday Morning Breakfast Cereal

comment by Estarlio · 2013-09-13T14:54:44.385Z · LW(p) · GW(p)

"The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."

  • Nietzsche, Morgenröte. Gedanken über die moralischen Vorurteile
comment by ITakeBets · 2013-09-04T21:00:31.001Z · LW(p) · GW(p)

Q: Why are Unitarians lousy singers? A: They keep reading ahead in the hymnal to see if they agree with it.

comment by Eugine_Nier · 2013-09-06T07:32:37.475Z · LW(p) · GW(p)

Furthermore, to achieve justice -- to deter, to exact retribution, to make whole the victim, or to heal the sick criminal, whichever one or more of these we take to be the goal of justice -- we must almost always respond to force with force. Taken in isolation that response will itself look like an initiation of force. Furthermore, to gather the evidence we need in most cases to achieve sufficiently high levels of confidence -- whether balance of the probabilities, clear and convincing evidence, or beyond a reasonable doubt -- we often have to initiate force with third parties -- to compel them to hand over goods, to let us search their property, or to testify. If politics could be deduced this might be called the Central Theorem of Politics -- we can't properly respond to a global initiation of force without local initiations of force.

Nick Szabo

Replies from: Torello
comment by Torello · 2013-09-06T13:23:53.001Z · LW(p) · GW(p)

Is this a similar message to Penn Jillette saying:

"If you don’t pay your taxes and you don’t answer the warrant and you don’t go to court, eventually someone will pull a gun. Eventually someone with a gun will show up. "

or did I miss the boat?

Replies from: Manfred
comment by Manfred · 2013-09-06T18:27:44.369Z · LW(p) · GW(p)

Well, it's similar, but for two differences:

1) It uses a different and wider category of examples. Viz. "initiate force [...] to compel them to hand over goods, to let us search their property, or to testify."

2) It makes a consequentialist claim about forcing people to e.g. let us search their property for evidence: "we can't properly respond to a global initiation of force without local initiations of force."

The second difference here is important because it directly contradicts the typical libertarian claim of "if we force people to do things much less than we currently do, that will lead to good consequences." The first difference is rhetorically important because it is a place where people's gut reaction is more likely to endorse the use of force, and people have been less exposed to memes about forcibly searching people's property (compared to the ubiquity of people disliking taxes) that would cause them to automatically respond rather than thinking.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-09-09T01:27:38.127Z · LW(p) · GW(p)

The second difference here is important because it directly contradicts the typical libertarian claim of "if we force people to do things much less than we currently do, that will lead to good consequences."

Actually that isn't what Szabo is saying. His point is to contradict the claim of the anarcho-capitalists that "if we never force people to do things, that will lead to good consequences."

comment by brainoil · 2013-09-05T11:41:37.770Z · LW(p) · GW(p)

I was instructed long ago by a wise editor, "If you understand something you can explain it so that almost anyone can understand it. If you don't, you won't be able to understand your own explanation." That is why 90% of academic film theory is bullshit. Jargon is the last refuge of the scoundrel.

Roger Ebert

Replies from: Manfred
comment by Manfred · 2013-09-05T21:25:25.770Z · LW(p) · GW(p)

Would be nice if this were true.

Replies from: brainoil
comment by brainoil · 2013-09-06T08:10:56.179Z · LW(p) · GW(p)

It's probably true for academic film theory. I mean how hard could it really be?

comment by PhoenixWright · 2013-09-02T17:34:17.992Z · LW(p) · GW(p)

Persecution for the expression of opinions seems to me perfectly logical. If you have no doubt of your premises or your power, and want a certain result with all your heart, you naturally express your wishes in law, and sweep away all opposition. To allow opposition by speech seems to indicate that you think the speech impotent, as when a man says that he has squared the circle, or that you do not care wholeheartedly for the result, or that you doubt either your power or your premises... But when men have realized that time has upset many fighting faiths, they may come to believe even more than they believe the very foundations of their own conduct that the ultimate good desired is better reached by free trade in ideas -- that the best test of truth is the power of the thought to get itself accepted in the competition of the market, and that truth is the only ground upon which their wishes safely can be carried out.

  • Oliver Wendell Holmes, Jr.
comment by A1987dM (army1987) · 2013-09-29T14:42:26.085Z · LW(p) · GW(p)

Here’s the bigger point: Americans (and maybe all humans, I’m not sure) are more obsessed with words than with their meanings. I will never understand this as long as I live. Under FCC rules, in broadcast TV you can talk about any kind of depraved sex act you wish, as long as you do not use the word “fuck.” And the word itself is so mysteriously magical that it cannot be used in any way whether the topic is sex or not. “What the fuck?” is a crime that carries a stiff fine –– “I’m going to rape your 8-year-old daughter with a trained monkey,” is completely legal. In my opinion, today’s “gluten-free” cartoon is far more suggestive in an unsavory way than the vampire cartoon, but it doesn’t have a “naughty” word so it’s okay.

Are we a nation permanently locked in preschool? The answer, in the case of language, is yes.

Bizarro Blog

Replies from: Jiro, Eugine_Nier
comment by Jiro · 2013-09-30T00:41:50.411Z · LW(p) · GW(p)

Sorry, this is nonsense. It's not hard to Google up a copy of the FCC rules. http://www.fcc.gov/guides/obscenity-indecency-and-profanity :

"The FCC has defined broadcast indecency as “language or material that, in context, depicts or describes, in terms patently offensive as measured by contemporary community standards for the broadcast medium, sexual or excretory organs or activities.”

I am fairly sure that "I’m going to rape your 8-year-old daughter with a trained monkey" would count as describing sexual activities in patently offensive terms, and would not be allowed when direct use of swear words would not be allowed. Just because you don't use a list of words doesn't mean that what you say will be automatically allowed.

Furthermore, the Wikipedia page on the seven words ( http://en.wikipedia.org/wiki/Seven_dirty_words ) points out that "The FCC has never maintained a specific list of words prohibited from the airwaves during the time period from 6 a.m. to 10 p.m., but it has alleged that its own internal guidelines are sufficient to determine what it considers obscene." It points out cases where the words were used in context and permitted.

In other words, this quote is based on a sound-bite distortion of actual FCC behavior and, as inaccurate research, is automatically ineligible to be a good rationality quote.

Replies from: Lumifer
comment by Lumifer · 2013-09-30T17:09:17.740Z · LW(p) · GW(p)

I am fairly sure that "I’m going to rape your 8-year-old daughter with a trained monkey" would count as describing sexual activities in patently offensive terms, and would not be allowed when direct use of swear words would not be allowed.

What is the basis for you being sure?

Howard Stern, a well-known "shock jock" spent many years on airwaves regulated by the FCC. He more or less specialized in "describing sexual activities in patently offensive terms" and while he had periodic run-ins with the FCC, he, again, spent many years doing this.

The FCC rule is deliberately written in a vague manner to give the FCC discretionary power. As a practical matter, the seven dirty words are effectively prohibited by FCC and other offensive expressions may or may not be prohibited. Broadcasters occasionally test the boundaries and either get away with it or get slapped down.

Replies from: hairyfigment
comment by hairyfigment · 2013-09-30T17:18:45.932Z · LW(p) · GW(p)

Yes, and this illustrates another problem: if we agreed on what to ban, it would make more sense to use discretionary human judgment than rules which might be manipulated or Munchkin-ed. We don't agree.

I do think it would make sense in the abstract to ban speech if we had scientific reason to think it harmed people, the way we had reason to think leaded gasoline harmed people in the 1920s. But I only know one class of speech where that might apply, and it'll never get on TV anyway. ^_^

comment by Eugine_Nier · 2013-09-29T18:31:24.328Z · LW(p) · GW(p)

The reason is that banning certain words works much better as a Schelling point.

Replies from: Desrtopa, Nornagest
comment by Desrtopa · 2013-09-29T19:44:23.964Z · LW(p) · GW(p)

Better for what, and better than what alternatives?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-10-02T04:23:04.399Z · LW(p) · GW(p)

You wind up in endless arguments about whether this particular show is beyond the pale.

Replies from: Desrtopa
comment by Desrtopa · 2013-10-02T14:43:51.851Z · LW(p) · GW(p)

That doesn't seem like it answers my question.

What's the goal in this case? This sounds like it's only attempting to address effectiveness at avoiding disputes over standards, but that could more easily be achieved by not having any restrictions at all.

comment by Nornagest · 2013-09-29T21:29:46.609Z · LW(p) · GW(p)

I don't buy it. Even if we accept for the sake of argument that limiting sexual references on broadcast TV is a good plan (a point that I don't consider settled, by the way), using dirty words as a proxy runs straight into Goodhart's law: the broadcast rules are known in advance, and innuendo's bread and butter to TV writers. A good Schelling point has to be hard to work around, even if you can't draw a strict line; this doesn't qualify.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-09-12T17:36:28.406Z · LW(p) · GW(p)

I was just attempting to troll with some of the most rage-inducing stuff from LW-ish, tech/glibertarian/American elite circles that I've seen recently.

User "Multiheaded" designated self-confessed troll.

Replies from: Multiheaded, Fronken
comment by Multiheaded · 2013-09-12T17:47:16.880Z · LW(p) · GW(p)

...achievement unlocked?

P.S.: Eliezer, as long as you're replying to my drivel anyway, I implore you: please, please read the essay I mentioned - "The Californian Ideology" - in its entirety, if you haven't already.

I'm completely frank and earnest here. It might prompt you to look at your... social and intellectual environment a bit differently in certain regards. I'd say that, looking from the outside in, I'm finding it uncannily spot-on and descriptive for an 18-year-old piece.

comment by Fronken · 2013-09-12T18:36:35.942Z · LW(p) · GW(p)

... seriously Eliezer?

comment by shminux · 2013-09-02T23:54:59.651Z · LW(p) · GW(p)

The idea that God would have an inadequate computer strikes me as somewhat blasphemous.

Peter Shor replying in the comment section of Scott Aaronson's blog post Firewalls.

comment by James_Miller · 2013-09-02T16:07:41.449Z · LW(p) · GW(p)

There is no problem, no matter how difficult, or painful, or seemingly unsolvable, that violence won't make worse.

Breaking Bad, episode Rabid Dog.
(Although "won't" should be "can't".)

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-09-02T20:09:14.075Z · LW(p) · GW(p)

Depending on how the violence is applied, it can also make it better.

comment by shminux · 2013-09-05T17:44:37.497Z · LW(p) · GW(p)

I'm avoiding the term "free will" here because experience shows that using that term turns into a debate about the definition. I prefer to say we're all just particles bumping around. Personally, I don't see how any of those particles, no matter how they are arranged, can sometimes choose to ignore the laws of physics and go their own way.

For purely practical reasons, the legal system assigns "fault" to some actions and excuses others. We don't have a good alternative to that system. But since we are all a bunch of particles bumping around according to the laws of physics (or perhaps the laws of our programmers) there is no sense of "fault" that is natural to the universe.

Slightly edited from Scott Adams' blog.

And a similar sentiment from SMBC comics.

Replies from: simplicio
comment by simplicio · 2013-09-06T20:20:12.970Z · LW(p) · GW(p)

I prefer to say we're all just particles bumping around. Personally, I don't see how any of those particles, no matter how they are arranged, can sometimes choose to ignore the laws of physics and go their own way.

I personally can't see how a monkey turns into a human. But that's irrelevant because that is not the claim of natural selection. This makes a strawman of most positions that endorse something approximately like free will. Also:

For purely practical reasons, the legal system assigns "fault" to some actions and excuses others.

Just the legal system? Gah. Everybody on earth does this about 200 times a day.

Replies from: ChristianKl
comment by ChristianKl · 2013-09-10T18:19:37.036Z · LW(p) · GW(p)

I personally can't see how a monkey turns into a human. But that's irrelevant because that is not the claim of natural selection. This makes a strawman of most positions that endorse something approximately like free will.

I don't think that most positions that endorse free will deny that evolution happens.

When it comes to contemporary philosophers, I think a minority of those who advocate for the existence of free will deny evolution.

Replies from: simplicio
comment by simplicio · 2013-09-10T21:21:34.853Z · LW(p) · GW(p)

I know. I was making an analogy between a strawman of NS and a strawman of free will. Please read the "this" in "This makes a strawman" as referring to the OP.

comment by Torello · 2013-09-04T21:50:49.687Z · LW(p) · GW(p)

But it's not who you are underneath, it's what you do that defines you.

-Rachel Dawes, Batman Begins

comment by Pablo (Pablo_Stafforini) · 2013-09-02T13:41:25.192Z · LW(p) · GW(p)

In sports, […] arguments are not particularly damaging—in fact, they can be fun. The problem is that these same biased processes can influence how we experience other aspects of our world. These biased processes are in fact a major source of escalation in almost every conflict, whether Israeli-Palestinian, American-Iraqi, Serbian-Croatian, or Indian-Pakistani.

In all these conflicts, individuals from both sides can read similar history books and even have the same facts taught to them, yet it is very unusual to find individuals who would agree about who started the conflict, who is to blame, who should make the next concession, etc. In such matters, our investment in our beliefs is much stronger than any affiliation to sport teams, and so we hold on to these beliefs tenaciously. Thus the likelihood of agreement about “the facts” becomes smaller and smaller as personal investment in the problem grows. This is clearly disturbing. We like to think that sitting at the same table together will help us hammer out our differences and that concessions will soon follow. But history has shown us that this is an unlikely outcome; and now we know the reason for this catastrophic failure.

But there’s reason for hope. In our experiments, tasting beer without knowing about the vinegar, or learning about the vinegar after the beer was tasted, allowed the true flavor to come out. The same approach should be used to settle arguments: The perspective of each side is presented without the affiliation—the facts are revealed, but not which party took which actions. This type of “blind” condition might help us better recognize the truth.

Dan Ariely, Predictably Irrational: The Hidden Forces that Shape Our Decisions, New York, 2008, pp. 171-172

Replies from: DanArmak
comment by DanArmak · 2013-09-02T17:55:50.061Z · LW(p) · GW(p)

In all these conflicts, individuals from both sides can read similar history books and even have the same facts taught to them, yet it is very unusual to find individuals who would agree about who started the conflict, who is to blame, who should make the next concession, etc.

In my experience, who started the conflict, who is to blame, etc. is explicitly taught as fact to each side's children. Israelis and Palestinians don't agree on facts at all. A civilized discussion of politics generally requires agreeing not to discuss most past facts.

comment by pragmatist · 2013-09-15T09:58:02.801Z · LW(p) · GW(p)

The failure mode of clever is "asshole."

-- John Scalzi

Replies from: DanArmak
comment by DanArmak · 2013-09-15T10:36:47.165Z · LW(p) · GW(p)

So is the failure mode of many people who are not, and don't hold themselves to be, clever. I fail to see the correlation.

ETA: Scalzi addresses a very specific topic, and even then he really seems to address some specific anecdote that he doesn't share. I don't think it's a rationality quote.

comment by Fronken · 2013-09-12T18:41:39.986Z · LW(p) · GW(p)

I would not dare to call that "Dark Arts".

Fortunately someone else already invented the term "Dark Arts" and that's what it means.

comment by Estarlio · 2013-09-10T15:57:27.072Z · LW(p) · GW(p)

Why are extremism and fanaticism correlated? In a world of Bayesians, there'd be a negative correlation. People would hold extreme views lightly, for at least three reasons. [...]

For fairness' sake.

comment by RolfAndreassen · 2013-09-04T16:30:32.676Z · LW(p) · GW(p)

Trouble rather the tiger in his lair than the sage among his books. For to you kingdoms and their armies are things mighty and enduring, but to him they are but toys of the moment, to be overturned with the flick of a finger.

-- Gordon R. Dickson, "The Tactics of Mistake".

comment by Stabilizer · 2013-09-02T20:52:38.440Z · LW(p) · GW(p)

He had also learned that the sick and unfortunate are far more receptive to traditional magic spells and exorcisms than to sensible advice; that people more readily accept affliction and outward penances than the task of changing themselves, or even examining themselves; that they believe more easily in magic than reason, in formulas than experience.

-Hermann Hesse, The Glass Bead Game

comment by Peter Wildeford (peter_hurford) · 2013-09-02T00:08:07.364Z · LW(p) · GW(p)

"[G]et wisdom: and with all thy getting, get understanding." -- Proverbs 4:7

Replies from: jazmt, jazmt
comment by Yaakov T (jazmt) · 2013-09-02T02:42:28.315Z · LW(p) · GW(p)

Based on the Hebrew original, a more accurate translation would be: "The beginning of knowledge is to acquire knowledge, and in all of your acquisitions acquire understanding", pointing to two important principles:

  1. First to gain the relevant body of knowledge, and only then to begin theorizing
  2. To focus our wealth and energy on knowledge
comment by Yaakov T (jazmt) · 2013-09-02T02:43:38.493Z · LW(p) · GW(p)

It seems like Proverbs has a lot of important content for gaining rationality; perhaps it should be added to our reading lists.

Replies from: gwern, AlexanderD
comment by gwern · 2013-09-02T02:59:56.563Z · LW(p) · GW(p)

The wisdom books of the Bible are pretty unusual compared to the rest of the Bible, because they're an intrusion of some of the best surviving wisdom literature. As such, they're my favorite parts of the Bible, and I've found them well worth reading (in small doses, a little bit at a time, so I'm not overwhelmed).

comment by AlexanderD · 2013-09-02T04:28:51.430Z · LW(p) · GW(p)

I highly recommend Robert Alter's translation in "The Wisdom Books," if you're interested in reading it.

Replies from: jazmt
comment by Yaakov T (jazmt) · 2013-09-02T15:19:43.019Z · LW(p) · GW(p)

Thanks, but I prefer reading in the original Hebrew to reading in translation.

Replies from: somervta
comment by somervta · 2013-09-04T10:22:23.937Z · LW(p) · GW(p)

Ah, excellent. I've always wanted to ask someone who reads Hebrew - is the writing in the Bible of lesser or greater quality in the original (compared to the English - I know translations vary, but is there a distinct difference, or is the Hebrew within the range?)

Replies from: jazmt
comment by Yaakov T (jazmt) · 2013-09-08T02:16:58.865Z · LW(p) · GW(p)

The original is superior in a number of ways (to any translation I have seen, but I suspect that it is superior to all translations, since much is of necessity lost in translation generally). But is there a specific aspect you are wondering about, so that I could address your question more particularly?

comment by Vaniver · 2013-09-02T15:00:48.238Z · LW(p) · GW(p)

Now, what I want is, Facts. Teach these boys and girls nothing but Facts. Facts alone are wanted in life. Plant nothing else, and root out everything else. You can only form the minds of reasoning animals upon Facts: nothing else will ever be of any service to them.

--Mr. Gradgrind, from Hard Times by Charles Dickens.

The character is portrayed as a villain, but this quote struck me as fair (if you take a less confused view of "Facts" than Gradgrind).

Replies from: AndHisHorse, ChristianKl, bentarm
comment by AndHisHorse · 2013-09-02T16:30:55.899Z · LW(p) · GW(p)

Facts alone are fairly useless without processes for using them to gather more. A piece of paper can have facts inscribed upon it more durably than the human mind can, yet we rely on the latter rather than the former to guide us through life because it is capable of using those facts, not merely possessing them.

comment by ChristianKl · 2013-09-09T22:52:50.965Z · LW(p) · GW(p)

I think teaching kids facts is mostly useless as most facts get forgotten.

Skills like self motivation and critical thinking seem much more important to me than facts.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-09-10T02:07:38.523Z · LW(p) · GW(p)

Hell, just "skills" seem like something besides "facts" that would be useful. Alas that modern education does not agree.

comment by bentarm · 2013-09-02T18:54:53.985Z · LW(p) · GW(p)

http://xkcd.com/863/

Replies from: Vaniver
comment by Vaniver · 2013-09-02T19:21:57.425Z · LW(p) · GW(p)

It looks to me like you're making the sophisticated point that some facts vary in usefulness. I agree.

The point being made by Gradgrind is much more basic: children should focus on Fact over Fancy. As an example, he refuses to teach his children fairy tales, deciding that they should learn science instead. (Unfortunately, Dickens presents science as dull collections in cabinets, and so the children are rather put out by this.)

Replies from: bentarm, None
comment by bentarm · 2013-09-05T16:22:51.403Z · LW(p) · GW(p)

The point being made by Gradgrind is much more basic: children should focus on Fact over Fancy.

ah, ok. I interpreted it as a preference for teaching Fact rather than Theory.

comment by [deleted] · 2013-09-02T19:33:59.700Z · LW(p) · GW(p)

"children should focus on Fact over Fancy"

The superiority of facts over fancy in [early] education is an empirical question though, right?

Replies from: DanArmak, Vaniver
comment by DanArmak · 2013-09-02T21:09:05.949Z · LW(p) · GW(p)

The superiority of facts over fancy in [early] education is an empirical question though, right?

It is in fact, but not in fancy.

Replies from: linkhyrule5
comment by linkhyrule5 · 2013-09-02T21:18:05.876Z · LW(p) · GW(p)

Witty, but completely unclear - I have no idea what your point is.

Replies from: DanArmak
comment by DanArmak · 2013-09-03T08:01:44.349Z · LW(p) · GW(p)

It's an empirical question if you deal in facts. But if you deal in fancies, everyone's got their own fancy and nobody's right or wrong, so there are no properly empirical questions.

comment by Vaniver · 2013-09-02T20:32:28.916Z · LW(p) · GW(p)

The superiority of facts over fancy in [early] education is an empirical question though, right?

Yep, though I'll point out that the quote isn't limited to what we refer to as 'early' education. I'm not an expert in education, so I won't pretend to know a solid answer to that empirical question, but anecdotal evidence from various famous, clever, and productive people suggests that a childhood focused on facts is beneficial.

Replies from: None
comment by [deleted] · 2013-09-02T21:08:56.331Z · LW(p) · GW(p)

I think we can assume that no one would suggest that an education omit facts entirely (hence, 'early'). I also agree that a fact-focused early education would be beneficial. The question raised by your quote is whether it would be beneficial to largely or entirely omit fancy. I do think that's a tough empirical question, though that's the kind of thing where empirical answers are not likely to be forthcoming.

Clearly, education in biology, mathematics, and the like should be factual. No one would argue with that. So what sort of thing are we talking about? What is the subject matter for which someone would even suggest fiction as a mode of education?

My guess is that we're talking about something like moral education. I can't think of any alternatives, anyway (other than education in the history of literature, but that suggestion would be question begging). Can we think of another way to provide a moral education that omits fiction?

Well we could certainly teach moral philosophy (though where that lies on the fact-fiction axis I don't know) rather than literature. There we have another empirical question, though my experience has been that moral philosophy doesn't go over very well with the very young. Tends to do more harm than good. Do you have a suggestion here?

One alternative (the alternative that Gradgrind had in mind, I think) is to omit moral education entirely. I take it Dickens' thought was that this is the sort of thing you wouldn't need if you were educating slaves in more sophisticated forms of labor, because their behavior is managed externally and they don't need to give any thought to how to live their own lives. That's my impression, anyway.

Replies from: ChristianKl, Vaniver
comment by ChristianKl · 2013-09-09T23:04:50.577Z · LW(p) · GW(p)

Clearly, education in biology, mathematics, and the like should be factual. No one would argue with that.

I don't think my mathematics education was 100% fact based. We did discuss various thought experiments. We also did puzzles that were designed to train thinking skills.

The ability to work with focus on a math proof for an hour is a lot more valuable than the axioms and theorems that I learned in the process.

Instead of trying to teach math facts, it makes a lot more sense to concentrate on creating situations of deliberate math practice.

In university we had math courses where we were allowed to bring as much paper into the exam as we wanted, because what they wanted to teach us wasn't written-down facts but our ability to work with them.

Replies from: None
comment by [deleted] · 2013-09-10T21:45:55.529Z · LW(p) · GW(p)

Math is a funny case, being very much a skill that needs training rather than a body of knowledge that needs learning. But it's not as if you were learning mathematics on the basis of 'fancy' or fiction either.

Replies from: ChristianKl
comment by ChristianKl · 2013-09-10T22:02:26.632Z · LW(p) · GW(p)

Most of the problems that were printed in our school math textbooks were fictional. They were made up by the author of the book to illustrate some mathematical principle.

Replies from: None
comment by [deleted] · 2013-09-11T17:04:42.519Z · LW(p) · GW(p)

I don't see how the sense in which those problems are fictional is relevant to the discussion. Tapping out.

Replies from: ChristianKl
comment by ChristianKl · 2013-09-11T17:56:57.960Z · LW(p) · GW(p)

The topic is about whether to teach through fiction or facts. Whether something is fictional seems relevant.

comment by Vaniver · 2013-09-02T21:56:19.304Z · LW(p) · GW(p)

Clearly, education in biology, mathematics, and the like should be factual. No one would argue with that.

Dickens actually mocks Gradgrind for this:

No little Gradgrind had ever associated a cow in a field with that famous cow with the crumpled horn who tossed the dog who worried the cat who killed the rat who ate the malt, or with that yet more famous cow who swallowed Tom Thumb: it had never heard of those celebrities, and had only been introduced to a cow as a graminivorous ruminating quadruped with several stomachs.

I would suspect another major point of contention is how much weight mathematics and biology should get relative to other subjects. (Now, Gradgrind does have the confusion, more obvious elsewhere, that classifications are important facts rather than fuzzy collections, and this is a confusion worth criticizing.)

One alternative (the alternative that Gradgrind had in mind, I think) is to omit moral education entirely. I take it Dickens' thought was that this is the sort of thing you wouldn't need if you were educating slaves in more sophisticated forms of labor, because their behavior is managed externally and they don't need to give any thought to how to live their own lives.

It's not clear to me what you mean by "moral education." Gradgrind puts a lot of effort into cultivating the "moral character" of his children (in fact, this seems to be the primary reason for his banishment of fancy). Very little effort is put into teaching them how to cultivate their own character, which is what I would take moral philosophy to mean (but even that may be too practical an interpretation of it!).

comment by MugaSofer · 2013-09-12T16:01:35.168Z · LW(p) · GW(p)

I intended to lure out someone on LW who'd defend his shit by saying how provocative and delightfully iconoclastic he is. Happened many times in LW-ish places/with LW-ish people, whenever I complained about the shit reactionaries spew. ...Instead, a good little liberal came along and offered the observation that Mr. Dickinson... might be mildly ethically challenged!

You might want to use an anonymous account next time. Still, I can get behind tricking people into endorsing evil so you can point out what they did wrong.

comment by fubarobfusco · 2013-09-10T15:35:31.084Z · LW(p) · GW(p)

All true, though I'll ADBOC a little since I think these are viewpoints of a loud but tiny minority.

I'm not sure I see the connection, though. That there are a few racial pseudoscience believers in the audience doesn't change genocide being wrong, just as there being a few homeopathy users in the audience doesn't change fraud being wrong.

Replies from: Multiheaded
comment by Multiheaded · 2013-09-10T15:56:37.599Z · LW(p) · GW(p)

That there are a few racial pseudoscience believers in the audience doesn't change genocide being wrong, just as there being a few homeopathy users in the audience doesn't change fraud being wrong.

Perhaps you haven't read much of those folks? (Not that I blame you, it can be stomach-turning.) They claim that they're the voice of Actual Science on human sociobiology. It is the accepted consensus of polite society today - that xenophobia is wrong and immoral and destructive, that non-"white" people aren't, as a group, cognitively inferior/inherently antisocial/undesirable - that they accuse of being ideologically corrupt pseudoscience.

They're very insistent on the fact that theirs is the True Enlightened Scientific racism, and that, therefore, there's nothing irrational or wrong with the stereotypes they deal in. Many - like, say, Mencius Moldbug - fancy placing themselves in opposition to the "vulgar" and "unreasoned" xenophobes, even as they espouse similar policy measures (barbed wire and apartheid 2.0).

P.S.: "in the audience"? In these circles at least, a few of them - like the aforementioned bloggers - are undoubtedly on the stage as well. Hell, Anissimov held the post of Media Director or something. (He claims his firing to be unrelated, and not damage control by SIAI.)

Replies from: Vaniver
comment by Vaniver · 2013-09-10T21:27:02.828Z · LW(p) · GW(p)

It is the accepted consensus of polite society today - that xenophobia is wrong and immoral and destructive, that non-"white" people aren't, as a group, cognitively inferior/inherently antisocial/undesirable - that they accuse of being ideologically corrupt pseudoscience.

One helpful tactic when discussing views you dislike is to try and be as precise as possible about those views. One unpleasant result of interaction between factual matters and social dynamics is intellectual hipsterism, where different tiers of engagement with an issue seem stacked so as to maximize the difference with the previous tier. But a tier above you and a tier below you are unlikely to be similar, even though both feel like The Enemy.

In this particular case, there are a couple of parts of your comment that come off as an "invisible dragon," where you know those two groups are different but want to pretend they aren't. Everyone agrees that racial purists like Ploetz aren't right that the Nordic race is the master race. Everyone includes Razib and Mike, except you're still calling them racial purists. In order to do so, you need to put scare quotes on "white" or put "insufficiently" in front of white.

Why that looks like an invisible dragon to me is you know that Razib and Mike don't particularly care about skin color. It's just cosmetic. What they care about is what's inside skulls, and every scientific racist will agree that the IQ progression goes roughly Jews > East Asians >= Europeans > Hispanics > Africans. (I'm using >= because there are some subtleties in the comparisons between Asians and Europeans, but there are several large groups who seem to do noticeably better than Europeans. Also, Nordics do score higher on IQ tests than southern Europeans- but the difference is tiny compared to the difference between Jews and Africans.)

Now, everyone knows that color is just color, and ascribing moral value to it does little. But the claim that smarts is "just smarts," and that it shouldn't have any impact on our decision-making, is contentious (and I would go so far as to call it silly). The claim that some people are "insufficiently white" doesn't fit with modern societies, but the claim that some people are "insufficiently smart" does, and so the association of "white" with "smart" looks like a rhetorical trick at best.

comment by fortyeridania · 2013-09-02T03:56:33.313Z · LW(p) · GW(p)

Most don't even know why they believe what they believe, man

Never taking a second to look at life

Bad water in our seeds, y'all, still growing weeds, dawg

-- CunninLynguists featuring Immortal Technique, Never Know Why, A Piece of Strange (2006)

comment by Rain · 2013-09-15T13:39:50.269Z · LW(p) · GW(p)

Replies from: wedrifid
comment by wedrifid · 2013-09-15T14:00:30.061Z · LW(p) · GW(p)

What is the intended lesson or rationality insight here?

comment by Gunnar_Zarncke · 2013-09-14T09:53:42.121Z · LW(p) · GW(p)

Researchers have long identified the crucial role that the primary caregiver plays in the infant’s development. Over the past eight years, neuroscience has begun to shed light on the neurological processes underpinning these phenomena. Neuroscientists are discovering that human brains are specialized for receiving and understanding stimulation from other people, and the kinds of early experiences that are necessary for the optimal functioning of these neural pathways.

EARLY YEARS STUDY 2, Hon. Margaret Norrie McCain et al., 2007

http://earlyyearsstudy.ca/media/uploads/more-files/early_years_study2-en.pdf

Replies from: Gunnar_Zarncke, Gunnar_Zarncke
comment by Gunnar_Zarncke · 2013-12-22T22:57:24.767Z · LW(p) · GW(p)

I'd really like to know the reason for all this downvoting.

comment by Gunnar_Zarncke · 2013-09-15T22:47:39.246Z · LW(p) · GW(p)

Judging from the voting, the relation is not obvious: the development of an AI may involve a caregiver - not only one that just defines the goals, but one who e.g. trains the AI's levels of neural networks. This presumes that neural networks play a crucial role in an AI, of course - but that may be the case, as human values are vague and complex. Symbolic reasoning may not (alone) be up to the task.

comment by [deleted] · 2013-09-12T16:04:26.039Z · LW(p) · GW(p)

They used to be afraid of us once, damnit! Even the most brazen scum among capitalists once had something to fear! The fear of revolution made them look for a compromise, it let social democracy happen, for a while...

The fear of revolution is still real. The pendulum may very well swing back the other way.

And now the party's over for my side. I hate how fucking open and one-sided the new class "warfare" is. To the extent there's no warfare, just the ruthless young new elite realizing what a jolly time it's in for. When even the lackeys of our good and great masters can get away with something like this - in Forbes, a supposedly elite publication! - what future do the common people have to look forward to?

The party's not even begun yet. Just hang in there a bit longer.

comment by Gunnar_Zarncke · 2013-09-10T12:11:54.924Z · LW(p) · GW(p)

From http://metamodern.com/2009/05/17/how-to-understand-everything-and-why/

To avoid blunders and absurdities, to recognize cross-disciplinary opportunities, and to make sense of new ideas, requires knowledge of at least the outlines of every field that might be relevant to the topics of interest. By knowing the outlines of a field, I mean knowing the answers, to some reasonable approximation, to questions like these:

What are the physical phenomena? What causes them? What are their magnitudes? When might they be important? How well are they understood? How well can they be modeled? What do they make possible? What do they forbid?

And even more fundamental than these are questions of knowledge about knowledge:

What is known today? What are the gaps in what I know? When would I need to know more to solve a problem? How could I find what I need?

Replies from: ChristianKl
comment by ChristianKl · 2013-09-10T18:11:27.153Z · LW(p) · GW(p)

I don't think that's true. I can learn that I feel better when I'm exposed to sunlight without knowing the ins and outs of vitamin D biochemistry.

What matters is to accurately measure whether I feel better and to measure when I'm exposed to sunlight.

comment by anandjeyahar · 2013-09-06T20:28:10.096Z · LW(p) · GW(p)

The biggest problem in the world is too many words. We should be able to communicate distribution graphs of past experiences directly from one human brain to another. ~Aang Jie

comment by Anatoly_Vorobey · 2013-09-04T23:03:05.761Z · LW(p) · GW(p)

Isherwood was evidently anxious to convince the youth that the relationship he desired was that of lovers and friends rather than hustler and client; he felt possessive and was jealous of Bubi's professional contacts with other men, and the next day set off to resume his attempt to transform the rent boy into the Ideal Friend. Coached by Auden, whose conversational German was a good deal better than his own at this stage, he delivered a carefully prepared speech; he had, however, overlooked the Great Phrase-book Fallacy, and was quite unable to understand Bubi's reply.

-- Norman Page, Auden and Isherwood: The Berlin Years

Replies from: RolfAndreassen
comment by RolfAndreassen · 2013-09-06T04:28:23.251Z · LW(p) · GW(p)

Not quite seeing this as a rationality quote. What's your reasoning?

Replies from: Anatoly_Vorobey
comment by Anatoly_Vorobey · 2013-09-06T13:56:52.111Z · LW(p) · GW(p)

"The Great Phrase-book Fallacy" is both amusing and instructive. I laughed when I read it because I remembered I'd been a victim of it too once, in less seedier circumstances.

comment by Moss_Piglet · 2013-09-04T16:38:15.178Z · LW(p) · GW(p)

For to translate man back into nature; to master the many vain and fanciful interpretations and secondary meanings which have been hitherto scribbled and daubed over that eternal basic text homo natura; to confront man henceforth with man in the way in which, hardened by the discipline of science, man today confronts the rest of nature, with dauntless Oedipus eyes and stopped-up Odysseus ears, deaf to the siren songs of old metaphysical bird-catchers who have all too long been piping to him 'you are more! you are higher! you are of a different origin!' - that may be a strange and extravagant task but it is a task - who would deny that? Why did we choose it, this extravagant task? Or, to ask the question differently; 'why knowledge at all?' - Everyone will ask us about that. And we, thus pressed, we who have asked ourselves that same question a hundred times, we have found and can find no better answer...

Friedrich Nietzsche

Replies from: Estarlio
comment by Estarlio · 2013-09-10T13:13:41.253Z · LW(p) · GW(p)

Because it's really really useful?

Replies from: Moss_Piglet
comment by Moss_Piglet · 2013-09-10T14:37:14.234Z · LW(p) · GW(p)

Jeez, people really don't appreciate poetic language around here, huh?

(That would probably be close to my answer too, I'm just a little stunned by all the downvotes.)

comment by Darklight · 2013-09-02T19:46:50.213Z · LW(p) · GW(p)

The most important human endeavor is the striving for morality in our actions. Our inner balance and even our existence depend on it. Only morality in our actions can give beauty and dignity to life.

-- Albert Einstein

Replies from: Eliezer_Yudkowsky, Grant
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-09-03T20:02:40.358Z · LW(p) · GW(p)

Checking Google failed to yield an original source cited for this quote.

Replies from: Darklight
comment by Darklight · 2013-09-03T20:32:41.962Z · LW(p) · GW(p)

I got it from the biography, "Einstein: His Life and Universe" by Walter Isaacson, page 393.

The Notes for "Chapter Seventeen: Einstein's God" on page 618 state that the quote comes from:

Einstein to the Rev. Cornelius Greenway, Nov. 20, 1950, AEA 28-894.

Replies from: ESRogs
comment by ESRogs · 2013-09-03T22:12:37.627Z · LW(p) · GW(p)

Great book, by the way.

comment by Grant · 2013-09-02T21:50:50.496Z · LW(p) · GW(p)

These (nebulous) assertions seem unlikely on many levels. Psychopaths have few morals but continue to exist. I have no idea what "inner balance" even is.

He may be asserting that morals are necessary for the existence of humanity as a whole, in which case I'd point to many animals with few morals who continue to exist just fine.

Replies from: Darklight
comment by Darklight · 2013-09-02T22:05:59.523Z · LW(p) · GW(p)

I know of no animals other than humans who have nuclear weapons and the capacity to completely wipe themselves out on a whim.

Replies from: Grant, DanArmak
comment by Grant · 2013-09-02T22:16:08.632Z · LW(p) · GW(p)

True, but it's not clear morals have saved us from this. Many of our morals emphasize loyalty to our own groups (e.g. the USA) over our out-groups (e.g. the USSR), with less than ideal results. I think if I replaced "morality" with "benevolence" I'd find the quote more correct. I likely read it too literally.

Though the rest of it still doesn't make any sense to me.

comment by DanArmak · 2013-09-03T08:00:47.180Z · LW(p) · GW(p)

The existence of nuclear weapons should be taken as evidence that humans are not very moral. (And yet survive so far.)

Replies from: Darklight, Lumifer, Grant
comment by Darklight · 2013-09-03T15:48:00.211Z · LW(p) · GW(p)

Einstein is not saying that humans are necessarily moral, but rather that they ought to be moral.

Furthermore, it is arguable that nuclear weapons are not necessarily immoral in and of themselves. Like any tool or weapon, they can be used for moral and immoral ends. For instance, nuclear weapons may well be one of the most effective means of destroying Earth-directed masses such as asteroids that pose an existential risk. They may also be extremely effective for deterring conventional warfare between major powers.

The only previous actual use of nuclear weapons against human targets was for the purpose of ending a world war, and it did so rather successfully. That we have chosen not to use nuclear weapons irresponsibly may well suggest that those with the power to wield nuclear weapons have in fact been more morally responsible than we give them credit for.

Replies from: soreff
comment by soreff · 2013-09-07T21:35:06.945Z · LW(p) · GW(p)

suggest that those with the power to wield nuclear weapons have in fact been more morally responsible than we give them credit for.

Perhaps. Alternatively, when faced with a similarly-armed opponent, even our habitually bloody rulers can be deterred by the prospect of being personally burned to death with nuclear fire.

Replies from: Estarlio, MugaSofer
comment by Estarlio · 2013-09-10T13:20:19.392Z · LW(p) · GW(p)

I've always wondered why, on discovering nuclear weapons, the leaders of America didn't continually pour a huge budget into it - stockpile a sufficient number of them and then destroy all their potential peers.

I can't think of any explanation bar the morality in their culture. They could certainly have secured sufficient material for the task.

comment by MugaSofer · 2013-09-07T22:11:45.484Z · LW(p) · GW(p)

More like our supposedly bloody soldiers, at least in some of the more alarming close calls.

I was about to say your point stands, but actually, wouldn't at least some of them have been in bunkers? I'll have to check that, now...

comment by Lumifer · 2013-09-03T16:57:55.666Z · LW(p) · GW(p)

The existence of nuclear weapons should be taken as evidence that humans are not very moral.

Huh? Can you unpack this for me, I don't see how it can make sense.

Replies from: JoachimSchipper
comment by JoachimSchipper · 2013-09-03T19:09:21.428Z · LW(p) · GW(p)

Start from "The very existence of flame-throwers proves that some time, somewhere, someone said to themselves, You know, I want to set those people over there on fire, but I'm just not close enough to get the job done", I guess.

Replies from: Lumifer
comment by Lumifer · 2013-09-03T19:33:37.717Z · LW(p) · GW(p)

Doesn't help me much. The purpose of weapons -- all weapons -- is to kill. What exactly is the moral difference between a nuclear bomb and a conventional bomb?

Replies from: Salemicus, polymathwannabe
comment by Salemicus · 2013-09-03T19:40:59.205Z · LW(p) · GW(p)

The purpose of weapons -- all weapons -- is to kill.

Not true. The purpose of some weapons is to incapacitate or subdue. For example, stun guns, tear gas, truncheons, flashbangs, etc.

comment by polymathwannabe · 2013-09-03T20:07:04.557Z · LW(p) · GW(p)

More exactly, the purpose of a weapon is to use pain to change behavior--which matches a general definition of "punishment." Sometimes the mere threat of pain suffices to change behavior. In cases of mutual deterrence (or less drastic, like everyday border patrols) that's the point: to make you behave differently from what you would otherwise, by appealing merely to your expectation of pain.

Replies from: Lumifer
comment by Lumifer · 2013-09-03T20:22:47.234Z · LW(p) · GW(p)

More exactly, the purpose of a weapon is to use pain to change behavior

No, I don't think so. But to avoid the distraction of trying to define "weapons", let me assert that we are talking about military weapons -- instruments devised and used with the express purpose of killing other humans. The issue is whether nuclear weapons have any special moral status, so we're not really concerned with tear gas and tasers.

Why are nuclear weapons morally different from conventional bombs or machine guns or cannons?

Replies from: DanArmak, polymathwannabe
comment by DanArmak · 2013-09-03T21:11:06.566Z · LW(p) · GW(p)

Why are nuclear weapons morally different from conventional bombs or machine guns or cannons?

Strategic nuclear weapons - the original and most widespread nuclear weapons - cannot be used with restraint. They have a huge blast radius and they kill everyone in it indiscriminately.

The one time they were used demonstrated this well. They are the most effective and efficient way, not merely to defeat an enemy army (which has bunkers, widely dispersed units, and retaliation capabilities), but to kill the entire civilian population of an enemy city.

To kill all the inhabitants of an enemy city, usually by one or another type of bombardment, was a goal pursued by all sides in both world wars. Nuclear weapons made it much easier, cheaper, and harder to defend against.

Tactical nuclear weapons are probably different; they haven't seen (much? any?) use in real wars to be certain.

Replies from: Estarlio, Lumifer
comment by Estarlio · 2013-09-10T13:25:13.946Z · LW(p) · GW(p)

Strategic nuclear weapons - the original and most widespread nuclear weapons - cannot be used with restraint.

They can. One of the problems that America had, going into the 80s, was that its ICBM force was becoming vulnerable to a potential surprise attack by the CCCP. This concerned them because only the ICBM force, at the time, had the sort of accuracy necessary for taking out hardened targets in a limited strike - like their opponent's strategic forces. And they were understandably reluctant to rely on systems that could only be used for city busting - i.e. the submarine force.

If you're interested in this, I suggest the - contemporary with that problem - documentary First Strike.

Replies from: TobyBartels
comment by TobyBartels · 2013-09-14T08:07:59.127Z · LW(p) · GW(p)

CCCP

You mean СССР?

comment by Lumifer · 2013-09-03T21:26:17.876Z · LW(p) · GW(p)

Strategic nuclear weapons - the original and most widespread nuclear weapons - cannot be used with restraint. They have a huge blast radius and they kill everyone in it indiscriminately.

What do you mean by "restraint"?

For example, the nuclear bombing of Nagasaki killed around 70,000 people. The fire-bombing of Tokyo in March of 1945 (a single bombing raid) killed about 100,000 people.

Was the bombing of Nagasaki morally worse?

Replies from: DanArmak, nshepperd, Grant
comment by DanArmak · 2013-09-04T08:19:39.142Z · LW(p) · GW(p)

In addition to nshepperd's point, there's the fact that sending a bomber fleet to destroy an enemy city is very expensive - the costs of planes, fuel, and bombs add up quickly, not to mention pilots' lives. And if the defenders can destroy 50% of the planes before they drop their bombs, the bombing campaign becomes 50% less effective.

Whereas a strategic nuclear warhead only requires one plane to deliver it (or one ICBM). Much cheaper, much less risky, and much more stealthy. If you build a small fleet of nuclear bombers (or, again, a small stable of ICBMs), you can theoretically destroy all of the enemy's cities in one night.

If the atom bomb had by chance been developed a few years earlier, when the US still faced serious opposition in Europe, then quite probably they would have used it to wipe out all the major German and German-held cities.

Replies from: TobyBartels
comment by TobyBartels · 2013-09-14T08:04:50.796Z · LW(p) · GW(p)

quite probably they would have used it to wipe out all the major German and German-held cities

German-held? Like Paris?

Replies from: DanArmak
comment by DanArmak · 2013-09-14T08:29:11.464Z · LW(p) · GW(p)

Not Paris, of course, which had a lot of diplomatic and sentimental value and little industrial value. I meant cities of high industrial value in occupied Czechoslovakia, Austria, Poland, and other countries the West didn't care as much about.

comment by nshepperd · 2013-09-04T01:44:22.298Z · LW(p) · GW(p)

It's one thing to create a weapon that can be used to kill O(100,000) people at once (though, it's not really "at once" if you do it by dropping N bombs consecutively). It's another thing to create a weapon that can only be used to kill O(100,000) people at once.

Or something. Of course, if inventing nukes is evidence humans aren't very moral, the fact that people chose to kill a hundred thousand people in Tokyo with conventional weapons is a different kind of evidence for humans being not very moral.

Replies from: mcallisterjp, Lumifer
comment by mcallisterjp · 2013-09-04T11:29:39.347Z · LW(p) · GW(p)

That's not how Big O notation works: O(100,000) = O(1).

You presumably mean "in the order of 100,000", which is sometimes written "~100,000".
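A minimal formal statement of the point behind that correction (this is the standard definition of Big O, nothing specific to this thread):

```latex
% f is O(g) iff f is eventually bounded by a constant multiple of g:
f \in O(g) \iff \exists C > 0 \ \exists n_0 \ \forall n \ge n_0 : \ |f(n)| \le C \cdot |g(n)|
% With f(n) = 100000 and g(n) = 1, the constant C = 100000 witnesses the bound
% for all n, so O(100000) = O(1): Big O absorbs constant factors entirely.
```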

comment by Lumifer · 2013-09-04T02:01:25.914Z · LW(p) · GW(p)

It's another thing to create a weapon that can only be used to kill O(100,000) people at once.

Clearly a nuke is not that.

evidence for humans being not very moral

Given that both humans and moralities are quite diverse, I don't see any information content in the phrase "humans are not very moral". It's just trivially true and pretty meaningless.

Replies from: DanArmak
comment by DanArmak · 2013-09-04T08:22:24.021Z · LW(p) · GW(p)

Given that both humans and moralities are quite diverse, I don't see any information content in the phrase "humans are not very moral". It's just trivially true and pretty meaningless.

I agree, and besides I'm not a moral realist. I was originally responding to people in this thread who discussed whether humans could be described as moral.

comment by Grant · 2013-09-03T23:24:17.643Z · LW(p) · GW(p)

If the bombing of Nagasaki contributed more to the end of the war than the bombing of Tokyo, then we could easily say it was morally superior. That is not to say there weren't better options of course.

Replies from: TobyBartels, Randaly
comment by TobyBartels · 2013-09-14T08:20:46.344Z · LW(p) · GW(p)

We can debate endlessly the wisdom of bombing Hiroshima, but does anybody have a defence for bombing Nagasaki? Since this is the quotation thread, I'll quote Dave Barry:

It was Truman who made the difficult decision to drop the first atomic bomb on the Japanese city of Hiroshima, the rationale being that only such a devastating, horrendous display of destructive power would convince Japan that it had to surrender. Truman also made the decision to drop the second atomic bomb on Nagasaki, the rationale being, hey, we had another bomb.

I'm seriously curious. (Reasonably rational arguments, of course.)

Replies from: Vaniver, Eugine_Nier
comment by Vaniver · 2013-09-15T20:57:00.150Z · LW(p) · GW(p)

Recommend reading the actual history, rather than comedians.

Replies from: TobyBartels
comment by TobyBartels · 2013-09-17T00:03:07.277Z · LW(p) · GW(p)

I read that, amongst other WP articles, while researching my comment. That one doesn't even attempt to explain the reasons for dropping the second bomb. (The quotation from the comedian is not meant to be an argument either.)

Replies from: Vaniver
comment by Vaniver · 2013-09-17T00:33:21.468Z · LW(p) · GW(p)

This section seems relevant:

At first, some refused to believe the United States had built an atomic bomb. The Japanese Army and Navy had their own independent atomic-bomb programs and therefore the Japanese understood enough to know how very difficult building it would be.[74] Admiral Soemu Toyoda, the Chief of the Naval General Staff, argued that even if the United States had made one, they could not have many more.[75] American strategists, having anticipated a reaction like Toyoda's, planned to drop a second bomb shortly after the first, to convince the Japanese that the U.S. had a large supply.[59][76]

Emphasis mine.

Replies from: TobyBartels
comment by TobyBartels · 2013-09-17T03:03:09.195Z · LW(p) · GW(p)

OK, thanks. I must have missed that passage anticipating the immediately following section.

Looking over my posts, I see that I may have given the impression that I doubted that there was any rational argument in favour of dropping the second bomb. I only meant to say that I didn't know one, because the discussion (here and elsewhere) always seems to focus on the first one.

comment by Eugine_Nier · 2013-09-15T18:20:58.072Z · LW(p) · GW(p)

Well, the Japanese just barely surrendered even after Nagasaki.

Replies from: gwern
comment by gwern · 2013-09-15T21:12:31.547Z · LW(p) · GW(p)

It would be more accurate to say 'barely surrendered even after the simultaneous bombing of Nagasaki and their most feared enemy Soviet Russia declaring war on them'.

comment by Randaly · 2013-09-03T23:36:29.205Z · LW(p) · GW(p)

Many (most?) historians believe that the Soviet entry into the war induced the Japanese surrender. Some historians believe that American decision makers expected Japan to surrender soon and wanted to use atomic bombs before the end of the war, to demonstrate their power to the Soviets. Gaddis Smith:

It has been demonstrated that the decision to bomb Japan was centrally connected to Truman's confrontational approach to the Soviet Union.

A very small number of historians believe that the atomic bomb on net cost American lives. Martin Sherwin:

Many more American soldiers... might have had the opportunity to grow old if Truman had accepted Grew's advice. [that all that would be needed to induce Japanese surrender starting in May was explaining that US occupying forces would follow the rules of war, not exploit the Japanese, etc, and that delaying to use the atomic bomb was wasteful]

Replies from: Lumifer
comment by Lumifer · 2013-09-04T01:18:30.037Z · LW(p) · GW(p)

Some historians believe that American decision makers expected Japan to surrender soon and wanted to use atomic bombs before the end of the war, to demonstrate their power to the Soviets.

I favor this hypothesis; it seems to me the demonstration of the power of atomic bombs was as much for Stalin's benefit as it was for the Japanese leadership's. One can make a reasonable case that Hiroshima and Nagasaki were the real reason why the battle-hardened Soviet army stopped in Germany and didn't just roll over the rest of Western Europe.

Replies from: Grant
comment by Grant · 2013-09-04T01:56:42.029Z · LW(p) · GW(p)

That is no reason to drop the bomb on a city though; there are plenty of non-living targets that can be blown up to demonstrate destructive power. I suppose doing so wouldn't signal the will to use the atomic bomb, but in a time when hundreds of thousands died in air raids I would think such a thing would be assumed.

I suppose this highlights the fundamental problem of the era: the assumption that targeting civilians with bombs was the best course of action.

Replies from: Lumifer
comment by Lumifer · 2013-09-04T02:05:31.421Z · LW(p) · GW(p)

If you drop a nuke on a Japanese city you kill three birds with one stone: you get to test how it works for intended use (remember, it was the first real test so uncertainty was high); you get to intimidate Japan into surrender; and you get to hint to Stalin that he should behave himself or else.

Replies from: Grant
comment by Grant · 2013-09-04T02:08:12.716Z · LW(p) · GW(p)

True. Some sources indicate that some Japanese cities were left intact precisely so the American military could test the effects of a nuke!

comment by polymathwannabe · 2013-09-03T21:25:37.533Z · LW(p) · GW(p)

What I think places the atom bomb in its own category is that its potential for destruction is completely out of proportion with whatever tactical reason you may have for using it. Here we're dealing with destruction on a civilizational level. This is the first time in human history when the end of the world may come from our own hands. Nothing in our evolutionary past could have equipped us to deal with such a magnitude of danger.

In the Middle Ages, the Pope was shocked at the implications of archery--you could kill from a distance, almost as effectively as with a sword, but without exposing yourself too much. He thought it was a dishonorable way of killing. By the time cannons were invented, everyone was more or less used to seeing archers in battle, but this time it was the capacity for devastation brought by cannons that was beyond anything previously experienced. Ditto for every increasing level of destructive power: machine guns, bomber airplanes, all the way up to the atom bomb.

But the atom bomb is a gamechanger. No amount of animosity or vengefulness or spite can possibly justify vaporizing millions of human lives in an instant. Even if your target were a military citadel, the destruction would inevitably reach countless innocents that the post-WW2 international war protocols were designed to protect. Dropping the atom bomb is the Muggle equivalent of Avada Kedavra--there is no excuse that you can claim in your defense.

Replies from: Nornagest, Lumifer
comment by Nornagest · 2013-09-04T01:45:30.589Z · LW(p) · GW(p)

In the Middle Ages, the Pope was shocked at the implications of archery--you could kill from a distance, almost as effectively as with a sword, but without exposing yourself too much. He thought it was a dishonorable way of killing. By the time cannons were invented, everyone was more or less used to seeing archers in battle...

Er, archery's been around since at least the Mesolithic and has been used to kill people for almost as long, if skeletal evidence is anything to go by. That's actually older than the sword, which originated as a Bronze Age weapon.

Canon 29 of the Second Lateran Council under Pope Innocent II is often cited as banning the use of projectile weapons against Christians, but as the notes through the link imply it's not clear that a military prohibition was intended in context. In any case, deadly novelty is unlikely as a motivation; crossbows had been known in Europe since Classical Greece, bows and slings far longer. And their military use, of course, continued even after the council.

comment by Lumifer · 2013-09-04T00:55:38.402Z · LW(p) · GW(p)

But the atom bomb is a gamechanger

Why? Your arguments boil down to "it's very destructive". Note that during WW2 at least two air raids using conventional bombs killed more people than atomic weapons (Tokyo and Dresden).

there is no excuse that you can claim in your defense.

Why not? It's just like saying there's no excuse for killing. That's not correct, there are lots of justifications for killing. Again, I don't see what makes nukes special.

Replies from: Nornagest
comment by Nornagest · 2013-09-04T02:18:08.819Z · LW(p) · GW(p)

From a strategic perspective the initial significance of the atomic bomb was to skew air warfare even further toward the attacking side. As early as the Thirties, strategic bombing had been understood to favor attackers -- the phrase at the time was "the bomber will always get through" -- but the likes of Tokyo and Dresden required massive effort, hundreds of bombers flying near-concurrent sorties. After the invention of the atomic bomb, that was no longer true -- bomber groups that earlier would have been considered trivial now could destroy cities. Suddenly there was no acceptable penetration of air defenses.

Still, defensive efforts continued. Surface-to-air missiles were a great improvement over anti-aircraft gunnery, and nuclear-armed missiles like the AIM-26 were intended to provide high kill probabilities in a defensive role or even take out entire formations at a shot. The development of ICBMs in the late Fifties and early Sixties may have led to more extensive changes in strategy; these could not be effectively stopped by air defenses (though anti-ballistic missile programs continued until the START treaties killed them), leaving mutually assured destruction as the main defensive option.

Replies from: Lumifer
comment by Lumifer · 2013-09-04T02:28:26.868Z · LW(p) · GW(p)

the phrase at the time was "the bomber will always get through"

That was before the radar, though.

(though anti-ballistic missile programs continued until the START treaties killed them)

ABM programs are alive and well at the moment. The US withdrew from the ABM treaty with Russia in 2002.

Replies from: Nornagest
comment by Nornagest · 2013-09-04T02:35:02.386Z · LW(p) · GW(p)

That was before the radar, though.

The radar changed tactics and contributed to some successful defenses, but I don't think it had much long-term effect on the overall strategic balance. We can use the strategic bombing of Germany during WWII for comparison: before the Axis possessed radar, bombers had been distributed as widely as possible so that few could be predictably intercepted. After, bombers were concentrated into a stream to overwhelm local air defenses. This proved effective, although Allied air superiority had largely been established by that time. The development of long-range radar-guided air-to-air or surface-to-air missiles, or for that matter better fire control radars, would have changed things back in the defenders' favor, but by that point nuclear weapons had already made their mark.

ABM programs are alive and well at the moment.

Quite, but I didn't want to clutter an already long comment with post-Cold War development.

comment by Grant · 2013-09-03T23:21:22.810Z · LW(p) · GW(p)

Consider what "the cold war" might have been like if we hadn't of had nuclear weapons. It probably would have been less cold. Come to think of it, cold wars are the best kind of wars. We could use more of them.

Yes, nukes have done terrible things, could have done far worse, and still might. However, since their invention conventional weapons have still killed far, far more people. We've seen plenty of chances for countries to use nukes where they've not, so I think it's safe to say the existence of nukes isn't on average more dangerous than the existence of other weapons. The danger in them seems to come from the existential risk, which is not present when using conventional weapons.

Replies from: DanArmak, RolfAndreassen
comment by DanArmak · 2013-09-04T08:24:22.170Z · LW(p) · GW(p)

Consider what the last big "hot war" would have been like if the atom bomb had been developed even a couple of years earlier, or by another side.

Replies from: Lumifer
comment by Lumifer · 2013-09-04T15:05:16.700Z · LW(p) · GW(p)

The war would have been over faster, with a possibly lower total number of casualties?

Replies from: DanArmak
comment by DanArmak · 2013-09-04T18:52:22.655Z · LW(p) · GW(p)

The war might have been over faster, but I think with a much higher number of casualties.

Replies from: Lumifer
comment by Lumifer · 2013-09-04T19:15:00.566Z · LW(p) · GW(p)

That's not obvious to me. Consider empirical data: the casualties from conventional bombing raids. And more empirical data: the US did not drop a nuke on Tokyo. Neither did it drop a nuke on Kyoto or Osaka. The use of atomic bombs was not designed for maximum destruction/casualties.

Replies from: DanArmak
comment by DanArmak · 2013-09-04T19:39:36.730Z · LW(p) · GW(p)

The actual use of the atom bomb against Japan was against an already defeated enemy. The US had nothing to fear from Japan at that point, and so they didn't need to strike with maximum power.

On the other hand, imagine a scenario where use of the Bomb isn't guaranteed to end the war at one stroke, and you have to worry about an enemy plausibly building their own Bomb before being defeated. What would Stalin, or Hitler, or Churchill, do with an atom bomb in 1942? The same thing they tried to do with ordinary bombs, scaled up: build up an arsenal of at least a few dozen (time permitting), then try to drop one on every major enemy city, all within a few days of one another.

WW2 casualties were bad enough, but they never approached the range of "kill 50% of the population in each of the 50 biggest enemy cities, in a week's bombing campaign, conditional only on getting a single bomber with a single bomb to the target".

Replies from: ChristianKl
comment by ChristianKl · 2013-09-05T13:05:28.757Z · LW(p) · GW(p)

Given that neither Hitler nor Churchill chose to use the chemical weapons that they had on the enemy, I don't see the argument for why they would have used atom bombs the same way as conventional bombs.

Replies from: DanArmak
comment by DanArmak · 2013-09-05T13:41:32.277Z · LW(p) · GW(p)

I don't know the details of why they didn't use chemical weapons, and what they might have accomplished if they had. But I'm not sure what your argument is here. Do you think that they thought they could achieve major military objectives with chemical weapons, but refrained because of the Geneva Protocols, or because of fear of retaliation in kind?

Replies from: ChristianKl
comment by ChristianKl · 2013-09-05T20:59:41.927Z · LW(p) · GW(p)

The point is that the war in Europe wasn't waged with a goal of creating a maximum number of casualties.

Replies from: DanArmak
comment by DanArmak · 2013-09-05T21:47:46.314Z · LW(p) · GW(p)

Many bombing campaigns were indeed waged with an explicit goal of maximum civilian casualties, in order to terrorize the enemy into submission, or to cause the collapse of order among enemy civilians. This includes the German Blitz of London and the V-1 and V-2 campaigns, most of the British Bomber Command war effort, US bombing attacks against German cities such as Hamburg and Dresden, Japanese bombing of Nanjing and Canton, and US fire-bombing of Japanese cities including Tokyo. That's not taking into account the Eastern Front, which saw the majority of the fighting.

Wikipedia has a lot of details (excepting the Eastern Front) given and linked here.

If any of the combatants had had the atom bomb, possibly including the US when they were not yet confident of being close to victory, they would surely have used it. After all, dead is dead, and it's better to build and field only one plane and one (expensive) bomb per city, not a fleet of thousands. Given the power of even a single bomb, they would surely have gone on to bomb other cities, stopping only when the enemy surrendered.

Replies from: ChristianKl
comment by ChristianKl · 2013-09-05T22:24:18.530Z · LW(p) · GW(p)

Many bombing campaigns were indeed waged with an explicit goal of maximum civilian casualties, in order to terrorize the enemy into submission, or to cause the collapse of order among enemy civilians.

If Germany had wanted to maximize casualties, they would have bombed London with chemical weapons. They decided against doing so.

They wanted to destroy military industry and reduce civilian morale. They didn't want to kill as many civilians as possible but to demoralize them.

Replies from: DanArmak, Estarlio
comment by DanArmak · 2013-09-06T00:27:39.266Z · LW(p) · GW(p)

Estarlio seems to be correct: they didn't use chemical weapons because they feared retaliation in kind. Quoting Wikipedia:

During the war, Germany stockpiled tabun, sarin, and soman but refrained from their use on the battlefield. In total, Germany produced about 78,000 tons of chemical weapons.[2] By 1945 the nation produced about 12,000 tons of tabun and 1,000 pounds (450 kg) of sarin.[2] Delivery systems for the nerve agents included 105 mm and 150 mm artillery shells, a 250 kg bomb and a 150 mm rocket.[2] Even when the Soviets neared Berlin, Adolf Hitler was persuaded not to use tabun as the final trump card. The use of tabun was opposed by Hitler's Minister of Armaments, Albert Speer, who, in 1943, brought IG Farben's nerve agent expert Otto Ambros to report to Hitler. He informed Hitler that the Allies had stopped publication of research into organophosphates (a type of organic compound that encompasses nerve agents) at the beginning of the war, that the essential nature of nerve gases had been published as early as the turn of the century, and that he believed that the Allies could not have failed to produce agents like tabun. This was not in fact the case (Allied research into organophosphates had been kept secret to protect DDT), but Hitler accepted Ambros's deduction, and Germany's tabun arsenal remained unused.

However, one doesn't fear retaliation in kind if one can win with a first strike. Chemical weapons used as bombs would not be that much more effective than firebombing. Atom bombs are far more effective and also easier to deliver and possibly cheaper per city destroyed. Since Hitler (as well as the other sides) accepted the premise that sufficient bombing of enemy civilian populations would cause the enemy to seek terms, if they had had atom bombs and thought their enemies didn't yet have them, they would likely have used them.

comment by Estarlio · 2013-09-05T22:51:39.125Z · LW(p) · GW(p)

IIRC they decided not to use chemical weapons because they were under the impression that the Allies had developed comparable capabilities.

Replies from: TobyBartels
comment by TobyBartels · 2013-09-14T08:29:20.776Z · LW(p) · GW(p)

Ah, so no chemical weapons because MAD, but atomic weapons (by the first to get them) would be different.

comment by RolfAndreassen · 2013-09-04T04:16:40.961Z · LW(p) · GW(p)

Indeed, I'm pretty sure that if not for nuclear weapons, some right-thinking Russian would have declared war over the phrase "hadn't of had". And very rightly so. The slaughter inflicted by mere armies of millions, with a few tens of thousands of tanks, would have been a small price to pay to rid the world of abominations like that one.

comment by anandjeyahar · 2013-09-20T11:48:23.721Z · LW(p) · GW(p)

Every new "true" thing I learn seems to shrink the domain where I can hold useful moral opinions. There is no point having a moral opinion about the law of gravity.So truth is also about increasing moral minimalism.As you learn more, you should have less need for moral opinions. -- Venkatesh G Rao(Be Slightly Evil)

Replies from: TheOtherDave, ChristianKl
comment by TheOtherDave · 2013-09-20T13:25:15.659Z · LW(p) · GW(p)

I find myself unsure what the author thinks moral opinions are for.
Or possibly about what they think truth is for.

Is gravity a good thing or a bad thing, or does it depend, and if so on what? Should there be more gravity, or less, or different types of gravity, or does it depend, and if so on what? These questions matter only to the extent that I can influence gravity. There's no point to having a moral opinion about gravity unless I can influence gravity.

And learning truths about gravity increases the likelihood that I will be able to influence gravity.

As I learn more, I have more need for moral opinions... or, at least, my moral opinions have more effect on the world.

Replies from: anandjeyahar
comment by anandjeyahar · 2013-09-20T19:23:47.614Z · LW(p) · GW(p)

Perhaps I failed to convey the context, or at least the implicit inferences I made from the parts before it. The basic idea, as I inferred it, is that moral opinions are useful educated heuristics for dire, time-crunched situations, made beforehand to save the time required for decision-making. I can think of a situation in HPMOR where Harry wants to tell Dumbledore that there are things that are exactly worth thinking about beforehand (even though they are horrible), because there wouldn't be time to think when they happen. Dumbledore prefers and/or recommends the opposite: don't think, but act on whatever your natural instincts say. And from that context, I can see the point he makes. Besides, I would be surprised if there were no theists who used to attribute things falling down to God but started to at least make more complicated explanations about gravity after Newton's discovery.

I must say that I am tempted to agree with the last part, though: the more I learn, the more effect my moral opinions have on the world. If anything, I would say it is immoral to fail to form moral opinions the more truth you learn about the world.

comment by ChristianKl · 2013-09-20T16:05:24.631Z · LW(p) · GW(p)

I don't think learning the law of gravity tells you anything about the usefulness of having a moral opinion about things falling down.

It would surprise me if Newton changed much about people having moral opinions about things falling down. The example seems to be poorly chosen as an example of learning new things that change one's moral beliefs.

Replies from: anandjeyahar
comment by anandjeyahar · 2013-09-20T19:13:36.280Z · LW(p) · GW(p)

I would be surprised if there were no theists who used to attribute things falling down to God but started to at least make more complicated explanations about gravity after Newton's discovery.

Replies from: Wes_W, ChristianKl
comment by Wes_W · 2013-09-20T21:30:45.388Z · LW(p) · GW(p)

Newton himself was a theist who attributed things falling down to God. Although he claimed "hypotheses non fingo" ("I make no hypotheses", or possibly "I feign no hypotheses") for why gravity actually works, he seemed unafraid of implying that it was in some way a function of the Holy Spirit. Still, I'm unaware of anyone attaching moral significance to gravity, whether before Newton or after.

Well, except Yvain, but that implication runs the other way!

comment by ChristianKl · 2013-09-20T19:39:03.645Z · LW(p) · GW(p)

Newton was no atheist. Newton's theory of gravity attributes the fact that things fall down to God. Newton wanted to explore God's nature.

God fits into it as well as he did in Aristotelian physics, in which educated people believed before Newton came along.

Newton did get into trouble for suggesting that the heavens follow the same laws as things on earth, but he didn't go against the idea of God.

comment by Thomas · 2013-09-02T15:10:07.409Z · LW(p) · GW(p)

[because of all those bad critics] I cried all the way to the bank!

  • Liberace
Replies from: Salemicus
comment by Salemicus · 2013-09-02T15:28:45.687Z · LW(p) · GW(p)

That's not a rationality quote at all, more a Dark Arts quote.

This quote isn't about "all those bad critics," it is about a specific critic (William Connor) who wrote an article in the Daily Mirror which implied that Liberace was gay. This was of course true, but Liberace denied it. He claimed his reputation was being damaged by this claim, and sued (successfully) for libel, winning quite a lot of money for 1956. Incidentally, the case set quite an important precedent in English libel law.

So the meaning of this quote, in context, is "I lied and feigned injury, and used the legal system to suppress the truth, and made a lot of money out of it." That's not rationality, that's Dark Arts.

Replies from: Thomas
comment by Thomas · 2013-09-02T15:50:59.520Z · LW(p) · GW(p)

It is not what Wikipedia said:

He used a similar response to subsequent poor reviews, famously modifying it to "I cried all the way to the bank."

OTOH, and not very much related: a gay man in his time in England could be jailed for homosexuality. No wonder he was forced to sue those who pointed at him (rightfully or not). He had to protect himself.

comment by linkhyrule5 · 2013-09-07T07:54:17.378Z · LW(p) · GW(p)

So when you look at it through the lens of memetic evolution, Christianity is essentially a sabre-toothed lolcat.

Anonymous

comment by anandjeyahar · 2013-09-06T20:17:42.096Z · LW(p) · GW(p)

Trust is a complex variable, because it has both real and imaginary parts. ~ Nabin Hait

Replies from: anandjeyahar
comment by anandjeyahar · 2013-09-08T18:59:52.384Z · LW(p) · GW(p)

I am confused about the downvotes. Do they mean that you think: (a) this is a trivial quote (not having an interesting/thought-provoking insight)? (b) it's too vague and ambiguous? (c) we just don't know the source?

Can someone elaborate? I am genuinely curious. This quote came up in a discussion with a colleague, and was such an "Aha" moment for me that I remember it even now, after 3 years. Maybe it's too obvious for other LessWrongers?

Replies from: Richard_Kennaway, ChristianKl
comment by Richard_Kennaway · 2013-09-08T20:51:24.622Z · LW(p) · GW(p)

It's nothing more than a pun.

comment by ChristianKl · 2013-09-10T16:01:15.194Z · LW(p) · GW(p)

The trust that I place in another person is not two-dimensional, with a real part and an imaginary part.

Even trust that isn't based on good information isn't imaginary.
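For readers tracing the pun to its source, a minimal sketch of the mathematical definition being played on:

\[ z = a + bi, \qquad \operatorname{Re}(z) = a, \quad \operatorname{Im}(z) = b, \quad i^2 = -1 \]

The "imaginary part" b is a formal component of the number, not a claim about unreality, which is why the quote reads as wordplay rather than a model of trust.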

comment by Multiheaded · 2013-09-12T12:45:26.652Z · LW(p) · GW(p)

"Aw, you can't feed your family on minimum wage? well who told you to start a fucking family when your skills are only worth minimum wage?"

Pax Dickinson, former Chief Technology Officer at Business Insider, on rational family planning in the context of modern capitalism.

(in response to "That's perhaps an argument for the parents to starve but the children are moral innocents wrt their creation. Solutions?") - "If you remove all consequences to children from their parents stupid behavior, how will they ever learn any better?"

Him again on personal responsibility, setting proper incentives for the lazy masses, and learning one's place in society early on.

Replies from: MugaSofer, Fronken, Estarlio
comment by MugaSofer · 2013-09-12T16:06:31.913Z · LW(p) · GW(p)

Upvoted on the basis of the intended point, but honestly, you kinda deserved all the downvotes you got for that, man.

Also, I can kinda see saying this to the people in question, as a sort of Heroic Responsibility thing, but what's actually going on here is the precise opposite of Heroic Responsibility.

comment by Fronken · 2013-09-12T14:23:12.372Z · LW(p) · GW(p)

... that is not rationality that is a mild infohazard trying to hack you into taking actions that make people starve. It should be kept away from people and counteragents spread to defend against further outbreaks. Seriously why would you post that as a rationality quote.

Replies from: Multiheaded
comment by Multiheaded · 2013-09-12T14:55:25.952Z · LW(p) · GW(p)

... that is not rationality that is a mild infohazard trying to hack you into taking actions that make people starve.

Exactly.

Seriously why would you post that as a rationality quote.

Because many people in the LW sphere seem to love the same ideas in better and more refined wrapping! See the aforementioned Caplan, Anissimov, etc for just a couple of famous ones.

(Oh, and Vladimir_M, who before his departure also often provided respectability to awful shit, like rampant anti-feminism, demonizing people on welfare, etc. He's probably my most hated high-karma poster here.)

comment by Estarlio · 2013-09-12T14:04:36.125Z · LW(p) · GW(p)

This isn't rational. It's just elitist snobbery. You can use the exact same structure of argument with respect to anything:

Aw, you got raped? Well who told you to go into a room with your friend without a handgun on you? Didn't you know you should be prepared to kill every man around you in case they turn on you?

Structurally identical.

It's an ideology of knives in the dark, the screams of the dying and enslaved, and the blood red light of fire on steel. Those who honestly endorse its underlying principles would just as happily endorse any barbarism on the strength of the defeated's inability to escape it, provided it went on at some suitable distance from them.

Why not be honest and sum up the only real thing it says? - Vae victis.

Replies from: Desrtopa, pragmatist, Dorikka, Fronken, private_messaging
comment by Desrtopa · 2013-09-12T14:18:57.492Z · LW(p) · GW(p)

If you ignore differences in probability of outcome, you'll end up conflating arguments of enormously different meaningful content. For instance, both of the above also have the same structure as

Aw, you broke your leg? Well, who told you to jump off the roof of a three story building?

That two arguments have the same structure need not imply that they are equally valid, if the implications of the premises are different.

Getting raped may be a possible consequence of walking into a room with a friend without a means to defend oneself, but it's by no means a probable consequence, and we have to weigh risks against the limitations precautions impose on us. If the odds of rape in that circumstance were, say, a predictable eighty percent, then for all that the advice pattern-matches to the widely condemned act of "victim shaming," walking into the room without a means of self-defense was a bad idea (disregarding, for the sake of argument, everything that led to that risk arising in the first place).

Replies from: Estarlio
comment by Estarlio · 2013-09-12T15:10:35.882Z · LW(p) · GW(p)

Upvoted.

It is true that a woman in such a situation would be well advised to arm herself. However, a complaint about being raped - personal emotional traumas aside - would be a complaint about the necessity of doing so as much as anything else. The response that she should'a armed herself then doesn't address the real meat of the issue: what sort of society we live in; how we want to relate to one another; whether we're to respond with compassion or dismissive brutalism (or at what point on that scale).

There are things that are the result of natural laws - if you jump off a building with no precautions, then you're probably gonna go splat. It makes limited sense to interpret complaints about such outcomes as complaints about the laws of physics. So the balance in those cases swings more towards preventative advice, in a way that's rarely the case with issues that are the result of human action.

Replies from: Desrtopa
comment by Desrtopa · 2013-09-12T15:38:35.987Z · LW(p) · GW(p)

There's certainly a concern, very pressing in the case of the rape example, that if the risk is too high then there's a responsibility upon society to mitigate it. In the case of the jumping off the roof example, building codes could mandate that the building be made impossible to jump off of or the surroundings be cushioned, but in this case most people would probably agree that the costs on society are too high to be justified in light of the minimal and easily avoidable risk. The case of the minimum wage worker falls somewhere in the middle ground between these, where the consequences are highly predictable, and the actions that would cause them avoidable, but with a significant cost of avoidance, like being unable to trust one's acquaintances, and unlike being unable to jump off a roof. And of course, as in both the other examples, limiting that risk comes with an associated cost.

Whether society should be structured to allow people to raise families while working on minimum wage is a question of cost/benefit analysis, which in this case is likely to be quite difficult, so it doesn't help to declare the question structurally similar to other, easier questions of cost/benefit analysis.

Replies from: Estarlio
comment by Estarlio · 2013-09-12T16:04:09.730Z · LW(p) · GW(p)

I don't disagree with you on any particular point there. However, the quote I was responding to wasn't, as I see it, attempting to explore the cost/benefit of raising minimum wage or subsidising the future of children. It was stating that they just shouldn't have kids - and in that much represented an effective blank cheque. That seems the opposite of your, much more nuanced, approach; bound by implications of fact and reason that are going to be specific to particular issues and cases and thus can't be generalised in the same way.

Replies from: Desrtopa
comment by Desrtopa · 2013-09-12T16:12:42.884Z · LW(p) · GW(p)

Well, I'm not particularly in agreement with the original quote either, I just don't endorse treating it as a Boo Light, against which any sort of argument is praiseworthy.

comment by pragmatist · 2013-09-12T14:21:03.588Z · LW(p) · GW(p)

This isn't rational. It's just elitist snobbery.

I'm pretty sure Multiheaded realizes this, and intended this post as satire. It might help to read his post in the context of this comment.

Replies from: MugaSofer
comment by MugaSofer · 2013-09-12T15:58:37.928Z · LW(p) · GW(p)

Looks like you were right. Upvoted.

comment by Dorikka · 2013-09-12T22:26:09.241Z · LW(p) · GW(p)

It's an ideology of knives in the dark, the screams of the dying and enslaved, and the blood red light of fire on steel.

Independently from the conversation as a whole, thank you for this sentence -- I like the imagery. :)

comment by Fronken · 2013-09-12T14:18:22.404Z · LW(p) · GW(p)

This comment, while pointing out real and serious issues - I agree with it - contains way too much Dark Arts for a LessWrong comment.

comment by private_messaging · 2013-09-12T14:07:05.081Z · LW(p) · GW(p)

Signalling too, probably. The guy is also saying that he thinks he'd fare well in a dog-eat-dog world, sort of like early-90s Russia, but I kind of doubt he's got the skill set for that.

Replies from: Multiheaded
comment by Multiheaded · 2013-09-12T14:42:49.319Z · LW(p) · GW(p)

Haha, no shit.

(Source: family experience.)

comment by Eugine_Nier · 2013-09-14T02:14:36.651Z · LW(p) · GW(p)

Bayes' theorem is prejudiced.

Michael Anissimov

comment by anandjeyahar · 2013-09-06T20:07:14.003Z · LW(p) · GW(p)

Growing up: that mysterious (temporal) inflection point in your decision-making timeline, when you decide which parts of you (habits, signs, etc.) will take time to change, and instead use them as a signal to change your working strategy on the problem you're solving. ~ Aang Jie

Replies from: shminux
comment by shminux · 2013-09-06T22:50:49.604Z · LW(p) · GW(p)

Who on Earth is Aang Jie?

comment by johnswentworth · 2013-09-02T02:32:32.838Z · LW(p) · GW(p)

"You will begin to touch heaven, Jonathan, in the moment that you touch perfect speed. And that isn't flying a thousand miles per hour, or a million, or flying at the speed of light. Because any number is a limit, and perfection doesn't have limits. Perfect speed, my son, is being there."

--Richard Bach, Jonathan Livingston Seagull

Replies from: scav
comment by scav · 2013-09-04T10:46:09.737Z · LW(p) · GW(p)

That is incoherent at best. Is there any context to the quote that might explain why it is here?

Replies from: johnswentworth
comment by johnswentworth · 2013-09-07T01:19:40.822Z · LW(p) · GW(p)

I included the spiritual junk just to stay true to the original wording. The meat of it is the "Perfect speed ... is being there" part. Setting aside concerns of relativistic accuracy, the point I think it makes quite well is that people often fail to think far enough outside the box to even realize what optimal would look like. Instead, people just settle for some intuitively obvious good-enough objective. Given the severe downvoting, I will omit spiritual junk in future quotes.

comment by Manfred · 2013-09-09T11:57:35.383Z · LW(p) · GW(p)

What sphinx of cement and aluminum bashed open their skulls and ate up their brains and imagination? Moloch! Solitude! Filth! Ugliness! Ashcans and unobtainable dollars! Children screaming under the stairways! Boys sobbing in armies! Old men weeping in the parks! Moloch! Moloch!

-Allen Ginsberg

comment by Eugine_Nier · 2013-09-02T20:17:46.346Z · LW(p) · GW(p)

Offending people is a necessary and healthy act. Every time you say something that's offensive to another person, you just caused a discussion. You just forced them to have to think.

Louis C.K.

Replies from: Salemicus, MugaSofer, Dirk
comment by Salemicus · 2013-09-02T23:15:59.926Z · LW(p) · GW(p)

Offending people is like vomiting. It can be a necessary and healthy act, but it ought to be a last resort, and most of the time it's an indication of bad health, and people who take a special pride in it* are immature and anti-social. To quote A.P. Herbert:

Nothing is more difficult to do than to make a verbal observation which will give no offence and bring about more good than harm; and many great men die in old age without ever having done it.

*e.g. "Oh man, I got so wasted last night and threw up everywhere!"

comment by MugaSofer · 2013-09-02T20:57:52.691Z · LW(p) · GW(p)

No, you caused an argument. And if you keep annoying them, you'll never get an actual discussion, either.

comment by Dirk · 2013-09-02T20:24:22.136Z · LW(p) · GW(p)

However, strong emotions can cause us to think in feelings rather than in abstract concepts. Usually some emotional distance is required for rational debate.