Open thread, August 4 - 10, 2014

post by polymathwannabe · 2014-08-04T12:20:18.540Z · LW · GW · Legacy · 309 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one.

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.


comment by Bakkot · 2014-08-05T02:34:00.496Z · LW(p) · GW(p)

I wrote a userscript / Chrome extension / zero-installation bookmarklet to make finding recent comments over at Slate Star Codex a lot easier. Observe screenshots. I'll also post this next time SSC has a new open thread (unless Yvain happens to notice this).

Replies from: Creutzer, Risto_Saarelma, NancyLebovitz, army1987
comment by Creutzer · 2014-08-05T20:39:00.763Z · LW(p) · GW(p)

Great idea and nicely done! It also had the additional benefit of being my very first interaction with JavaScript, because I needed to modify some things. (Specifically, avoiding the use of localStorage.)

Replies from: Bakkot
comment by Bakkot · 2014-08-05T20:46:53.295Z · LW(p) · GW(p)

I'm curious what you used instead (cookies?), or did you just make a historyless version? Also, why did you need that? localStorage isn't exactly a new feature (hell, IE has supported it since version 8, I think).

Replies from: Creutzer
comment by Creutzer · 2014-08-05T21:02:29.363Z · LW(p) · GW(p)

It appears that my Firefox profile has some security features that mess with localStorage in a way that I don't understand. I used Greasemonkey's GM_getValue/GM_setValue instead. (Important, and maybe obvious, but not to me: their use has to be declared with @grant in the UserScript preamble.)
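For anyone else hitting the same wall, here is a minimal sketch of that swap. The script name, @include pattern, and storage key are placeholders, but GM_getValue/GM_setValue and the @grant lines are the actual Greasemonkey API:

```javascript
// ==UserScript==
// @name        SSC new-comment marker (storage example)
// @include     http://slatestarcodex.com/*
// @grant       GM_getValue
// @grant       GM_setValue
// ==/UserScript==

// Persist the last-visit timestamp via Greasemonkey's own storage instead of
// window.localStorage. Without the @grant lines above, GM_getValue and
// GM_setValue are simply undefined and the script fails.
var lastVisit = GM_getValue('lastVisit', 0);  // defaults to 0 on the first run
GM_setValue('lastVisit', Date.now());
```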

comment by Risto_Saarelma · 2014-08-05T06:49:34.052Z · LW(p) · GW(p)

This looks excellent.

comment by NancyLebovitz · 2014-08-06T19:03:25.419Z · LW(p) · GW(p)

I tried downloading it by clicking on "install the extension", but it doesn't seem to get to my browser (Chrome). Am I missing something?

Replies from: Bakkot
comment by Bakkot · 2014-08-06T21:07:19.013Z · LW(p) · GW(p)

"Install the extension" is a link bringing you to the chrome web store, where you can install it by clicking in the upper-right. The link is this, in case it's Github giving you trouble somehow.

If the Chrome Web Store isn't recognizing that you're running Chrome, that's probably not a thing I can fix, though you could try saving this link as something.user.js, opening chrome://extensions, and dragging the file onto the window.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-08-07T05:01:20.299Z · LW(p) · GW(p)

Thank you. That worked. I never would have guessed that an icon which simply had the word "free" on it was the download button.

Would it be worth your while to do this for LW? It makes me crazy that the purple edges for new comments are irretrievably lost if the page is downloaded again.

Replies from: Bakkot
comment by Bakkot · 2014-08-07T19:57:34.547Z · LW(p) · GW(p)

Would it be worth your while to do this for LW?

Sure. Remarkably little effort required, it turned out. (Chrome extension is here.)

I guess I'll make a post about this too, since it's directly relevant to LW.

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2014-08-10T08:23:10.539Z · LW(p) · GW(p)

This doesn't seem to handle stuff deep enough in the reply chain to be behind "continue this thread" links. On the massive threads where you most need the thing, a lot of the discussion is going to end up beyond those.

Replies from: Bakkot
comment by Bakkot · 2014-08-10T15:29:52.866Z · LW(p) · GW(p)

It seems to work for me. "Continue this thread" brings you to a new page, so you'll have to set the time again, is all. Comments under a "Load more" won't be properly highlighted until you click in and out of the time textbox after loading them.

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2014-08-11T03:38:46.946Z · LW(p) · GW(p)

The use case is that I go to the top page of a huge thread, the only new messages are under a "Continue this thread" link, and I want the widget to tell me that there are new messages and help me find them. I don't want to have to open every "Continue" link to see if there are new messages under one of them.

Replies from: Bakkot
comment by Bakkot · 2014-08-11T04:48:01.158Z · LW(p) · GW(p)

Ah. That's much more work, since there's no way of knowing whether there are new comments in such a situation without fetching all of those pages. I might make that happen at some point, but not tonight.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-08-15T16:49:53.726Z · LW(p) · GW(p)

Thanks very much. I think there's an "unpack the whole page" program somewhere. Anyone remember it?

comment by A1987dM (army1987) · 2014-08-05T22:00:08.740Z · LW(p) · GW(p)

Thanks a million!

comment by Stuart_Armstrong · 2014-08-07T14:07:15.453Z · LW(p) · GW(p)

In your open thread inbox, Less Wrong comments have the options "context" and "report" (in that order), whereas private messages have "report" and "reply" (in that order). Many times I've accidentally pressed "report" on a private message, and fortunately caught myself before continuing.

I'd suggest reversing the order of "report" and "reply", so that they fit with the comments options.

Right, that's my tiny suggestion for this month :-)

comment by Bakkot · 2014-08-05T02:22:27.757Z · LW(p) · GW(p)

I wrote a userscript to add a delay and checkbox reading "I swear by all I hold sacred that this comment supports the collective search for truth to the very best of my abilities." before allowing you to comment on LW. Done in response to a comment by army1987 here.

Edit: per NancyLebovitz and ChristianKl below, suggestions for alternative default messages are welcome.
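The script itself isn't reproduced here, but a minimal sketch of the checkbox half might look like the following. The CSS selector is a guess, the delay is omitted, and Bakkot's actual script presumably does more:

```javascript
// ==UserScript==
// @name        LW comment oath checkbox (sketch)
// @include     http://lesswrong.com/*
// @grant       none
// ==/UserScript==

// NOTE: '.comment button[type=submit]' is a placeholder selector; the real
// comment form's markup would need to be checked against the live page.
var OATH = "I swear by all I hold sacred that this comment supports the " +
           "collective search for truth to the very best of my abilities.";

document.querySelectorAll('.comment button[type=submit]').forEach(function (button) {
  var box = document.createElement('input');
  box.type = 'checkbox';

  var label = document.createElement('label');
  label.appendChild(box);
  label.appendChild(document.createTextNode(' ' + OATH));
  button.parentNode.insertBefore(label, button);

  button.disabled = true;                    // blocked until the box is ticked
  box.addEventListener('change', function () {
    button.disabled = !box.checked;
  });
});
```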

Replies from: NancyLebovitz, ChristianKl, army1987
comment by NancyLebovitz · 2014-08-05T14:59:25.978Z · LW(p) · GW(p)

"To the very best of my abilities" seems excessive to me, or at least I seem to do reasonably well with "according to the amount of work I'm willing to put in, and based on pretty good habits".

I'm not even sure what I could do to improve my posting much. I could be more careful to not post when I'm tired or angry, and that probably makes sense to institute as a habit. On the other hand, that's getting rid of some of the dubious posting, which is not the same thing as improving the average or the best posts.

Replies from: satt
comment by satt · 2014-08-07T02:01:16.547Z · LW(p) · GW(p)

Even when I'd only been here a few weeks, your posting had already caught my eye as unusually mindful & civil, and nothing since has changed my impression that you're far better than most of us at conversing in good faith and with equanimity.

comment by ChristianKl · 2014-08-05T13:38:42.757Z · LW(p) · GW(p)

Given the recent discussion about how rituals can give the appearance of cultishness, it's probably not a good time to bring that up at the moment ;)

comment by A1987dM (army1987) · 2014-08-05T22:03:07.957Z · LW(p) · GW(p)

Testing this...

Replies from: army1987
comment by A1987dM (army1987) · 2014-08-05T22:06:03.252Z · LW(p) · GW(p)

Nope, doesn't seem to work. (I am probably doing something wrong as I never used Greasemonkey before.)

Replies from: Bakkot
comment by Bakkot · 2014-08-05T22:35:50.381Z · LW(p) · GW(p)

Just tested this on a clean FF profile, so it's almost certainly something on your end. Did you successfully install the script? You should've gotten an image which looks something like this, and if you go to Greasemonkey's menu while on a LW thread, you should be able to see it in the list of scripts run for that page. Also, note that you have to refresh/load a new page for it to show up after installation.

Oh, and it only works for new comments, not new posts. It should look something like this, and similarly for replies.

ETA: helpful debugging info: if you can, let me know what page it's not working on, and let me know if there are any errors in the developer console (shift-control-K or command-option-K for Windows and Mac respectively).

Replies from: army1987
comment by A1987dM (army1987) · 2014-08-09T08:59:57.502Z · LW(p) · GW(p)

I had interpreted “Save this file as” in an embarrassingly wrong way. It works now!

(Maybe editing the comment should automatically uncheck the box, otherwise I can hit “Reply”, check the box straight away, then start typing my comment.)

comment by BereczFereng · 2014-08-04T23:30:07.989Z · LW(p) · GW(p)

Does anyone know if something urgent has been going on at MIRI, other than the Effective Altruism Summit? I am a job applicant -- I have no idea about my status as one. Days ago I was promised a chat today, but nothing was arranged regarding time or medium, and now it is the end of the day. I sent my application weeks ago and have been in contact with three of the employees who seem to work on the management side of things. This is a bit frustrating. Ironically, I applied as Office Manager, and hope that (if hired) I would be doing my best to take care of exactly these things -- putting things on a calendar, working to help create a protocol for 'rejecting', 'accepting', or 'deferring' employee applications, etc. Have other people had similarly disorganized correspondence with MIRI? Or has it mostly been organized, suggesting that I take this experience as a sure sign of rejection?

Replies from: None, eggman
comment by [deleted] · 2014-08-05T13:41:24.315Z · LW(p) · GW(p)

Have other people had similar, disorganized correspondences with MIRI?

Yes.

comment by eggman · 2014-08-08T06:25:09.045Z · LW(p) · GW(p)

Apparently, in the days leading up to the Effective Altruism Summit, there was a conference on Artificial Intelligence keeping the research associates out of town. The source is my friend, who is interning at MIRI right now. So, anyway, they might have been even busier than you thought. I hope this has cleared up now.

Replies from: BereczFereng
comment by BereczFereng · 2014-08-10T03:53:11.116Z · LW(p) · GW(p)

Still haven't heard anything back from them in any sort of way. But thanks for making their circumstances clearer!

Replies from: BereczFereng
comment by BereczFereng · 2014-08-13T19:57:54.829Z · LW(p) · GW(p)

Heard back & talked with them. My personal issue is now resolved.

comment by sixes_and_sevens · 2014-08-04T13:55:32.734Z · LW(p) · GW(p)

Oblique request made without any explanation: can anyone provide examples of beliefs that are incontrovertibly incorrect, but which intelligent people will nonetheless arrive at quite reasonably through armchair-theorising?

I am trying to think up non-politicised, non-controversial examples, yet every one I come up with is a reliable flame-war magnet.

ETA: I am trying to reason about disputes where on the one hand you have an intelligent, thoughtful person who has very expertly reasoned themselves into a naive but understandable position p, and on the other hand, you have an individual who possesses a body of knowledge that makes a strong case for the naivety of p.

What kind of ps exist, and do they have common characteristics? All I can come up with are politically controversial ps, but I'm starting my search from a politically-controversial starting point. The motivating example for this line of reasoning is so controversial that I'm not touching it with a shitty-stick.

Replies from: drethelin, satt, pianoforte611, Alejandro1, ChristianKl, None, solipsist, Lumifer, Manfred, philh, falenas108, pragmatist, philh, KnaveOfAllTrades, None, NancyLebovitz, Leonhart, John_Maxwell_IV
comment by drethelin · 2014-08-04T18:34:29.363Z · LW(p) · GW(p)

Mathematical arguments happen all the time over whether 0.99999...=1 but I'm not sure if that's interesting enough to count for what you want.

Replies from: ThisSpaceAvailable
comment by ThisSpaceAvailable · 2014-08-06T23:19:30.079Z · LW(p) · GW(p)

That "0.99999...." represents a concept that evaluates to 1 is a question of notation, not mathematics. 0.99999... does not inherently equal 1; rather, by convention, it is understood to mean 1. The debate is not about the territory, it is about what the symbols on the map mean.

Replies from: KnaveOfAllTrades, Adele_L
comment by KnaveOfAllTrades · 2014-08-07T00:22:58.706Z · LW(p) · GW(p)

Where does one draw the line, if at all? "1+1 does not inherently equal 2; rather, by convention, it is understood to mean 2. The debate is not about the territory, it is about what the symbols on the map mean." It seems to me that--very 'mysteriously'--people who understand real analysis never complain "But 0.999... doesn't equal 1"; sufficient mathematical literacy seems to kill any such impulse, which seems very telling to me.

Replies from: tut, ThisSpaceAvailable
comment by tut · 2014-08-07T15:31:57.864Z · LW(p) · GW(p)

Yes, and that's a case of "you don't understand mathematics, you get used to it." Which applies exactly to notation and related conventions.

Edit:

More specifically, if we let a_k = 9/10^k, and let s_n be the sum from k=1 to n of a_k, then the limit of s_n as n goes to infinity will be 1, but 1 won't be in {s_n | n in N}.

When somebody who is used to calculus sees ".99..." What they are thinking of is the limit, which is 1.

But before you get used to that, most likely what you think of is some member of {s_n | n in N} with an n that's large enough that you can't be bothered to write all the nines, but which is still finite.
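(For concreteness, the closed form of those partial sums, standard geometric-series algebra, makes the gap explicit:)

$$
s_n \;=\; \sum_{k=1}^{n} \frac{9}{10^k} \;=\; 1 - 10^{-n},
\qquad
\lim_{n\to\infty} s_n = 1,
\qquad
s_n < 1 \ \text{for every finite } n.
$$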

comment by ThisSpaceAvailable · 2014-08-07T01:07:06.822Z · LW(p) · GW(p)

Exactly. The arguments about whether 0.99999.... = 1 are lacking a crucial item: a rigorous definition of what "0.9999..." refers to. The argument isn't "Is the limit as n goes to infinity of the sum from k=1 to n of 9*10^-k equal to 1?" It's "Here's a sequence of symbols. Should we assign this sequence of symbols the value of 1, or not?" Which is just a silly argument to have. If someone says "I don't believe that 0.9999.... = 1", the correct response (unless they have sufficient real analysis background) is not "Well, here's a proof of that claim", it's "Well, there are various axioms and definitions that lead to that being treated as being equal to 1".

Replies from: KnaveOfAllTrades
comment by KnaveOfAllTrades · 2014-08-07T01:42:06.824Z · LW(p) · GW(p)

It's "Here's a sequence of symbols. Should we assign this sequence of symbols the value of 1, or not?" Which is just a silly argument to have.

It's not. The "0.999... doesn't equal 1" meme is largely crackpottery, and promotes amateur overconfidence and (arguably) mathematical illiteracy.

Terms are precious real estate, and their interpretations really are valuable. Our thought processes and belief networks are sticky; if someone has a crap interpretation of a term, then it will at best cause unnecessary friction in using it (e.g. if you define the natural numbers to include -1,...,-10 and have to retranslate theorems because of this), and at worst one will lose track of the translation between interpretations and end up propagating false statements ("2^n can sometimes be less than 2 for n natural").

the correct response (unless they have sufficient real analysis background) is not "Well, here's a proof of that claim", it's "Well, there are various axioms and definitions that lead to that being treated as being equal to 1".

It would be an accurate response (even if not the most pragmatic or tactful) to say, "Sorry, when you pin down what's meant precisely, it turns out to be a much more useful convention to define the proposition 0.999...=1 such that it is true, and you basically have to perform mental gymnastics to try to justify any usage where it's not true. There are technically alternative schemas where this could fail or be incoherent or whatever, but unless you go several years into studying math (and even then maybe only if you become a logician or model theorist or something), those are not what you'll be encountering."

One could define 'marble' to mean 'nucleotide'. But I think that somebody who looked down on a geneticist for complaining about people using 'marble' as if it means 'nucleotide', and who said it was a silly argument as if the geneticist and the person who invented the new definition were Just As Bad As Each Other, would be mistaken, and I would suspect they were more interested in signalling their Cleverness via relativist metacontrarianism than getting their hands dirty figuring out the empirical question of which definitions are useful in which contexts.

Replies from: KnaveOfAllTrades, ThisSpaceAvailable
comment by KnaveOfAllTrades · 2014-08-07T02:03:50.312Z · LW(p) · GW(p)

Actually, I could imagine you reading that comment and feeling it still misses your point that 0.999... is undefined, or has different definitions or senses, in amateur discussions. In that case, I would point to the idea that one can make propositions about a primitive concept that turn out to be false about the mature form of it. One could make claims about evidence, causality, free will, knowledge, numbers, gravity, light, etc. that would be true under one primitive sense and false under another. Then minutes or days or months or years or centuries or millennia later it turns out that the claims were false about the correct definition.

It would be a sin of rationality to assume that, since there was a controversy over definitions, and some definitions proved the claim and some disproved it, no side was more right than another. One should study examples of where people made correct claims about fuzzy concepts, to see what we might learn in our own lives about how these things resolve. Were there hints that the people who turned out to be incorrect ignored? Did they fail to notice their confusion? Telltale features of the problem that favoured a different interpretation? Etc.

comment by ThisSpaceAvailable · 2014-08-07T04:34:32.106Z · LW(p) · GW(p)

It's not. The "0.999... doesn't equal 1" meme is largely crackpottery

A lot of the "proofs" that it does equal 1 (in fact, all of them that don't involve a rigorous treatment of infinite series) are fallacious, and so the refusal to accept them is actually a reasonable response.

You seem to be making an assertion about me in your last paragraph, but doing so very obliquely. Your analogy is not very good, as people do not try to argue that one can logically prove that "marble" does not mean "nucleotide"; they just say that it is defined otherwise.

If we're analogizing ".9999... = 1" to "marble doesn't mean't nucleotide", then "

Replies from: KnaveOfAllTrades
comment by KnaveOfAllTrades · 2014-08-07T04:51:47.218Z · LW(p) · GW(p)

You seem to be making an assertion about me in your last paragraph, but doing so very obliquely.

Apologies for that. I don't think that that specific failure mode is particularly likely in your case, but it seems plausible to me that other people thinking in that way has shifted the terms of discourse such that that form of linguistic relativism is seen as high-status by a lot of smart people. I am more mentioning it to highlight the potential failure mode; if part of why you hold your position is that it seems like the kind of position that smart people would hold, but I can account for those smart people holding it in terms of metacontrarianism, then that partially screens off that reason for endorsing the smart people's argument.

It looks like you submitted your comment before you meant to, so I shall probably await its completion before commenting on the rest.

comment by Adele_L · 2014-08-09T19:01:54.326Z · LW(p) · GW(p)

And yet I somehow doubt most of these people reject connectedness.

comment by satt · 2014-08-07T00:56:59.933Z · LW(p) · GW(p)

I thought about this on & off over the last couple of days and came up with more candidates than you can shake a shitty stick at. Some of these are somewhat political or controversial, but I don't think any are reliable flame-war magnets. I expect some'll ring your cherries more than others, but since I can't tell which, I'll post 'em all and let you decide.

  1. The answer to the Sleeping Beauty puzzle is obviously 1/2.

  2. Rational behaviour, being rational, entails Pareto optimal results.

  3. Food availability sets a hard limit on the number of kids people can have, so when people have more food they have more kids.

  4. Truth is an absolute defence against a libel accusation.

  5. If a statistical effect is so small that a sample of several thousand is insufficient to reliably observe it, the effect's too small to matter.

  6. Controlling for an auxiliary variable, or matching on that variable, never worsens the bias of an estimate of a causal effect.

  7. Human nature being as brutish as it is, most people are quite willing to be violent, and their attempts at violence are usually competent.

  8. In the increasingly fast-paced and tightly connected United States, residential mobility is higher than ever.

  9. The immediate cause of death from cancer is most often organ failure, due to infiltration or obstruction by spreading tumours.

  10. Aumann's agreement theorem means rationalists may never agree to disagree.

  11. Friction, being a form of dissipation, plays no role in explaining how wings generate lift.

  12. Seasons occur because Earth's distance from the Sun changes during Earth's annual orbit.

  13. Beneficial mutations always evolve to fixation.

  14. Multiple discovery is rare & anomalous.

  15. The words "male" & "female" are cognates.

  16. Given the rise of online piracy, the ridiculous cost of tickets, and the ever-growing convenience of other forms of entertainment, cinema box office receipts must be going down & down.

  17. Looking at voting in an election from the perspective of timeless decision theory, my voting decision is probably correlated and indeed logically linked with that of thousands of people relatively likely to agree with my politics. This could raise the chance of my influencing an election above negligibility, and I should vote accordingly.

  18. The countries with the highest female life expectancies are approaching a physiologically fixed hard limit of 65 — sorry, 70 — sorry, 80 — sorry, 85 years.

  19. The answer to the Sleeping Beauty puzzle is obviously 1/3.

Language in general might be a rich source of these, between false etymologies, false cognates, false friends, and eggcorns.

Replies from: pragmatist, sixes_and_sevens, bramflakes
comment by pragmatist · 2014-08-08T06:09:07.476Z · LW(p) · GW(p)

Thanks for that list. I believed (or at least, assigned a probability greater than 0.5 to) about five of those.

comment by sixes_and_sevens · 2014-08-07T09:48:31.121Z · LW(p) · GW(p)

Thanks for this. These are all really good.

Replies from: satt
comment by satt · 2014-08-08T03:00:03.090Z · LW(p) · GW(p)

Now I just need to think of another 21 and I'll have enough for a philosophy article!

comment by bramflakes · 2014-08-11T15:49:16.861Z · LW(p) · GW(p)

Food availability sets a hard limit on the number of kids people can have, so when people have more food they have more kids.

... don't they? (in the long run)

Replies from: Lumifer, satt
comment by Lumifer · 2014-08-11T16:12:46.034Z · LW(p) · GW(p)

... don't they?

No, they don't -- look at contemporary Western countries and their birth rates.

Replies from: bramflakes
comment by bramflakes · 2014-08-11T17:05:10.819Z · LW(p) · GW(p)

Oh yes I know that, I just meant in the long-long run. This voluntary limiting of birth rates can't last for obvious evolutionary reasons.

Replies from: Lumifer
comment by Lumifer · 2014-08-11T17:19:34.429Z · LW(p) · GW(p)

I have no idea about the "long-long" run :-)

The limiting of birth rates can last for a very long time as long as you stay at replacement rates. I don't think "obvious evolutionary reasons" apply to humans any more, it's not likely another species will outcompete us by breeding faster.

Replies from: bramflakes
comment by bramflakes · 2014-08-11T18:54:11.018Z · LW(p) · GW(p)

Any genes that make people defect by having more children are going to be (and are currently being) positively selected.

Besides, reducing birthrates to replacement isn't anything near a universal phenomenon, see the Mormons and Amish.

It's got nothing to do with another species out-competing us - competition between humans is more than enough.

Replies from: Lumifer
comment by Lumifer · 2014-08-11T18:57:27.892Z · LW(p) · GW(p)

Any genes that make people defect by having more children are going to be (and are currently being) positively selected.

This observation should be true throughout the history of the human race, and yet the birth rates in the developed countries did fall off the cliff...

Replies from: bramflakes, Azathoth123
comment by bramflakes · 2014-08-11T19:16:42.788Z · LW(p) · GW(p)

And animals don't breed well in captivity.

Until they do.

comment by Azathoth123 · 2014-08-13T05:42:01.323Z · LW(p) · GW(p)

and yet the birth rates in the developed countries did fall off the cliff...

This happened barely half a generational cycle ago. Give evolution time.

Replies from: Lumifer
comment by Lumifer · 2014-08-13T14:44:47.025Z · LW(p) · GW(p)

Give evolution time.

So what's your prediction for what will happen when?

comment by satt · 2014-08-11T23:34:18.456Z · LW(p) · GW(p)

... don't they? (in the long run)

In the "long-long run", given ad hoc reproductive patterns, yeah, I'd expect evolution to ratchet average human fertility higher & higher until much of humanity slammed into the Malthusian limit, at which point "when people have more food they have more kids" would become true.

Nonetheless, it isn't true today, it's unlikely to be true for the next few centuries unless WWIII kicks off, and may never come to pass (humanity might snuff itself out of existence before we go Malthusian, or the threat of Malthusian Assured Destruction might compel humanity to enforce involuntary fertility limits). So here in 2014 I rate the idea incontrovertibly false.

comment by pianoforte611 · 2014-08-04T16:25:42.556Z · LW(p) · GW(p)

That's a tall order. I'll try:

Noticing that people who are the best in any sport practice the most and concluding that being good at a sport is simply a matter of practice and determination. Tabula Rasa in general.

The supply-demand model of the minimum wage? Is this political? I'm not saying the minimum wage is good or bad, just that the supply-demand model can't settle the question, yet people learning about economics tend to be easily convinced by the simple explanation.

That thermodynamics proves that weight loss + maintenance is simply a matter of diet and exercise (this is more Yudkowsky's fight than mine).

comment by Alejandro1 · 2014-08-04T14:32:41.107Z · LW(p) · GW(p)

I doubt it is possible to find non-controversial examples of anything, and especially of things plausible enough to be believed by intelligent non-experts, outside of the hard sciences.

If this is true, the only plausible examples would be such as "an infinity cannot be larger than another infinity", "time flows uniformly regardless of the observer", "biological species have unchanging essences", and other intuitively plausible statements unquestionably contradicted by modern hard sciences.

comment by ChristianKl · 2014-08-04T15:11:03.847Z · LW(p) · GW(p)

Most new drugs fail clinical trials.

Intelligent people make theories about how a drug is supposed to work and think it will help cure some illness. Then, when these drugs are brought into clinical trials, more than 90% of them still fail to live up to their theoretical promise.

Replies from: gwern, satt
comment by gwern · 2014-08-04T15:28:38.956Z · LW(p) · GW(p)

A fun one which came up recently on IRC: everyone thinks that how your parents raise you is incredibly important, this is so obvious it doesn't need any proof and is universal common sense (how could influencing and teaching a person from scratch to 18 years old not have deep and profound effects on them?), and you can find extended discussions of the best way to raise kids from Plato's Republic to Rousseau's Emile to Spock.

Except twin studies consistently estimate that the influence of 'shared environment' (the home) is small or near-zero for many traits compared to genetics and randomness/nonshared-environment.

If you want to predict whether someone will be a smoker or smart, it doesn't matter whether they're raised by smokers or not (to borrow an example from The Nurture Assumption*); it just matters whether their biological parents were smokers and whether they get unlucky.

This is so deeply counterintuitive and unexpected that even people who are generally familiar with the relevant topics like IQ or twin studies typically don't know about this or disbelieve it.

(Another example is probably folk physics: Newtonian motion is true, experimentally confirmed, mathematically logical, and completely unintuitive and took millennia to be developed after the start of mechanics.)

* Rich's citation is to Rowe 1994, The Limits of Family Influence: Genes, Experience, and Behavior; from pg204:

But this interpretation foolishly neglects to consider the genetic component of parent-child similarity. Table 7.2 summarizes reports of two twin studies, an adoptive study, and a family study. In all these studies, the offspring of smokers were adults at the time they were surveyed. Smoking's heritability averaged 43%, whereas smoking's rearing environmental variation was close to zero. [Shared rearing variation (c^2): N/A (family, Eysenck (1980)); <0% (Twin, Carmelli, Swan, Robinette, & Fabsitz (1990)); <0% (Twin, Swan, Carmelli, Rosenman, Fabsitz, & Christian (1990)); <0% (Adoptive, Eysenck (1980)); mean: 0%] In other words, effects of rearing variation (e.g. parents' lighting up or not, or having cigarettes in the home or not) were nil by the time the children had reached adulthood. In Eysenck's (1980) report on adoptees, the smoking correlation of biologically unrelated parent-child pairs was essentially zero (r = -.02). Parental smoking may influence a child's risk through genetic inheritance: The role of parents is a passive one, providing a set of genes at loci relevant to smoking risk, but not socially influencing their offspring.
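(A note on where figures like "shared rearing variation close to zero" come from: in the simplest classical twin design they fall out of Falconer-style algebra on the identical-twin (MZ) and fraternal-twin (DZ) correlations. This is the textbook sketch, not necessarily the exact model used in the studies Rowe tabulates:)

$$
h^2 \approx 2\,(r_{MZ} - r_{DZ}),
\qquad
c^2 \approx 2\,r_{DZ} - r_{MZ},
\qquad
e^2 \approx 1 - r_{MZ}.
$$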

Replies from: David_Gerard, IlyaShpitser, Torello, Viliam_Bur, Azathoth123
comment by David_Gerard · 2014-08-04T16:16:00.187Z · LW(p) · GW(p)

A fun one which came up recently on IRC: everyone thinks that how your parents raise you is incredibly important, this is so obvious it doesn't need any proof and is universal common sense, and you can find extended discussions of the best way to raise kids from Plato's Republic to Rousseau's Emile to Spock.

Except twin studies consistently estimate that the influence of 'shared environment' (the home) is small or near-zero for many traits compared to genetics and randomness/nonshared-environment.

This is quite possibly the most comforting scientific result ever for me as a parent, by the way.

Replies from: Prismattic, gwern
comment by Prismattic · 2014-08-05T03:28:38.369Z · LW(p) · GW(p)

Whereas for me, it's horrifying, given that my ex-spouse turned out to be an astonishingly horrible person.

I seem to recall Yvain posting a link to something he referred to as the beginnings of a possible rebuttal to The Nurture Assumption; I suppose I shall have to hang my hopes on that.

Replies from: gjm
comment by gjm · 2014-08-05T10:24:18.279Z · LW(p) · GW(p)

It may or may not be comforting to reflect that your ex-spouse is probably less horrible than s/he seems to you. (Just on general outside-view principles; I have no knowledge of your situation or your ex.)

comment by gwern · 2014-08-04T16:40:57.304Z · LW(p) · GW(p)

You feared more than you hoped, eh?

comment by IlyaShpitser · 2014-08-04T16:11:04.714Z · LW(p) · GW(p)

Old epi jungle saying: "the causal null is generally true."

Replies from: gwern
comment by gwern · 2014-08-04T18:38:56.148Z · LW(p) · GW(p)

'Shh, kemo sabe - you hear that?' 'No; the jungle is silent tonight.' 'Yes. The silence of the p-values. A wild publication bias stalks us. We must be cautious'.

comment by Torello · 2014-08-04T21:29:14.007Z · LW(p) · GW(p)

What is IRC?

Replies from: erratio, Kaj_Sotala
comment by erratio · 2014-08-04T21:30:40.230Z · LW(p) · GW(p)

Get off my lawn

comment by Viliam_Bur · 2014-08-06T07:32:08.698Z · LW(p) · GW(p)

So... does it mean that it's completely irrelevant who adopted Harry Potter, because the results would be the same anyway?

Or is the correct model something like: abuse can change things to worse, but any non-abusive parenting simply means the child will grow up determined by their genes? That is, we have a biologically set "destiny", and all the environment can do is either help us reach this destiny or somehow cripple us halfway (by abuse, by lack of nutrition, etc.).

Replies from: gwern, satt, drethelin, solipsist
comment by gwern · 2014-08-06T16:23:26.270Z · LW(p) · GW(p)

Or is the correct model something like: abuse can change things to worse, but any non-abusive parenting simply means the child will grow up determined by their genes? That is, we have a biologically set "destiny", and all the environment can do is either help us reach this destiny or somehow cripple us halfway (by abuse, by lack of nutrition, etc.).

In a home environment within the normal range for a population, the home environment will matter little in a predictable sense on many traits, compared to the genetic legacy and to random events/choices/biological events/accidents/etc. There are some traits it will matter a lot on, and in a causal sense the home environment may determine various important outcomes, but not in a way that is predictable or easily measured. The other category of 'nonshared environment' is often bigger than the genetic legacy, so speaking of a biologically set destiny is misleading: biologically influenced would be a better phrase.

Replies from: banx
comment by banx · 2014-08-06T19:31:13.367Z · LW(p) · GW(p)

Has this been demonstrated for home environments in the developing world or sub-middle class home environments in the developed world? My prior understanding was that it had not been.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-08-09T01:24:52.245Z · LW(p) · GW(p)

There are serious restriction of range problems with the literature. I believe that there is one small French adoption study with unrestricted range which produced 1 sigma IQ difference between the bottom and top buckets (deciles?) of adopting families.

I wonder if this is what Shalizi alludes to when he says that IQ is closer to that of the adoptive parents than that of the biological parents.

Replies from: satt
comment by satt · 2014-08-12T02:02:10.281Z · LW(p) · GW(p)

I believe that there is one small French adoption study which produced 1 sigma IQ difference between the bottom and top buckets (deciles?) of adopting families.

(Both references describe the same study.) Capron & Duyme found 38 French children placed for adoption before age 2, 20 of them to parents with very high socioeconomic status (operationalized as having 14-23 years of education and working a profession) and 18 to parents with very low socioeconomic status (unskilled & semi-skilled labourers or farmers, with 5-8 years of education). When the kids took the WISC-R IQ test, those adopted into the high-SES families had a mean IQ of 111.6, while those in the low-SES families had a mean IQ of 100.0, for a difference of 0.77 sigma.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-08-14T00:51:57.550Z · LW(p) · GW(p)

Thanks!

comment by satt · 2014-08-06T23:35:58.466Z · LW(p) · GW(p)

So... does it mean that it's completely irrelevant who adopted Harry Potter, because the results would be the same anyway?

In the context of IQ I've seen it claimed that normal variation in parenting doesn't do much, but extreme abuse can still have a substantial effect. So parenting quality would only make a difference at the tails of the parenting quality distribution, but there it would make quite a difference.

comment by drethelin · 2014-08-06T23:18:18.546Z · LW(p) · GW(p)

In "No Two Alike" Harris argues that the biggest non-shared environment personality determinant is peer group. So Harry Potter style "Lock him up in a closet with no friends" would actually have a huge effect.

Replies from: None
comment by [deleted] · 2014-08-08T00:22:24.922Z · LW(p) · GW(p)

And it should be noted that parents do have control over peer group: where to live, public school vs. private school vs. homeschooling, getting children to join things, etc. So parenting still matters even if it's all down to genetics and non-shared environment.

Also, has anyone investigated whether the proper response to publicized social-science answers/theories/whatever you want to call them is to assume they're true or just wait for them to be rejected? That is: how many publicized social-science answers [the same question could be asked for diet-advice answers conflicting with pre-nutrition-studies received wisdom, etc.] were later rejected? It could well be that the right thing to do in general is stick with common sense...

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-08-08T12:38:30.082Z · LW(p) · GW(p)

And it should be noted that parents do have control over peer group: where to live, public school vs. private school vs. homeschooling, getting children to join things, etc.

Exactly! If you have something to protect as a parent, then after hearing "parents are unimportant, the important stuff is some non-genetic X" the obvious reaction is: "Okay, so how can I influence X?" (Instead of saying: "Okay, then it's not my fault, whatever.")

For example, if I want my children to be non-smokers, and I learn that whether I am smoking or not has much smaller impact than whether my children's friends are smoking... the obvious next question is: What can I do to increase the probability that my children's friends will be non-smokers? There are many indirect methods like choosing the place to live, choosing the school, choosing free-time activities, etc. I would just like to have more data on what smoking correlates with; where should I send my children and where should I prevent them from going, so that even if they "naturally" pick their peer group in that place, they will more likely pick non-smokers. (Replace non-smoking with whatever is your parenting goal.)

Shortly, when I read "parenting" in a study, I mentally translate it as: "what an average, non-strategic parent does". That's not the same as: "what a parent could do".

comment by solipsist · 2014-08-06T12:58:58.484Z · LW(p) · GW(p)

Fictional evidence, etc. Also, HPMOR has confounders, like a differing mechanism for Horcruxes.

comment by Azathoth123 · 2014-08-06T04:34:56.732Z · LW(p) · GW(p)

Except twin studies consistently estimate that the influence of 'shared environment' (the home) is small or near-zero for many traits compared to genetics and randomness/nonshared-environment.

As Protagoras points out here, there are systematic problems with twin studies.

Replies from: gwern
comment by gwern · 2014-08-06T16:27:06.272Z · LW(p) · GW(p)

There are problems, but I don't think they are large; I think they are brought up mostly for ideological reasons (Shalizi is not an unbiased source and has a very big axe to grind), and a lot of the problems also cut the other way. For example, measurement error can reduce estimates of heritability a great deal, as we see in twin studies which correct for it and, as predicted, get higher heritability estimates, like "Not by Twins Alone: Using the Extended Family Design to Investigate Genetic Influence on Political Beliefs", Hatemi et al 2010. (This study, incidentally, also addresses the claim -- made by people who dislike twin studies -- that twins have special environments compared to their non-twin siblings and that this will bias results; there's no a priori reason to think so, and Hatemi finds no evidence for it.)
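(On the measurement-error point: this is the standard attenuation effect. If the phenotype is measured with reliability ρ < 1 for both twins, the observed twin correlations shrink by roughly that factor, and a Falconer-style heritability estimate shrinks with them. A textbook sketch, not specific to Hatemi et al:)

$$
r_{\text{obs}} \approx \rho\, r_{\text{true}}
\quad\Rightarrow\quad
\hat h^2_{\text{obs}} = 2\,(r^{\text{obs}}_{MZ} - r^{\text{obs}}_{DZ}) \approx \rho\, \hat h^2_{\text{true}} \;<\; \hat h^2_{\text{true}}.
$$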

Replies from: gjm
comment by gjm · 2014-08-06T21:51:05.900Z · LW(p) · GW(p)

Shalizi is not an unbiased source and has a very big axe to grind

Do you mean more by this than that he has very strong opinions on this topic? I would guess you do -- that you mean there's something pushing him towards the opinions he has, that isn't the way it is because those opinions are right. But what?

Replies from: gwern, Barry_Cotter
comment by gwern · 2014-08-13T23:24:40.674Z · LW(p) · GW(p)

But what?

Shalizi is somewhere around Marxism in politics. This makes his writings on intelligence very frustrating, but on the other hand, it also means he can write very interesting things on economics at times - his essay on Red Plenty is the most interesting thing I've ever seen on economics & computational complexity. Horses for courses.

Replies from: gjm, gjm
comment by gjm · 2014-08-14T16:59:11.482Z · LW(p) · GW(p)

somewhere around Marxism in politics.

Shalizi states at least part of his position as follows:

"Market socialism" is a current of ideas [...] for how to make extensive use of markets without thereby creating gross economic and political inequality. [...] On the other hand, modern states are powerful enough as things stand; to turn the economy wholly over to them is a bad idea. To combine markets with socialism seems like an elegant and feasible solution, at least technically, and it's one which I support [...]

and on the same page says these things:

Incredible things were done in the name of [the political control of economic life], some of them noble and heroic (like resistance to Fascism, and the creation of democratic welfare states), others scarcely matched for wickedness (like Stalin's purges and deliberate famines) and stupidity (like Mao's Great Leap Forward and apparently unintentional but highly foreseeable famines).

and

The history of socialist movements is [...] bound up with the histories of organized labour, of economics and left-wing politics and general, and, less honourably, with that of revolutions and totalitarianism. [...] it had become clear [...] that they [sc. the Soviets] were far, far worse than capitalist democracies [...]

I have to say that none of this sounds very Marxist to me. Shalizi apparently finds revolutions dishonourable; the most notable attempts at (nominally) Marxist states, the USSR and the PRC, he criticizes in very strong terms; he wants most prices to be set by markets (at least this is how I interpret what he says on that page and others it links to).

Oh, here's another bit of evidence:

Sometime between [1956] and 1968 [...] he [sc. Kolakowski] stopped considering himself a Marxist, even a revisionist one [...] though still a socialist and (I think) an atheist.

followed in the next paragraph by

I think his views of socialism and Marxism are absolutely on-target

which seems to me to imply, in particular, that Shalizi doesn't consider himself "a Marxist, even a revisionist one".

He's certainly a leftist, certainly considers himself a socialist, but he seems quite some way from Marxism. (And further still from, e.g., any position taken by the USSR or the PRC.)

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-08-17T04:03:44.869Z · LW(p) · GW(p)

How about this?

[The ideas of the Frankfurt School] are very extreme examples of ways of thinking about society, both normatively and descriptively, for which I have very little sympathy, yet are closely affiliated to ideas I am receptive to. (E.g.: so far as I can see, they were all what Marxists would call "idealists", which is not a compliment, yet they claimed to be Marxists, even historical materialists!) My interest in them is thus interest in my notorious and embarrassing ideological cousins...

Not that I think pigeon-holing him is very useful for determining his views on economics or politics, let alone IQ.

Replies from: gjm
comment by gjm · 2014-08-17T22:25:26.530Z · LW(p) · GW(p)

Suggests that Marxism is an idea Shalizi is "receptive to" but not (at least to me) that he's actually a Marxist as such.

comment by gjm · 2014-08-14T17:11:04.654Z · LW(p) · GW(p)

This makes his writings on intelligence very frustrating

Does having political views that approximate Marxism imply irrationally-derived views on intelligence? I don't see why it should, but this may simply be a matter of ignorance or oversight on my part.

I am not an expert on Marx but would be unsurprised to hear that he made a bunch of claims that are ill-supported by evidence and have strong implications about intelligence -- say, that The Proletariat is in no way inferior in capabilities, even statistically, to The Bourgeoisie. But to me "somewhere around Marxism in politics" doesn't mean any kind of commitment to believing everything Marx wrote. It isn't obvious to me why someone couldn't hold pretty much any halfway-reasonable opinions about intelligence, while still thinking that it is morally preferable for workers to own the businesses they work for and the equipment they use, that we would collectively be better off with much much more redistribution of wealth than we currently have (or even with the outright abolition of individual property), etc.

In another comment I've given my reasons for doubting that Shalizi is even "somewhere around Marxism in politics". But even if I'm wrong about that, I'm not aware of prior commitments he has that would make him unable to think rationally about intelligence.

Of course it needn't be a matter of prior commitments as such. It could, e.g., be that he is immersed in generally-very-leftist thought (this being either a cause or a consequence of his own leftishness), and that since for whatever reason there's substantial correlation between being a leftist and having one set of views about intelligence rather than another, Shalizi has just absorbed a typically-leftist position on intelligence by osmosis. But, again, the fact that he could have doesn't mean he actually has.

I think the guts of what you're claiming is: Shalizi's views on intelligence are a consequence of his political views; either his political views are not arrived at rationally, or the way his political views have given rise to his views on intelligence are not rational, or both. -- That could well be true, but so far what you've given evidence for is simply that he holds one particular set of political views. How do you get from there to the stronger claim about the relationship between his views on the two topics?

Replies from: gwern
comment by gwern · 2014-08-17T01:59:11.503Z · LW(p) · GW(p)

I think the guts of what you're claiming is: Shalizi's views on intelligence are a consequence of his political views; either his political views are not arrived at rationally, or the way his political views have given rise to his views on intelligence are not rational, or both. -- That could well be true, but so far what you've given evidence for is simply that he holds one particular set of political views. How do you get from there to the stronger claim about the relationship between his views on the two topics?

At least part of it was reading his 'Statistical Myth' essay and being skeptical of the apparent argument for some of the reasons Dalliard would lay out at length years later; reading all the positive discussions of it by people I was unsure understood either psychometrics or Shalizi's essay (which he helpfully links); and then reading a followup dialogue http://vserver1.cscs.lsa.umich.edu/~crshalizi/weblog/495.html where - at least, this is how it reads to me - he carefully covers his ass, walks back his claims, and quietly concedes a lot of key points. At that point, I started to seriously wonder if Shalizi could be trusted on this topic. His constant invocation of Stephen Jay Gould (who should be infamous by this point) and his gullible swallowing of 'deliberate practice' as more important than any other factor (a claim which has since been pretty convincingly debunked), both on display in the dialogue, merely reinforce my impression; and the link to Gould (Shalizi's chief comment on Gould's Mismeasure of Man is apparently solely "I do not recommend this for the simple reason that I read it in 1988, when I was fourteen. I remember it as a very good book, for whatever that's worth."; no word on whether he is bothered by Gould's fraud) suggests it's partially ideological. Another revealing page: http://vserver1.cscs.lsa.umich.edu/~crshalizi/notebooks/iq.html I can understand disrecommending Rushton, but disrecommending Jensen, who invented a lot of the field and whom even his foes admire? Recommending a journalist from 1922? Recommending some priming bullshit? (Where's the fierce methodologist statistician when you need him...?) There's one consistent criterion he applies: if it's against IQ and anything to do with it, he recommends it, and if it's for it, he disrecommends it. Apparently only foes of it ever have any of the truth.

Replies from: gjm
comment by gjm · 2014-08-17T22:40:40.298Z · LW(p) · GW(p)

Informative. Thanks! Though I must admit that my reaction to the pages of Shalizi that you cite isn't the same as yours.

comment by Barry_Cotter · 2014-08-08T10:26:44.099Z · LW(p) · GW(p)

I believe his political views are somewhere between way to the left of the Democratic Party and socialism. He dislikes the entire field of intelligence research in psychology because it's ideologically inconvenient. He criticises anything that he can find to criticise about it. Think of him as Stephen Jay Gould, but much smarter and more honest.

Replies from: gjm
comment by gjm · 2014-08-08T18:24:00.301Z · LW(p) · GW(p)

See, this is a place where the US is different from Europe. Because over here (at least in the bit of Europe I'm in), being "somewhere to the right of socialism" isn't thought of as the kind of crazy extremism that ipso facto makes someone dangerously biased and axe-grindy.

Now, of course politics is what it is, and affiliation with even the most moderate and reasonable political position can make otherwise sensible people completely blind to what's obvious to others. So the fact that being almost (but not quite) a socialist looks to me like a perfectly normal and sensible position is perfectly compatible with Shalizi being made nuts by it. But to me "he's somewhere to the left of Barack Obama" doesn't look on its own like something that makes someone a biased source and explains what their problem is.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-08-09T01:19:15.474Z · LW(p) · GW(p)

Being an extremist by local standards may be more relevant than actual beliefs.

Replies from: gjm
comment by gjm · 2014-08-09T13:11:16.557Z · LW(p) · GW(p)

Yup, that's a good point. (Though it depends on what "local" means. I have the impression that academics in the US tend to be leftier than the population at large.)

Replies from: Barry_Cotter
comment by Barry_Cotter · 2014-08-10T07:21:47.023Z · LW(p) · GW(p)

Academia in the US is much leftier than the population at large. I believe it was Jonathan Haidt who went looking for examples of social conservatives in his field, and people kept nominating Philip Tetlock, who would not describe himself thus. At a conference Dr. Haidt asked for a show of hands for various political positions; Republicans were substantially less popular than Communists. Psychology is about as left wing as sociology, and disciplines vary, but academia is a great deal to the left of the US general population.

comment by satt · 2014-08-05T22:25:42.834Z · LW(p) · GW(p)

Most new drugs fail clinical trials.

I'd generalize that to something like

  • collecting published results in medicine, psychology, epidemiology & economics journals gives an unbiased idea of the sizes of the effects they report

which is wrong at least twice over (publication bias and correlation-causation confusion) but is, I suspect, an implicit assumption made by lots of people who only made it to the first stage of traditional rationality (and reason along the lines of "normal people are full of crap, scientists are smarter and do SCIENCE!, so all I need to do to be correct is regurgitate what I find in scientific journals").

Replies from: ChristianKl
comment by ChristianKl · 2014-08-06T09:32:40.693Z · LW(p) · GW(p)

I'd generalize that to something like X which is wrong at least twice over

Then don't.

My point is more that if you only have theory and no empirical evidence, then it's likely that you are wrong. That doesn't mean that having a bit of empirical evidence automatically means that you are right.

I also would put more emphasis on having empirical feedback loops than on scientific publications. Publications are just one way of getting feedback. There is a lot to be learned about psychology by really paying attention to the other people with whom you interact.

If I interact with a person who has a phobia of spiders and solve the issue, and afterwards put a spider on his arm and the person doesn't freak out, I have my empirical feedback. I don't need a paper to tell me that the person doesn't have a phobia anymore.

Replies from: satt
comment by satt · 2014-08-06T23:10:26.754Z · LW(p) · GW(p)

Then don't.

My point is more that if you only have theory and no empirical evidence, then it's likely that you are wrong. That doesn't mean that having a bit of empirical evidence automatically means that you are right.

Yes, I agree. To clarify, I was neither condoning the belief in my bullet point, nor accusing you of believing it. I just wanted to tip my hat to you for inspiring my example with yours.

Replies from: ChristianKl
comment by ChristianKl · 2014-08-06T23:18:05.557Z · LW(p) · GW(p)

Ah, okay.

comment by [deleted] · 2014-08-05T18:49:40.078Z · LW(p) · GW(p)

If a plane is on a conveyor belt going at the same speed in the opposite direction, will it take off?

I remember reading this in other places (which I don't remember now), and it seems to inspire furious arguments despite being non-political and not very controversial.

Replies from: NancyLebovitz, shminux, army1987, tut
comment by NancyLebovitz · 2014-08-06T15:20:37.657Z · LW(p) · GW(p)

That reminds me of the question of whether hot water freezes faster than cold water.

comment by shminux · 2014-08-05T19:26:10.251Z · LW(p) · GW(p)

That's a great example. If I recall, people who get worked up about it generally feel that the answer is obvious and the other side is stupid for not understanding the argument.

comment by A1987dM (army1987) · 2014-08-06T17:44:57.601Z · LW(p) · GW(p)

Same speed with respect to what? This sounds kind of like the tree-in-a-forest one.

Replies from: satt
comment by satt · 2014-08-06T23:26:38.545Z · LW(p) · GW(p)

As I remember the problem, the plane's wheels are supposed to be frictionless so that their rotation is uncoupled from the rest of the plane's motion. Hence the speed of the conveyor belt is irrelevant and the plane always takes off. Now, if you had a helicopter on a turntable...

Replies from: army1987, tut
comment by A1987dM (army1987) · 2014-08-09T08:42:54.470Z · LW(p) · GW(p)

What I mean is, on hearing that I thought of a conveyor belt whose top surface was moving at a speed -x with respect to the air, and a plane on top of it moving at a speed x with respect to the top of the conveyor belt, i.e. the plane was stationary with respect to the air. But on reading the Snopes link what was actually meant was that the conveyor belt was moving at speed -x and the plane's engines were working as hard as needed to move at speed x on stationary ground with no wind.

comment by tut · 2014-08-07T16:04:47.349Z · LW(p) · GW(p)

While at the same time the rolling speed of the plane, which is the sum of its forward movement and the speed of the treadmill, is supposed to be equal to the speed of the treadmill. Which is impossible if the plane moves forward.

Replies from: satt
comment by satt · 2014-08-08T03:02:22.523Z · LW(p) · GW(p)

I'm not sure what you mean by "rolling speed of the plane", "it's forward movement", and "speed of the treadmill". The phrase "rolling speed" sounds like it refers to the component of the plane's forward motion due to the turning of its wheels, but that's not a coherent thing to talk about if one accepts my assumption that the wheels are uncoupled from the plane.

Replies from: tut
comment by tut · 2014-08-08T07:36:17.229Z · LW(p) · GW(p)

Rolling speed = how fast the wheels turn, described in terms of forward speed. So it's the circumference of the wheels multiplied by their angular speed. And the wheels are not uncoupled from the plane; they are driven by the plane. It was only assumed that the friction in the wheel bearings is irrelevant.

Forward movement of the plane = speed of the plane relative to something not on the treadmill. I guess I should have called it airspeed, which it would be if there is no wind.

Speed of the treadmill = how fast the surface of the treadmill moves.

And that is more time than I wanted to spend rehashing this old nonsense. The grandparent was only meant to explain why the great grandparent would not have settled the issue, not to settle it on its own. The only further comment I have is that the whole thing is based on an unrealistic setup, which becomes incoherent if you assume that it is about real planes and real treadmills.

Replies from: satt
comment by satt · 2014-08-09T16:43:17.093Z · LW(p) · GW(p)

And that is more time than I wanted to spend rehashing this old nonsense.

Fair enough. I have to chip in with one last comment, but you'll be happy to hear it's a self-correction! My comments don't account for potential translational motion of the wheels, and they should've done. (The translational motion could matter if one assumes the wheels experience friction with the belt, even if there's no internal wheel bearing friction.)

comment by tut · 2014-08-06T06:41:24.817Z · LW(p) · GW(p)

That's different though. The Plane on a Treadmill started with somebody specifying some physically impossible conditions, and then the furious arguments were between people stating the implications of the stated conditions on one side and people talking about the real world on the other.

comment by solipsist · 2014-08-04T19:36:43.909Z · LW(p) · GW(p)

If your twin's going away for 20 years to fly around space at close to the speed of light, they'll be 20 years older when they come back.

A spinning gyroscope, when pushed, will react in a way that makes sense.

If another nation can't do anything as well as your nation, there is no self-serving reason to trade with them.

You shouldn't bother switching in the Monty Hall problem.

The sun moves across the sky because it's moving.

EDIT Corrected all statements to be false

Replies from: gjm, gjm
comment by gjm · 2014-08-04T21:23:40.542Z · LW(p) · GW(p)

Open trade [...]

I think you may have expressed this one the wrong way around; the way you've phrased it ("can make you better off") is the surprising truth, not the surprising untruth.

comment by gjm · 2014-08-04T21:22:53.650Z · LW(p) · GW(p)

If your twin flies through space for 20 years at close to the speed of light, they'll be 20 years older when they come back.

They will. I think you mean: If your twin flies through space at close to the speed of light and arrives back 20 years later, they'll be 20 years older when they come back. That one's false.

Replies from: solipsist
comment by solipsist · 2014-08-04T21:39:35.062Z · LW(p) · GW(p)

Reversed polarity on a few statements. Thanks.

Replies from: Gurkenglas
comment by Gurkenglas · 2014-08-05T06:20:16.509Z · LW(p) · GW(p)

Your first statement is still correct.

Replies from: gjm, solipsist
comment by gjm · 2014-08-05T10:02:54.275Z · LW(p) · GW(p)

To be more explicit: What is needed to make the statement interestingly wrong is for the two 20-year figures to be in different reference frames. If your twin does something for 20 years, then they will be 20 years older; but if they do something for what you experience as 20 years they may not be.
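
For concreteness, a rough worked example (a sketch, ignoring the acceleration phases): over an Earth-frame duration T at speed v, the traveller ages by the proper time

\[ \tau = T\sqrt{1 - v^2/c^2}, \]

so for T = 20 years at v = 0.99c, τ ≈ 20 × 0.14 ≈ 2.8 years: gone for what Earth clocks measure as 20 years, but only about three years older on return.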

Replies from: solipsist
comment by solipsist · 2014-08-05T11:29:43.397Z · LW(p) · GW(p)

Edited to more firmly attach "for 20 years" to the earth.

comment by solipsist · 2014-08-05T16:45:00.439Z · LW(p) · GW(p)

Rephrased to more explicitly place "for 20 years" in the earth's reference frame.

comment by Lumifer · 2014-08-04T15:02:44.337Z · LW(p) · GW(p)

beliefs that are incontrovertibly incorrect, but which intelligent people will nonetheless arrive at quite reasonably through armchair-theorising?

Would wrong scientific theories qualify? E.g. phlogiston or aether.

comment by Manfred · 2014-08-04T23:17:58.412Z · LW(p) · GW(p)

Downwind faster than the wind. See seven pages of posts here for examples of people getting it wrong.

Kant was famously wrong when he claimed that space had to be flat.

Replies from: None
comment by [deleted] · 2014-08-05T14:00:11.915Z · LW(p) · GW(p)

Kant was famously wrong when he claimed that space had to be flat.

As discussed previously, this exact claim seems suspiciously absent from the first Critique.

Replies from: Manfred
comment by Manfred · 2014-08-06T00:10:23.982Z · LW(p) · GW(p)

Take, for example, the proposition: "Two straight lines cannot enclose a space, and with these alone no figure is possible," and try to deduce it from the conception of a straight line and the number two; or take the proposition: "It is possible to construct a figure with three straight lines," and endeavour, in like manner, to deduce it from the mere conception of a straight line and the number three. All your endeavours are in vain, and you find yourself forced to have recourse to intuition, as, in fact, geometry always does.

Geometry, nevertheless, advances steadily and securely in the province of pure a priori cognitions, without needing to ask from philosophy any certificate as to the pure and legitimate origin of its fundamental conception of space.

I agree that Kant doesn't seem to have ever considered non-Euclidean geometry, and thus can't really be said to be making an argument that space is flat. If we could drop an explanation of general relativity on him, he'd probably come to terms with it. On the other hand, he just assumes that two straight lines can only intersect once, and that this describes space, which seems to be pretty much what he was accused of.

Replies from: None
comment by [deleted] · 2014-08-06T10:46:19.935Z · LW(p) · GW(p)

On the other hand, he just assumes that two straight lines can only intersect once, and that this describes space,

I don't see this in the quoted passage. He's trying to illustrate the nature of propositions in geometry, and doesn't appear to be arguing that the parallel postulate is universally true. "Take, for example," is not exactly assertive.

Also, have a care: those two paragraphs are not consecutive in the Critique.

comment by philh · 2014-08-06T15:04:38.657Z · LW(p) · GW(p)

This isn't very interesting, but I used to believe that the rules about checkmate didn't really change the nature of chess. Some of the forbidden moves - moving into check, or failing to move out if possible - are always a mistake, so if you just played until someone captured the king, the game would only be different in cases where someone made an obvious mistake.

But if you can't move, the game ends in stalemate. So forbidding you to move into check means that some games end in draws, where capture-the-king would have a victor.

(This is still armchair theorising on my part.)

comment by falenas108 · 2014-08-04T14:22:22.345Z · LW(p) · GW(p)

Does it have to be something from the modern day? Because there are tons of historical examples.

Replies from: Jiro, sixes_and_sevens
comment by Jiro · 2014-08-04T18:35:05.035Z · LW(p) · GW(p)

There are many beliefs that people will arrive at through armchair theorizing, but only until they are corrected. If you came up with the idea that the Earth was flat a long time ago, nobody would correct you. If you did that today, someone would correct you; indeed, society is so full of round-Earth information that it's hard for anyone to not have heard of the refutation before coming up with the idea, unless they're a young child.

Does that count as something arrived at through armchair theorizing? People would, after all, come up with it by armchair theorizing if they lived in a vacuum. They did come up with it through armchair theorizing back when they did live in a vacuum.

That's why there are tons of historical examples and not so many modern examples. A modern example has to be something where the refutation is well known by experts, but the refutation hasn't made it down to the common person, because if the refutation did make it down to the common person that would inhibit them from coming up with the armchair theory in the first place.

(For historical examples,

  1. It's possible that the refutation is known by our experts, but was not known by experts of the time, or
  2. because of the poor state of mass communication in ancient times, the refutation simply hadn't spread enough to reach most armchair theorists.)
comment by sixes_and_sevens · 2014-08-04T14:38:39.125Z · LW(p) · GW(p)

Something from the modern day, yes. The people arriving at the naive belief, and the people with the ability to demonstrate that it is incorrect, should coexist.

Replies from: falenas108, polymathwannabe
comment by falenas108 · 2014-08-04T14:49:00.929Z · LW(p) · GW(p)

Sorry to keep going on about this, but would a historical example work: a group of intelligent people arriving at a naive belief, even though there was plenty of evidence available at the time that it was a naive belief?

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2014-08-04T15:06:39.037Z · LW(p) · GW(p)

Possibly, yes. I'd love to hear whatever you've got in mind.

comment by polymathwannabe · 2014-08-04T15:15:50.825Z · LW(p) · GW(p)

The Conservative obsession with a non-existent link between abortion and breast cancer.

Replies from: Larks
comment by Larks · 2014-08-04T23:27:43.319Z · LW(p) · GW(p)

That hardly satisfies any of the desiderata! It's political, controversial, and it's hard to see how armchair reasoning would lead you to believe it.

comment by pragmatist · 2014-08-08T06:02:49.622Z · LW(p) · GW(p)

Bell's spaceship paradox.

According to Bell, he surveyed his colleagues at CERN (clearly a group of intelligent, qualified people) about this question, and most of them got it wrong. Although, to be fair, the conflict here is not between expert reasoning and domain knowledge, since the physicists at CERN presumably possessed all the knowledge you need (basic special relativity, really) to get the right answer.

comment by philh · 2014-08-28T21:31:24.437Z · LW(p) · GW(p)

When I was ~16, I came up with group selection to explain traits like altruism.

comment by KnaveOfAllTrades · 2014-08-07T00:43:46.937Z · LW(p) · GW(p)

Generalising from 'plane on a treadmill': a lot of incorrect answers to physics problems and misconceptions of physics in general. For any given problem or phenomenon, one can guess a hundred different fake explanations, numbers, or outcomes using different combinations of passwords like 'because of Newton's Nth law', 'because of drag', 'because of air resistance', 'but this is unphysical so it must be false', etc. For the vast majority of people, the only way to narrow down which explanations could be correct is to already know the answer or perform physical experiments, since most people don't have a good enough physical intuition to know in advance what types of physical arguments go through, so they should be in a state of epistemic learned helplessness with respect to physics.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2014-08-07T09:53:21.106Z · LW(p) · GW(p)

I have a strange request. Without consulting some external source, can you please briefly define "learned helplessness" as you've used it in this context, and (privately, if you like) share it with me? I promise I'll explain at some later date.

Replies from: KnaveOfAllTrades
comment by KnaveOfAllTrades · 2014-08-07T10:38:03.882Z · LW(p) · GW(p)

There will probably be holes and it won't quite capture exactly what I mean, but I'll take a shot. Let me know if this is not rigorous or detailed enough and I'll take another stab, or if you have any other follow-up. I have answered this immediately, without changing tabs, so the only contamination is saccading my LW inbox before clicking through to your comment, the titles of other tabs, etc., which look (as one would expect) to be irrelevant.

Helplessness about topic X - One is not able to attain a knowably stable and confident opinion about X given the amount of effort one is prepared to put in or the limits of one's knowledge or expertise etc. One's lack of knowledge of X includes lack of knowledge about the kinds of arguments or methods that tend to work in X, lack of experience spotting crackpot or amateur claims about X, and lack of general knowledge of X that would allow one to notice one's confusion at false basic claims and reject them. One is unable to distinguish between ballsy amateurs and experts.

Learned helplessness about X - The helplessness is learned from experience of X; much like the sheep in Animal Farm, one gets opinion whiplash on some matter of X that makes one realise that one knows so little about X that one can be argued into any opinion about it.

(This has ended up more like a bunch of arbitrary properties pointing to the sense of learned helplessness rather than a slick definition. Is it suitable for your purposes, or should I try harder to cut to the essence?)

Rant about learned helplessness in physics: Puzzles in physics, or challenges to predict the outcome of a situation or experiment, often seem like they have many different possible explanations leading to a variety of very different answers, with the merit of these explanations not being distinguishable except to those who have done lots of physics and seen lots of tricks, and even then maybe you just need to already know the answer before you can pick the correct explanation.

Moreover, one eventually learns that the explanations at a given level of physics instruction are probably technically wrong in that they are simplified (though I guess less so as one progresses).

Moreover moreover, one eventually becomes smart enough to see that the instructors do not actually even spot their leaps in logic. (For example, it never seemed to occur to any of my instructors that there's no reason you can't have negative wavenumbers when looking at wavefunctions in basic quantum. It turns out that when I run the numbers, everything rescales, since the wavefunction for -n maps onto the one for n and one normalizes the wavefunction anyway, so that it doesn't matter; but one could only know this for sure after reasoning it out and justifying discarding the negative wavenumbers. It basically seemed like the instructors saw an 'n' in sin(n*pi*x/L) or whatever and their brain took it as a natural number, without any cognitive reflection that the letter could just as easily have been a k or z or something, or any check that the notation was justified by the referent having to be a natural.)
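
To spell out that rescaling in the simplest case (the one-dimensional infinite square well; a sketch of the check, not something any instructor presented): the normalized solutions are

\[ \psi_n(x) = \sqrt{\tfrac{2}{L}}\,\sin\!\left(\frac{n\pi x}{L}\right), \qquad \psi_{-n}(x) = -\,\psi_n(x), \]

so a negative wavenumber gives the same state up to an overall sign (a global phase), n = 0 gives the zero function and cannot be normalized, and nothing physical is lost by keeping only n = 1, 2, 3, ...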

Moreover, it takes a high level of philosophical ability to reason about physics thought experiments and their standards of proof. Take the 'directly downwind faster than the wind' problem. The argument goes back and forth, and, like the sheep, at every point the side that's speaking seems to be winning. Terry Tao comes along and says it's possible, and people link to videos of carts with propellers apparently going downwind faster than the wind and wheels with rubber bands attached allegedly proving it. But beyond deferring to his general hard-sciences problem-solving ability, one has no inside-view way to verify Tao's solution; what are the standards of proof for a thought experiment? After all, maybe the contraptions in the video only work (assuming they do work as claimed, which isn't assured) because of slight side-to-side effects rather than directly downwind motion, or some other property of the test conditions implicitly forbidden by the thought experiment.

Since any physical experiment for a physics thought experiment will have additional variables, one needs some way to distinguish relevant and irrelevant variables. Is the thought experiment the limit as extraneous variables become negligible, or is there a discontinuity? What if different sets of variables give rise to different limits? How does anyone ever know what the 'correct' answer is to an idealised physics thought experiment of a situation that never actually arises? Etc.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2014-08-07T12:22:30.750Z · LW(p) · GW(p)

Thanks for that. The whole response is interesting.

I ask because up until quite recently I was labouring under a wonky definition of "learned helplessness" that revolved around strategic self-handicapping.

An example would be people who foster a characteristic of technical incompetence, to the point where they refuse to click next-next-finish on a noddy software installer. Every time they exhibit their technical incompetence, they're reinforced in this behaviour by someone taking the "hard" task away from them. Hence their "helplessness" is "learned".

It wasn't until recently that I came across an accurate definition in a book on reinforcement training. I'm pretty sure I've had "learned helplessness" in my lexicon for over a decade, and I've never seen it used in a context that challenged my definition, or used it in a way that aroused suspicion. It's worth noting that I probably picked up my definition through observing feminist discussions. Trying a mental find-and-replace on ten years' conversations is kind of weird.

I am also now bereft of a term for what I thought "learned helplessness" was. Analogous ideas come up in game theory, but there's no snappy self-contained way available to me for expressing it.

Replies from: KnaveOfAllTrades, satt
comment by KnaveOfAllTrades · 2014-08-07T12:41:42.323Z · LW(p) · GW(p)

Good chance you've seen both of these before, but:

http://en.wikipedia.org/wiki/Learned_helplessness and http://squid314.livejournal.com/350090.html

I am also now bereft of a term for what I thought "learned helplessness" was. Analogous ideas come up in game theory, but there's no snappy self-contained way available to me for expressing it.

Damn, if only someone had created a thread for that, ho ho ho

Strategic incompetence?

I'm not sure if maybe Schelling uses a specific name (self-sabotage?) for that kind of thing?

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2014-08-07T13:35:05.323Z · LW(p) · GW(p)

Schelling does talk about strategic self-sabotage, but it captures a lot of deliberated behaviour that isn't implied in my fake definition.

Also interesting to note, I have read that Epistemic Learned Helplessness blog entry before, and my fake definition is sufficiently consistent with it that it doesn't stand out as obviously incorrect.

Replies from: satt
comment by satt · 2014-08-12T00:25:53.422Z · LW(p) · GW(p)

Also interesting to note, I have read that Epistemic Learned Helplessness blog entry before, and my fake definition is sufficiently consistent with it that it doesn't stand out as obviously incorrect.

Now picturing a Venn diagram with three overlapping circles labelled "epistemic learned helplessness", "what psychologists call 'learned helplessness'", and "what sixes_and_sevens calls 'learned helplessness'"!

comment by satt · 2014-08-12T00:12:17.240Z · LW(p) · GW(p)

An example would be people who foster a characteristic of technical incompetence, to the point where they refuse to click next-next-finish on a noddy software installer. Every time they exhibit their technical incompetence, they're reinforced in this behaviour by someone taking the "hard" task away from them. Hence their "helplessness" is "learned".

Making up a term for this..."reinforced helplessness"? (I dunno whether it'd generalize to cover the rest of what you formerly meant by "learned helplessness".)

comment by [deleted] · 2014-08-06T01:00:40.947Z · LW(p) · GW(p)

The sun revolves around the earth.

Replies from: gwern
comment by gwern · 2014-08-06T02:18:08.473Z · LW(p) · GW(p)

The earth revolving around the sun was also armchair reasoning, and refuted by empirical data like the lack of observable parallax of stars. Geocentrism is a pretty interesting historical example because of this: the Greeks reached the wrong conclusion with right arguments. Another example in the opposite direction: the Atomists were right about matter basically being divided up into very tiny discrete units moving in a void, but could you really say any of their armchair arguments about that were right?

Replies from: Douglas_Knight, ChristianKl
comment by Douglas_Knight · 2014-08-09T01:37:56.446Z · LW(p) · GW(p)

It is not clear that the Greeks rejected heliocentrism at all, let alone for any reason other than heresy. On the contrary, Hipparchus refused to choose, on the grounds of Galilean relativity.

The atomists got the atomic theory from the Brownian motion of dust in a beam of light, the same way that Einstein convinced the final holdouts thousands of years later.

Replies from: gwern
comment by gwern · 2014-08-13T23:32:34.845Z · LW(p) · GW(p)

It is not clear that the Greeks rejected heliocentrism at all, let alone for any reason other than heresy. On the contrary, Hipparchus refused to choose, on the grounds of Galilean relativity.

Eh? I was under the impression that most of the Greeks accepted geocentrism, eg Aristotle. Double-checking https://en.wikipedia.org/wiki/Heliocentrism#Greek_and_Hellenistic_world and https://en.wikipedia.org/wiki/Ancient_Greek_astronomy I don't see any support for your claim that heliocentrism was a respectable position and geocentrism wasn't overwhelmingly dominant.

The atomists got the atomic theory from the Brownian motion of dust in a beam of light.

Cite? I don't recall anything like that in the fragments of the Pre-socratics, whereas Eleatic arguments about Being are prominent.

Replies from: Douglas_Knight, Douglas_Knight
comment by Douglas_Knight · 2014-08-14T00:51:25.452Z · LW(p) · GW(p)

Lucretius talks about the motion of dust in light, but he doesn't claim that it is the origin of the theory. When I google "Leucippus dust light" I get lots of people making my claim and more respectable sources making weaker claims, like "According to traditional accounts the philosophical idea of simulacra is linked to Leucippus’ contemplation of a ray of light that made visible airborne dust," but I don't see any citations to where this tradition is recorded.

comment by Douglas_Knight · 2014-08-14T00:24:13.268Z · LW(p) · GW(p)

The Greeks cover hundreds of years. They made progress! You linked to a post about the supposed rejection of Aristarchus's heliocentric theory. It's true that no one before Aristarchus was heliocentric. That includes Aristotle, who died when Aristarchus was 12. Everyone agrees that the Hellenistic Greeks who followed Aristotle were much better at astronomy than the Classical Greeks. The question is whether the Hellenistic Greeks accepted Aristarchus's theory, particularly Archimedes, Apollonius, and Hipparchus. But while lots of writings of Aristotle remain, practically nothing of the later astronomers remains.

It's true that secondary sources agree that Archimedes, Apollonius, and Hipparchus were geocentric. However, they give no evidence for this. Try the scholarly article cited in the post you linked. It's called "The Greek Heliocentric Theory and Its Abandonment" but it didn't convince me that there was an abandonment. That's where I got the claim about Hipparchus refusing to choose.

I didn't claim that there was any evidence that it was respectable, let alone dominant, only that there was no evidence that it was rejected. The only solid evidence one way or the other is the only surviving Hellenistic astronomy paper, Archimedes's Sandreckoner, which uses Aristarchus's model. I don't claim that Archimedes was heliocentric, but that sure sounds to me like he respected heliocentrism.

Maybe heliocentrism survived a century and was finally rejected by Hipparchus. That's a world of difference from saying that Seleucus was his only follower. Or maybe it was just the two of them, but we live in a state of profound ignorance.

As for the ultimate trajectory of Greek science, that is a difficult problem. Lucio Russo suggests that Roman science is all mangled Greek science and proposes to extract the original. For example, Seneca claims that the retrograde motion of the planets is an illusion, which sounds like he's quoting someone who thinks the Earth moves, even if he doesn't. More colorful are Pliny and Vitruvius, who claim that the retrograde motion of the planets is due to the sun shooting triangles at them. This is clearly a heliocausal theory, even if the authors claim to be geocentric. Less clear is Russo's interpretation, that this is a description of a textbook diagram that they don't understand.

Replies from: gwern
comment by gwern · 2014-08-17T00:47:39.526Z · LW(p) · GW(p)

So, you just have an argument from silence that heliocentrism was not clearly rejected?

I didn't claim that there was any evidence that it was respectable, let alone dominant, only that there was no evidence that it was rejected. The only solid evidence one way or the other is the only surviving Hellenistic astronomy paper, Archimedes's Sandreckoner, which uses Aristarchus's model. I don't claim that Archimedes was heliocentric, but that sure sounds to me like he respected heliocentrism.

I just read through the bits of Sand Reckoner referring to Aristarchus (Mendell's translation), and throughout Archimedes seems to be at pains to distance himself from Aristarchus's model, treating it as a minority view (emphasis added):

You grasp [King Gelon, the recipient of Archimedes's letter The Sand Reckoner] that the world is called by most astronomers the sphere whose center is the center of the earth and whose line from the center is equal to the straight-line between the center of the sun and the center of the earth, since you have heard these things in the proofs written by the astronomers. But Aristarchus of Samos produced writings of certain hypotheses in which it follows from the suppositions that the world is many times what is now claimed.

Not language which suggests he takes it particularly seriously, much less endorses it.

In fact, it seems that the only reason Archimedes brings up Aristarchus at all is as a form of 'worst-case analysis': some fools doubt the power of mathematics and numbers, but Archimedes will show that even under the most ludicrously inflated estimate of the size of the universe (one implied by Aristarchus's heliocentric model), he can still calculate & count the number of grains of sands it would take to fill it up; hence, he can certainly calculate & count the number for something smaller like the Earth. From the same chapter:

[1] Some people believe, King Gelon, that the number of sand is infinite in multitude. I mean not only of the sand in Syracuse and the rest of Sicily, but also of the sand in the whole inhabited land as well as the uninhabited. There are some who do not suppose that it is infinite, and yet that there is no number that has been named which is so large as to exceed its multitude.

[2] It is clear that if those who hold this opinion should conceive of a volume composed of the sand as large as would be the volume of the earth when all the seas in it and hollows of the earth were filled up in height equal to the highest mountains, they would not know, many times over, any number that can be expressed exceeding the number of it.

[3] I will attempt to prove to you through geometrical demonstrations, which you will follow, that some of the numbers named by us and published in the writings addressed to Zeuxippus exceed not only the number of sand having a magnitude equal to the earth filled up, just as we said, but also the number of the sand having magnitude equal to the world.

...[7] In fact we say that even if a sphere of sand were to become as large in magnitude as Aristarchus supposes the sphere of the fixed stars to be, we will also prove that some of the initial numbers having an expression (or: "numbers named in the Principles," cf. Heath, Archimedes, 222, and Dijksterhuis, Archimedes, 363) exceed in multitude the number of sand having a magnitude equal to the mentioned sphere, when the following are supposed.

And he triumphantly concludes in ch4:

[18] ... Thus, it is obvious that the multitude of sand having a magnitude equal to the sphere of the fixed stars which Aristarchus supposes is smaller than 1000 myriads of the eighth numbers.

[19] King Gelon, to the many who have not also had a share of mathematics I suppose that these will not appear readily believable, but to those who have partaken of them and have thought deeply about the distances and sizes of the earth and sun and moon and the whole world this will be believable on the basis of demonstration. Hence, I thought that it is not inappropriate for you too to contemplate these things.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-08-17T03:42:54.986Z · LW(p) · GW(p)

All I have ever said is that you should stop telling fairy tales about why the Greeks rejected heliocentrism. If the Sandreckoner convinces you that Archimedes rejected heliocentrism, fine, whatever, but it sure doesn't talk about parallax.

I listed several pieces of positive evidence, but I'm not interested in the argument.

Replies from: gwern
comment by gwern · 2014-08-17T16:03:48.279Z · LW(p) · GW(p)

If the Sandreckoner convinces you that Archimedes rejected heliocentrism, fine, whatever, but it sure doesn't talk about parallax.

The Sand Reckoner implies the parallax objection when it uses an extremely large heliocentric universe! Lack of parallax is the only reason for such extravagance. Or was there some other reason Aristarchus's model had to imply a universe lightyears in extent...?

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-08-17T16:47:09.351Z · LW(p) · GW(p)

Aristarchus using a large universe is evidence that he thought about parallax. It is not evidence that his opponents thought about parallax.

You are making a circular argument: you say that the Greeks rejected heliocentrism for a good reason because they invoked parallax, but you say that they invoked parallax because you assume that they had a good reason.

There is a contemporary recorded reason for rejecting Aristarchus: heresy. There is also a (good) reason recorded by Ptolemy 400 years later, namely wind speed.

Replies from: gwern
comment by gwern · 2014-08-17T16:51:59.050Z · LW(p) · GW(p)

Aristarchus using a large universe is evidence that he thought about parallax. It is not evidence that his opponents thought about parallax.

Uh... why would the creator of the system consider parallax an issue, and the critics not consider parallax an issue?

And you still haven't addressed my quotes from The Sand Reckoner indicating Archimedes considered heliocentrism dubious and a minority view, which should override your arguments from silence.

You are making a circular argument: you say that the Greeks rejected heliocentrism for a good reason because they invoked parallax, but you say that they invoked parallax because you assume that they had a good reason.

No. I said parallax is why they rejected it, in part because to save the model one has to make the universe large; then you said 'look! Archimedes uses a large universe!', and I pointed out this is 100% predicted by the parallax-rejection theory. So what? Where is your alternate explanation of the large universe - did Archimedes just make shit up?

There is a contemporary recorded reason for rejecting Aristarchus: heresy. There is also a (good) reason recorded by Ptolemy 400 years later, namely wind speed.

And how do these lead to a large universe...?

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-08-17T22:04:42.304Z · LW(p) · GW(p)

Uh... why would the creator of the system consider parallax an issue, and the critics not consider parallax an issue?

The very question is whether the critics made good arguments. You are assuming the conclusion.
People make stupid arguments all the time. Anaxagoras was prosecuted for heresy and Aristarchus may have been. How many critics of Copernicus knew that he was talking about what happens over the course of a year, not what happens over the course of a day?

Yes, Archimedes says that Aristarchus's position is a minority. Not dubious. I do not see that in the quotes at all. Yes, Archimedes probably uses Aristarchus's position for the purposes of worst-case analysis to get numbers as large as possible; indeed, they are larger than the numbers Ptolemy attributes to Aristarchus. As I said at the beginning, I do not claim that he endorsed heliocentrism, only that he considered it a live hypothesis.
One mystery is what is the purpose of the Sandreckoner. Is it just about large numbers? Or is it also about astronomy? Is Archimedes using exotic astronomy to justify his interest in exotic mathematics? Or is he using his public venue to promote diversity in astronomy?

Replies from: gwern
comment by gwern · 2014-08-17T22:17:55.353Z · LW(p) · GW(p)

The very question is whether the critics made good arguments. You are assuming the conclusion.

It's assuming the conclusion to think critics agreed with Aristarchus's criticism of a naive heliocentric theory?

Yes, Archimedes says that Aristarchus's position is a minority. Not dubious. I do not see that in the quotes at all.

I disagree strongly. I don't see how you could possibly read the parts I quoted, and italicized, and conclude otherwise. Like, how do you do that? How do you read those bits and read it as anything else? What exactly is going through your head when you read those bits from Sand Reckoner, how do you parse it?

One mystery is what is the purpose of the Sandreckoner. Is it just about large numbers? Or is it also about astronomy? Is Archimedes using exotic astronomy to justify his interest in exotic mathematics? Or is he using his public venue to promote diversity in astronomy?

Gee, if only I had quoted the opening and ending bits of the Sand Reckoner where Archimedes explained his goal...

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-08-18T01:00:23.284Z · LW(p) · GW(p)

Many people objected to Copernicus on the grounds that Joshua made the Sun stand still, or on grounds of wind, without seeming to realize that they were objecting to the daily rotation of the Earth, not to his special suggestion of the yearly revolution of the Earth about the Sun.
If Copernicus had such lousy critics, why assume Aristarchus had good critics who were aware of his arguments? Maybe they objected to heresy, like (maybe) Cleanthes.
Archimedes was a smart guy who understood what Aristarchus was saying. He seems to accept Aristarchus's argument that heliocentrism implies a large universe. If (if!) he rejects the premise, that does not tell us why. Maybe because he rejects the conclusion. Or maybe he rejects the premise for completely different consequences, like wind. Or maybe he is not convinced by Aristarchus's main argument (whatever that was) and doesn't even bother to move on to the consequences.

Ptolemy does give a reason: he says wind. He has the drawback of being hundreds of years late, so maybe he is not representative, but at least he gives a reason. If you extract any reason, that is the one to pick.

The principal purpose of the Sandreckoner is to investigate infinity, to eliminate the realm of un-nameable numbers, thus to eliminate the confusion between un-nameably large and infinite. But there are many other choices that go into the contents, and they may be motivated by secondary purposes. Physical examples are good. Probably sand is a cliche. But why talk about astronomy at all? Why not stop at all the sand in the world? Or fill the sphere of the sun with sand, stopping at Aristarchus's non-controversial calculation of that distance? Such choices are rarely explained. I offered two possibilities and the text does not distinguish them.

Replies from: gwern
comment by gwern · 2014-08-18T01:50:41.802Z · LW(p) · GW(p)

If Copernicus had such lousy critics, why assume Aristarchus had good critics who were aware of his arguments? Maybe they objected to heresy, like (maybe) Cleanthes.

You have not explained why Aristarchus would make his universe so large if the criticisms were as bogus as those of some of Copernicus's critics. Shits and giggles?

If (if!) he rejects the premise, that does not tell us why. Maybe because he rejects the conclusion. Or maybe he rejects the premise for completely different consequences, like wind. Or maybe he is not convinced by Aristarchus's main argument (whatever that was) and doesn't even bother to move on to the consequences.

If he rejects heliocentrism, as he clearly does, it does not matter for your original argument why exactly.

You still have not addressed the quotes from the Sand Reckoner I gave, which clearly show that Archimedes rejects heliocentrism, describes it as a minority, rejected position, and draws on Aristarchus only as a worst-case a fortiori argument. Far from being a weak argument from silence (weak because, while we lack a lot of material, I don't think we lack so much material that they could have seriously maintained heliocentrism without us knowing; absence of evidence is evidence of absence), your chosen Sand Reckoner example shows the opposite.

If this is the best you can do, I see no reason to revise the usual historical scenario that heliocentrism was rejected because any version consistent with observations had absurd consequences.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-08-18T02:57:51.737Z · LW(p) · GW(p)

Aristarchus made the universe big because he himself thought about parallax. Maybe some critic first made this objection to him, but such details are lost to time, and uninteresting compared to the question of the response to the complete theory.

As to the rest, I abandon all hope of convincing you.
I ask only that any third parties read the whole exchange and not trust Gwern's account of my claims.

comment by ChristianKl · 2014-08-06T09:38:51.823Z · LW(p) · GW(p)

Atoms can actually be divided into parts, so it's not clear that the atomists were right. If you told some atomist about quantum states, I doubt that they would find that to be a valid example of what they mean by "atom".

Replies from: gwern, Richard_Kennaway
comment by gwern · 2014-08-06T16:16:55.495Z · LW(p) · GW(p)

The atomists were more right than the alternatives: the world is not made of continuously divisible bone substances, which are bone no matter how finely you divide them, nor is it made of continuous mixtures of fire or water or apeiron.

comment by Richard_Kennaway · 2014-08-06T10:01:24.716Z · LW(p) · GW(p)

Atoms can actually be divided into parts, so it's not clear that the atomists were right

You could say the same of Dalton.

comment by NancyLebovitz · 2014-08-04T16:17:36.039Z · LW(p) · GW(p)

How about "human beings only use 10% of their brains"? Not political, not flamebait, but possibly also "a lot of people say it and sounds plausible" rather than armchair theorizing. "Everyone should drink eight glasses of water a day" is probably in the same category.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2014-08-04T16:46:58.642Z · LW(p) · GW(p)

I looked through Wikipedia's list of common misconceptions for anything that might arise independently in lots of people through reasonable reflection, rather than just "facts" that sneak into the public consciousness, but none of them really qualify.

Replies from: Azathoth123
comment by Azathoth123 · 2014-08-08T05:01:44.354Z · LW(p) · GW(p)

Of course, false "facts" can also easily sneak into less trafficked Wikipedia pages, such as its list of common misconceptions.

comment by Leonhart · 2014-08-09T14:14:28.126Z · LW(p) · GW(p)

Perhaps "The person who came out of the teleporter isn't me, because he's not made of the same atoms"?

comment by John_Maxwell (John_Maxwell_IV) · 2014-08-05T04:26:28.002Z · LW(p) · GW(p)

Why not also spend an equal amount of time searching for examples that prove the opposite of the point you're trying to make? Or are you speaking to an audience that doesn't agree this is possible in principle?

Edit: Might Newtonian physics be an example?

comment by [deleted] · 2014-08-04T12:56:32.579Z · LW(p) · GW(p)

A thought I've had floating around for a few years now.

With the Internet, it's a lot easier to self-study than ever before. This changes the landscape. Money is much less of a limiting factor, and things like time, motivation, and availability of learning material are now more important. It occurs to me that the last is greatly language-dependent. If the only language you speak is spoken by five million other people, you might as well not have the Internet at all. But even if you speak a major language, the material you'll be getting is greatly inferior in quantity, and probably quality, to material available to English speakers. Just checking stats for Wikipedia, the English version is many times larger than other versions and scores much better on all indices. For newer things like MOOCs and Quora, the gap is even larger, and a counterpart often doesn't even exist (based on my experiences with Korean, my native language).

Could this spark a significant education gap between English speakers and non-speakers? Since learning through the web has only recently become competitive with traditional methods of learning, we shouldn't expect to see the bulk of the effects for at least a decade or so.

Replies from: ChristianKl, Richard_Kennaway
comment by ChristianKl · 2014-08-04T13:43:26.890Z · LW(p) · GW(p)

Given that most of the important scientific papers are in English, there is already a gap between people who can speak English and people who can't. I don't think that you can get a good position in a Western business these days if you can't speak any English.

Replies from: None
comment by [deleted] · 2014-08-04T16:19:34.517Z · LW(p) · GW(p)

I was thinking more in terms of nations. The top few percent of any country can already speak English and have all the resources necessary for learning. The education the rest get is largely determined by the quality of their country's educational system. MOOCs disrupt this pattern.

Replies from: ChristianKl
comment by ChristianKl · 2014-08-04T22:14:42.707Z · LW(p) · GW(p)

I personally didn't learn my English in the formal education system of Germany but on the internet.

I think that countries like Korea, China or Japan don't really provide students with much free time to learn English on their own or use MOOCs.

Replies from: None
comment by [deleted] · 2014-08-05T03:10:24.452Z · LW(p) · GW(p)

That's interesting. Would you say that your English ability is typical of what an intelligent German speaker could attain through the Internet?

For Koreans, learning English well enough to comfortably learn in it is extremely difficult short of living in an English speaking country for multiple years at a young age. I hear that the Japanese also have this problem.

I knew that it's easier for speakers of European languages to learn English than for East Asian languages, but your ability is way above what I thought would be feasible without spending insane amounts of time on English.

If you are typical, well, that explains why Richard_Kennaway below mentioned choosing to learn English as if it were a minor thing. You see, I have this perception of English as a "really hard thing" that takes years to get mediocre at. And I believe this is the common view among East Asians.

Replies from: Kaj_Sotala, Emile, ChristianKl
comment by Kaj_Sotala · 2014-08-05T06:33:21.563Z · LW(p) · GW(p)

I recall reading a news article that claimed that the difference between the kids who play a lot of video games and spend a lot of time on the English-speaking Internet, and the kids who do not, is very obvious in the English classes of most Finnish schools these days. Basically the avid gamers get top grades without even trying much.

My personal experience was similar - I learned very little English in school that I wouldn't already have learned from video games, books, and the English-speaking Internet before that.

That said, this doesn't contradict the "it takes years to become good" idea - it did take us years, we just had pretty much our entire childhoods to practice.

comment by Emile · 2014-08-05T06:17:31.462Z · LW(p) · GW(p)

I knew that it's easier for speakers of European languages to learn English than for East Asian languages.

The important category is probably speakers of Germanic languages; Italians and Russians probably don't get as big of an advantage.

Replies from: gjm
comment by gjm · 2014-08-05T10:38:46.606Z · LW(p) · GW(p)

I strongly suspect that they're still a lot better off than native speakers of (say) Mandarin or Korean or Japanese. To be more specific: I suspect German is somewhat better for this purpose than Italian, which in turn is substantially better than Russian, which in turn is substantially better than Hungarian, which in turn is substantially better than Mandarin.

  • English and German are both Germanic languages. They share a lot of structure and vocabulary and are written with more or less the same letters.
  • English and Italian are both languages with a lot of Latin in their heritage. They share some structure and a lot of vocabulary and are written with exactly the same letters.
  • English and Russian are both Indo-European languages with some classical heritage. They share some structure but rather little vocabulary, and their writing systems are closely related.
  • Hungarian is not Indo-European, but largely shares its writing system with English.
  • Mandarin is not Indo-European (and I think is decidedly further from Indo-European than Hungarian is). It works in a completely different way from English in many many ways, and has a radically (ha!) different writing system.

I would guess (but don't know enough for my guess to be worth much) that the gap between Hungarian and Mandarin is substantially the largest of the ones above, and that one could find other languages that would slot into that gap while maintaining the "substantially better" progression.

Replies from: Emile
comment by Emile · 2014-08-05T12:20:50.872Z · LW(p) · GW(p)

Agreed.

I don't think the writing system would account for that much of a difference, since learning the Latin alphabet is something everybody is doing anyway, and it's not much extra work (compared to grammar and vocabulary). I still suspect Hungarian-speakers might find English easier because of closer cultural assumptions and background.

comment by ChristianKl · 2014-08-05T10:08:27.585Z · LW(p) · GW(p)

I knew that it's easier for speakers of European languages to learn English than for East Asian languages, but your ability is way above what I thought would be feasible without spending insane amounts of time on English.

I probably do spend insane amounts of time on the English internet - an amount of time that a Japanese student simply couldn't, because he's too busy keeping up with the extensive school curriculum in Japan. East Asians tend to spend a lot of time drilling children to perform well on standardized tests, which doesn't leave much time for things like learning English.

Another issue is that a lot of the language teaching of English in East Asia is simply highly inefficient. That will change with various internet elearning projects.

An outlier would be Singapore, where, as Wikipedia suggests: "The English language is now the most medium form of communication among students from primary school to university."

Replies from: Emile
comment by Emile · 2014-08-05T12:25:09.339Z · LW(p) · GW(p)

East Asians tend to spend a lot of time drilling children to perform well on standardized tests, which doesn't leave much time for things like learning English.

I've seen them spend a lot of time drilling for standardized English tests, but those tests miss a lot of things, and quite a few students do well on those tests but can't have a conversation in English. Or know what "staunch", "bristle", and "bulwark" mean, but not "bullshit".

comment by Richard_Kennaway · 2014-08-04T13:45:20.854Z · LW(p) · GW(p)

things like time, motivation, and availability of learning material are now more important.

And ability to learn.

Could this spark a significant education gap between English speakers and non-speakers?

The greater the gap, the greater the incentive for non-speakers to narrow the gap by becoming speakers.

Replies from: None
comment by [deleted] · 2014-08-04T16:16:31.532Z · LW(p) · GW(p)

Yes, but only if the gap is known to exist.

comment by ChristianKl · 2014-08-04T13:48:27.660Z · LW(p) · GW(p)

I recently learned that chocolate contains a significant amount of caffeine: 100g of chocolate contains roughly as much as a cup of black tea. As a result I updated in the direction of not eating chocolate directly before going to bed.

I don't know whether the information is new to anyone else, but it was interesting to me.

Replies from: stoat, gwern, Douglas_Knight
comment by stoat · 2014-08-04T14:41:24.535Z · LW(p) · GW(p)

Caffeine's a strong drug for me, except I have a huge tolerance now because I consume so much coffee. One night a few years ago, after I had quit caffeine for about a month, I was picking away at a bag of chocolate almonds while doing homework, and after a few hours I noticed that I felt pretty much euphoric. So yeah, this is good info to have if you're trying to get off caffeine.

comment by gwern · 2014-08-04T15:33:40.353Z · LW(p) · GW(p)

Besides caffeine, there's also theobromine.

a cup of black tea

FWIW, I did some reading of studies and it seems that kinds of tea vary too much in caffeine content for classifying by preparation method to be a meaningful indication of caffeine content, and there's some question about how l-theanine plays a role. It's probably better to say 'a cup of tea'.

Replies from: Lumifer, ChristianKl
comment by Lumifer · 2014-08-04T15:48:43.845Z · LW(p) · GW(p)

it seems that kinds of tea vary too much in caffeine content for classifying by preparation method to be a meaningful indication of caffeine content

Here is some data on tea caffeine content.

Anecdotally, I know a person who drinks a lot of "regular" black tea (Ceylon/Assam), but doesn't drink Darjeeling tea because it gets her jittery and too-much-caffeine-shaky.

Replies from: gwern
comment by gwern · 2014-08-04T15:51:28.294Z · LW(p) · GW(p)

Yeah, that was one of the studies I read on the topic. (The key part is "Caffeine concentrations in white, green, and black teas ranged from 14 to 61 mg per serving (6 or 8 oz) with no observable trend in caffeine concentration due to the variety of tea.", although they bought mostly black teas and not many white/green or any oolongs; but the other studies don't show a clear trend either.)

Replies from: Lumifer
comment by Lumifer · 2014-08-04T16:03:00.588Z · LW(p) · GW(p)

Did you see any data on natural variability -- that is, comparing the caffeine content in tea from two different bushes on the same plantation, or from different plantations (on different soils, different altitudes, etc.)?

What makes tea white/green/oolong/black is just post-harvest thermal processing and it seems likely that the caffeine content is determined at the plant level.

Replies from: gwern
comment by gwern · 2014-08-04T16:37:23.264Z · LW(p) · GW(p)

Did you see any data on natural variability -- that is, comparing the caffeine content in tea from two different bushes on the same plantation, or from different plantations (on different soils, different altitudes, etc.)?

Don't think so. It'd be a good study to run, but a bit challenging: even if you buy from a specific plantation, I think they tend to blend or mix leaves from various bushes, so getting the leaves would be more of a challenge than normal.

What makes tea white/green/oolong/black is just post-harvest thermal processing and it seems likely that the caffeine content is determined at the plant level.

I thought that they were also usually harvested at different times through the year?

Replies from: Lumifer
comment by Lumifer · 2014-08-04T16:50:16.800Z · LW(p) · GW(p)

I thought that they were also usually harvested at different times through the year?

You mean that tea intended to become, say, white, is harvested at a different time than tea intended to become black? I don't think that's the case. As far as I know the major difference is what you harvest, but that expresses itself as quality of the tea, not whether it is white or oolong or black. For the top teas you harvest the bud at the tip of the branch and one or two immature leaves next to it (which often look silverish because of fine hairs on these leaves); such teas are known as "tippy". Cheaper teas harvest full-grown leaves. There might well be a difference in caffeine content between the two, but it's not a green/black difference, it's a good tea vs lousy tea difference.

Darjeeling is unusual in that it has two specific harvesting seasons (called "first flush" and "second flush") but both are used to make black (well, kinda-black) tea.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-08-05T17:31:44.735Z · LW(p) · GW(p)

White tea is harvested early and immature. Black/oolong/green is a matter of post-processing.

White tea has huge variance in caffeine across varieties. Both tails of the distribution are white.

Replies from: Lumifer
comment by Lumifer · 2014-08-05T17:48:22.257Z · LW(p) · GW(p)

White tea is harvested early and immature. Black/oolong/green is a matter of post-processing.

Can you provide a link for that assertion? The post-harvesting processing of white tea is quite different from that of green, not to mention black. Also, I believe that while white tea requires top-quality leaves (the bud + 1-2 young leaves) and other teas don't, the top quality greens, oolongs, and blacks use the same "immature" leaves as white.

comment by ChristianKl · 2014-08-04T22:54:41.546Z · LW(p) · GW(p)

The average difference between different cups of tea is probably greater than the differences between different kinds of black tea. I don't see how using a wider category is helpful for giving people an idea about how much caffeine a bar of chocolate happens to have.

A cup of black tea is an amount that the average person wouldn't drink right before bed. If you have a better metric for giving people a meaningful idea about the amount of caffeine in chocolate, feel free to suggest one.

Replies from: gwern
comment by gwern · 2014-08-05T01:01:22.984Z · LW(p) · GW(p)

I don't see how using a wider category is helpful for giving people an idea about how much caffeine a bar of chocolate happens to have.

And I don't see why you should make distinctions which don't make a difference, and engage in false precision.

A cup of black tea is an amount that the average person wouldn't drink right before bed.

And they would drink a cup of white tea, green tea, or oolong tea right before bed?

If you have a better metric for given people a meaningful idea about the amount of caffeine in chocolate feel free to suggest one.

I already did: 'a cup of tea'.

Replies from: ChristianKl
comment by ChristianKl · 2014-08-05T09:11:19.991Z · LW(p) · GW(p)

And I don't see why you should make distinctions which don't make a difference, and engage in false precision.

There are various kinds of herbal tea that don't have any caffeine in them, and I do drink them before going to bed.

Replies from: gwern
comment by gwern · 2014-08-05T16:06:21.258Z · LW(p) · GW(p)

Yes, but people don't usually mean herbal teas or tisanes when they say 'tea'.

Replies from: ChristianKl
comment by ChristianKl · 2014-08-06T09:54:06.623Z · LW(p) · GW(p)

Yes, but people don't usually mean herbal teas or tisanes when they say 'tea'.

That depends very much on the people with whom you interact.

Replies from: Antiochus
comment by Antiochus · 2014-08-06T19:55:48.763Z · LW(p) · GW(p)

Caffeinated tea, then?

comment by Douglas_Knight · 2014-08-05T17:39:54.385Z · LW(p) · GW(p)

100g of pure chocolate is a lot. I normally eat 25g of 85% chocolate. That's probably an upper bound on a typical serving, diluted by other ingredients. For people who do not otherwise consume caffeine, it's a powerful dose, but for people who drink coffee every morning, it's probably not much.

Added: 25g of pure chocolate has about 10mg of caffeine, about the same as 25g of liquid coffee.

comment by Paul Crowley (ciphergoth) · 2014-08-10T10:17:38.285Z · LW(p) · GW(p)

I've never tried to fnord something before, did I do it right?

Frankenstein's monster doomsayers overwhelmed by Terminator's Skynet become ever-more clever singularity singularity the technological singularity idea that has taken on a life of its own techno-utopians wealthy middle-aged men singularity as their best chance of immortality Singularitarians prepared to go to extremes to stay alive for long enough to benefit from a benevolent super-artificial intelligence a man-made god that grants transcendence doomsayers the techno-dystopians Apocalypsarians equally convinced super-intelligent AI no interest in curing cancer or old age or ending poverty malevolently or maybe just accidentally bring about the end of human civilisation Hollywood Golem Frankenstein's monster Skynet and the Matrix fascinated by the old story man plays god and then things go horribly wrong singularity chain reaction even the smartest humans cannot possibly comprehend how it works out of control singularity technological singularity cautious and prepared optimistic obsessively worried by a hypothesised existential risk a sequence of big ifs risk while not impossible is improbable worrying unnecessarily we're falling into a trap fallacy taking our eyes off other risks none of this has brought about the end of civilisation a huge gulf obsessing about the risk of super-intelligent AI cautious and prepared we should be worrying about present-day AI rather than future super-intelligent AI.

Artificial intelligence will not turn into a Frankenstein's monster, Alan Winfield, Observer, Sunday 10 August 2014

comment by Kawoomba · 2014-08-07T15:56:42.317Z · LW(p) · GW(p)

Many European countries, such as France, Denmark and Belgium, enjoyed jokes that were surreal, like Dr Wiseman's favourite:

An alsatian went to a telegram office and wrote: "Woof. Woof. Woof. Woof. Woof. Woof. Woof. Woof. Woof."

The clerk examined the paper and told the dog: "There are only nine words here. You could send another 'Woof' for the same price."

"But," the dog replied, "that would make no sense at all."

Dr Wiseman is now preparing scientific papers based on his findings, which he believes will benefit people developing artificial intelligence in computer programs.

Source; it's from back in 2002

comment by fubarobfusco · 2014-08-04T21:32:34.628Z · LW(p) · GW(p)

On the limits of rationality given flawed minds —

There is some fraction of the human species that suffers from florid delusions, due to schizophrenia, paraphrenia, mania, or other mental illnesses. Let's call this fraction D. By a self-sampling assumption, any person has a D chance of being a person who is suffering from delusions. D is markedly greater than one in seven billion, since delusional disorders are reported; there is at least one living human suffering from delusions.

Given any sufficiently interesting set of priors, there are some possible beliefs that have a less than D chance of being true. For instance, Ptolemaic geocentrism seems to me to have a less than D chance of being true. So does the assertion "space aliens are intervening in my life to cause me suffering as an experiment."

If I believe that a belief B has a < D chance of being true, and then I receive what I think is strong evidence supporting B, how can I distinguish the cases "B is true, despite my previous belief that it is quite unlikely" and "I have developed a delusional disorder, despite delusional disorders being quite rare"?
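
A minimal sketch of that odds comparison, using entirely made-up illustrative numbers (the priors and likelihoods below are assumptions for the sake of example, not figures from any source):

    # Minimal sketch with made-up numbers: compare "B is true" against
    # "I am delusional" as explanations for the same apparent evidence E.
    p_B = 1e-9                 # assumed prior probability that belief B is true
    p_delusion = 1e-2          # assumed prior probability that I am delusional (roughly D)
    p_E_given_B = 0.9          # chance of seeing evidence this strong if B is true
    p_E_given_delusion = 0.5   # chance a delusion would produce evidence like this

    odds = (p_B * p_E_given_B) / (p_delusion * p_E_given_delusion)
    print(odds)  # ~1.8e-07: the delusion hypothesis dominates despite strong-seeming evidence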

Replies from: Manfred, gjm, mathnerd314, ChristianKl
comment by Manfred · 2014-08-04T23:07:20.414Z · LW(p) · GW(p)

For you to rule out a belief (e.g. geocentrism) as totally unbelievable, not only does it have to be less likely than insanity, it has to be less likely than insanity that looks like rational evidence for geocentrism.

You can test yourself for other symptoms of delusions - and one might think "but I can be deluded about those too," but you can think of it like requiring your insanity to be more and more specific and complicated, and therefore less likely.

comment by gjm · 2014-08-05T10:15:49.140Z · LW(p) · GW(p)

The relevant number is probably not D (the fraction of people who suffer from delusions) but a smaller number D0 (the fraction of people who suffer from this particular kind of delusion). In fact, not D0 but the probably-larger-in-this-context number D1 (the fraction of people in situations like yours before this happened who suffer from the particular delusion in question).

On the other hand, something like the original D is also relevant: the fraction of people-like-you whose reasoning processes are disturbed in a way that would make you unable to evaluate the available evidence (including, e.g., your knowledge of D1) correctly.

Aside from those quibbles, some other things you can do (mostly already mentioned by others here):

  • Talk to other people whom you consider sane and sensible and intelligent.
  • Check your reasoning carefully. Pay particular attention to points about which you feel strong emotions.
  • Look for other signs of delusions.
  • Apply something resembling scientific method: look for explicitly checkable things that should be true if B and false if not-B, and check them.
  • Be aware that in the end one really can't reliably distinguish delusions from not-delusions from the inside.
comment by mathnerd314 · 2014-08-04T22:59:23.257Z · LW(p) · GW(p)

The simple answer is to ask someone else, or better yet a group; if D is small, then D^2 or D^4 will be infinitesimal. However, delusions are "infectious" (see Mass hysteria), so this is not really a good method unless you're mostly isolated from the main population.
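
As a rough illustration of how fast that shrinks (assuming, purely for illustration, D = 0.01 and fully independent checkers):

    # Illustrative only: probability that n independently-sampled checkers
    # are all delusional about the same question, assuming D = 0.01.
    D = 0.01
    for n in (1, 2, 4):
        print(n, D ** n)   # 1: 0.01, 2: 0.0001, 4: 1e-08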

The more complicated answer is to track your beliefs and the evidence for each belief, and then when you get new evidence for a belief, add it to the old evidence and re-evaluate. For example, replacing an old wives' tale with a peer-reviewed study is (usually) a no-brainer. On the other hand, if you have conflicting peer-reviewed studies, then your confidence in both should decrease and you should go back to the old wives' tale (which, being old, is probably useful as a belief, regardless of truth value).

Finally, the defeatist answer is that you can't actually distinguish that you are delusional. With the film Shutter Island in mind, I hope you can see that almost nothing is going to shake delusions; you'll just rationalize them away regardless. If you keep notes on your beliefs, you'll dismiss them as being written by someone else. People will either pander to your fantasy or be dismissed as crooks. Every day will be a new one, starting over from your deluded beliefs. In such a situation there's not much hope for change.

For the record, I disagree with "delusional disorders being quite rare"; I believe D is somewhere between 0.5 and 0.8. Certainly, only 3% of these are "serious", but I could fill a book with all of the ways people believe something that isn't true.

Replies from: ChristianKl, Richard_Kennaway
comment by ChristianKl · 2014-08-05T09:30:27.754Z · LW(p) · GW(p)

For example, replacing an old wives' tale with a peer-reviewed study is (usually) a no-brainer.

Given the replication rates of scientific studies, a single study might not be enough. Single studies that go against your intuition are not enough reason to update. Especially if you only read the abstract.

No need to get people to wash their hands before you do a business deal with them.

Replies from: mathnerd314
comment by mathnerd314 · 2014-08-05T20:12:13.058Z · LW(p) · GW(p)

Given the replication rates of scientific studies, a single study might not be enough.

Enough for what? My question is whether my hair stylist saying "Shaving makes the hair grow back thicker." is more reliable than http://onlinelibrary.wiley.com/doi/10.1002/ar.1090370405/abstract. In general, the scientists have put more thought into their answer and have conducted actual experiments, so they are more reliable. I might revise that opinion if I find evidence of bias, such as a study being funded by a corporation that finds favorable results for their product, but in my line of life such studies are rare.

Single studies that go against your intuition are not enough reason to update. Especially if you only read the abstract.

I find that in most cases I simply don't have an intuition. What's the population of India? I can't tell you, I'd have to look it up. In the rare cases where I do have some idea of the answer, I can delve back into my memory and recreate the evidence for that idea, then combine it with the study; the update happens regardless of how much I trust the study. I suppose that a well-written anecdote might beat a low-powered statistical study, but again such cases are rare (more often than not they are studying two different phenomena).

No need to get people to wash their hands before you do a business deal with them.

I wash my hands after shaking theirs, as soon as convenient. Or else I just take some ibuprofen after I get sick. (Not certain what you were trying to get at here...)

Replies from: ChristianKl
comment by ChristianKl · 2014-08-06T09:26:37.770Z · LW(p) · GW(p)

I might revise that opinion if I find evidence of bias, such as a study being funded by a corporation that finds favorable results for their product, but in my line of life such studies are rare.

Humans are biased to overrate bad human behavior as a cause for mistakes. The sensible thing is to orient yourself by whether similar studies replicate.

Regardless, every publish-or-perish paper has an inherent bias to find spectacular results.

Enough for what?

Let's say wearing red every day.

Or take the idea that those Israeli judges don't give people parole because they don't have enough sugar in their blood right before mealtime. Going and giving every judge a candy before hearing every case to make it fair isn't warranted.

I find that in most cases I simply don't have an intuition. What's the population of India? I can't tell you, I'd have to look it up.

That's fixable by training Fermi estimates.

I wash my hands after shaking theirs, as soon as convenient. Or else I just take some ibuprofen after I get sick. (Not certain what you were trying to get at here...)

It's a reference to the controversy about whether washing your hands primes you to be more moral. It's an experimental social science result that failed to replicate.

Replies from: mathnerd314
comment by mathnerd314 · 2014-08-06T18:02:01.963Z · LW(p) · GW(p)

Humans are biased to overrate bad human behavior as a cause for mistakes.

If a crocodile bites off your hand, it's generally your fault. If the hurricane hits your house and kills you, it's your fault for not evacuating fast enough. In general, most causes are attributed to humans, because that allows actually considering alternatives. If you just attributed everything to, say, God, then it doesn't give any ideas. I take this a step further: everything is my fault. So if I hear about someone else doing something stupid, I try to figure out how I could have stopped them from doing it. My time and ability are limited in scope, so I usually conclude they were too far away to help (space-like separation), but this has given useful results on a few occasions (mostly when something I'm involved in goes wrong).

The sensible thing is to orient yourself by whether similar studies replicate.

Not really, since the replication is more likely to fail than the original study (due to inexperience), and is subject to less peer-review scrutiny (because it's a replication). See http://wjh.harvard.edu/~jmitchel/writing/failed_science.htm. The correct thing to consider is followup work of any kind; for example, if a researcher has a long line of publications all saying the same thing in different experiments, or if it's widely cited as a building block of someone's theory, or if there's a book on it.

Regardless, every publish-or-perish paper has an inherent bias to find spectacular results.

Right, people only publish their successes. There are so many failures that it's not worth mentioning or considering them. But they don't need to be "spectacular", just successful. Perhaps you are confusing publishing at all, even in e.g. a blog post, with publishing in "prestigious" journals, which indeed only publish "spectacular" results; looking at only those would give you a biased view, certainly, but as soon as you expand your field of view to "all information everywhere" then that bias (mostly) goes away, and the real problem is finding anything at all.

Let's say wearing red every day.

So the study there links red to aggression; I don't want to be aggressive all the time, so why should I wear red all the time? For example, I don't want a red car because I don't want to get pulled over by the cops all the time. Similarly for most results; they're very limited in scope, of the form "if X then Y" or even "X is associated with Y". Many times, Y is irrelevant, so I don't need to even consider X.

Or take the idea that those Israeli judges don't give people parole because they don't have enough sugar in their blood right before mealtime. Going and giving every judge a candy before hearing every case to make it fair isn't warranted.

Sure, but if I'm involved with a case then I'll be sure to try to get it heard after lunchtime, and offer the judge some candy if I can get away with it.

That's fixable by training Fermi estimates.

You can memorize populations or memorize the Fermi factors and how to combine them, but the point stands regardless; you still have to remember something.

It's a reference to the controversy about whether washing your hands primes you to be more moral. It's an experimental social science result that failed to replicate.

Ah, social science. I need to take more courses in statistics before I can comment... so far I have been sticking to the biology/chemistry/physics side of things (where statistics are rare and the effects are obvious from inspection).

Replies from: mathnerd314, ChristianKl
comment by mathnerd314 · 2014-08-07T19:38:57.445Z · LW(p) · GW(p)

For example, I don't want a red car because I don't want to get pulled over by the cops all the time.

The car story appears to be a myth nowadays, but that could just be due to the increased use of radar guns and better police training. Radar guns were introduced around the 1950s, so all of their policemen quotes are too recent to tell.

comment by ChristianKl · 2014-08-06T23:41:17.004Z · LW(p) · GW(p)

So if I hear about someone else doing something stupid, I try to figure out how I could have stopped them from doing it.

Conflating whether or not you could do something to stop them with finding truth makes it harder to have an accurate view of whether or not the result is true.

Accepting reality for what it is helps to have an accurate perception of reality. Only once you understand the territory should you go out and try to change things. If you do the second step before the first, you mess up your epistemology. You fall for a bunch of human biases evolved for finding out whether the neighboring tribe might attack your tribe, which aren't useful for a clear understanding of today's complex world.

There are so many failures that it's not worth mentioning or considering them. But they don't need to be "spectacular", just successful. Perhaps you are confusing publishing at all, even in e.g. a blog post, with publishing in "prestigious" journals, which indeed only publish "spectacular" results

I spoke about incentives. Researchers have an incentive to publish in prestigious journals and optimize their research practices for doing so. The case with blogs isn't much different. Successful bloggers write polarizing posts that get people talking and engaging with the story, even when there would be a way to be more accurate and less polarizing. The incentives go towards "spectacular".

Scott H Young, whom I respect and who's a nice fellow, wrote his post against spaced repetition and now, in a later post, recommends using Anki for learning vocabulary.

You can memorize populations or memorize the Fermi factors and how to combine them, but the point stands regardless; you still have to remember something.

It's not about remembering; it's about being able to make estimates even when you aren't sure. And you can calibrate your error intervals.

So the study there links red to aggression; I don't want to be aggressive all the time, so why should I wear red all the time?

Aggression is not the central word. Status and dominance also appear. People do a bunch of things to appear higher status.

One of the studies in question suggested that it makes women more attracted to you, as measured by the physical distance in conversation. Another one suggests increased attraction based on photo ratings.

I actually did the comparison on HotOrNot. I tested a blue shirt against a red shirt, photoshopped so nothing besides the color was different. For my photo, blue scored more attractive than red, despite the studies saying that red is the color that raises attractiveness.

I have been sticking to the biology/chemistry/physics side of things (where statistics are rare and the effects are obvious from inspection).

The replication rates for cancer biology seem to be even worse than for psychology if you trust the Amgen researchers who could only replicate 6 of 55 landmark studies that they tried to replicate.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-08-07T04:58:40.479Z · LW(p) · GW(p)

Probably a minor point, but were both the red and blue shirts photoshopped? If one of them was an actual photo, it might have looked more natural (color reflected on to your face) than the other.

Replies from: ChristianKl
comment by ChristianKl · 2014-08-07T10:28:32.265Z · LW(p) · GW(p)

In this case no, the blue was the original; you are right that this might have screwed with the results. HotOrNot's internal algorithms were also a bit opaque.

But to be fair, the setup of the original study wasn't natural either. The color in those studies was the color of the border of the photo.

If I wanted to repeat the experiment, I would like to do it on Amazon Mechanical Turk. At the moment I don't really have the spare money for projects like that, but maybe someone else on LW cares enough about dressing attractively, wants to optimize, and has the money.

The whole thing might also work well for a blogger willing to spend a bit of cash to write an interesting post.

Especially for online dating like Tinder, photo optimisation through empirical measurement of photos can increase success rates a bit.

Replies from: mathnerd314
comment by mathnerd314 · 2014-08-07T20:18:10.468Z · LW(p) · GW(p)

Conflating whether or not you could do something to stop them with finding truth makes it harder to have an accurate view of whether or not the result is true. Accepting reality for what it is helps to have an accurate perception of reality.

I'm not certain where you see conflation. I have separate storage areas for things to think about, evidence, actions, and risk/reward evaluations. They interact as described here. Things I hear about go into the "things to think about" list.

Only once you understand the territory should you go out and try to change things.

The world is changing so I must too. If the apocalypse is tomorrow, I'm ready. I don't need to "understand" the apocalypse or its cause to start preparing for it. If I learn something later that says I did the wrong thing, so be it. I prefer spending most of my time trying to change things to sitting in a room all day trying to understand. Indeed, some understanding can only be gained through direct experience. So I disagree with you here.

If you do the second step before the first, you mess up your epistemology. You fall for a bunch of human biases evolved for finding out whether the neighboring tribe might attack your tribe, which aren't useful for a clear understanding of today's complex world.

The decision procedure I outlined above accounts for most biases; you're welcome to suggest revisions or stuff I should read.

I spoke about incentives. [...] The incentives go towards "spectacular".

You didn't, AFAICT; you spoke about "inherent biases". I think my point still stands though; averaging over "all information everywhere" counteracts most perverse incentives, since perversion is rare, and the few incentives left are incentives that are shared among humans such as survival, reproduction, etc. In general humans are good at that sort of averaging, although of course there are timing and priming effects. Researchers/bloggers are incentivized to produce good results because good results are the most useful and interesting. Good results lead to good products or services (after a 30 year lag). The products/services lead to improved life (at least for some). Improved life leads to more free time and better research methods. And the cycle goes on, the end result AFAICT is a big database of mostly-correct information.

Scott H Young, whom I respect and who's a nice fellow, wrote his post against spaced repetition and now, in a later post, recommends using Anki for learning vocabulary.

His post is entitled "Why Forgetting Can Be Good" and his mention of Anki is limited to "I’m skeptical of the value of an SRS for most domains of knowledge." If he then recommends Anki for learning vocabulary, this changes relatively little; he's simply found a knowledge domain where he found SRS useful. Different studies, different conclusions, different contributions to different decisions.

It's not about remembering; it's about being able to make estimates even when you aren't sure.

You're never sure, so why mention "even when you aren't sure", since it's implied? Striking that out...

It's not about remembering; it's about being able to make estimates.

Estimation comes after the evidence-gathering phase. If you have no evidence you can make no estimates. Fermi estimation is just another estimation method, so it doesn't change this. If you have no memory, then you have no evidence. So it is about remembering. "Those who cannot remember the past are condemned to repeat it".

And you can calibrate your error intervals.

If you have no estimates you can't have error intervals either. Indeed, you can't do calibration until you have a distribution of estimates.

Aggression is not the central word. Status and dominance also appear. People do a bunch of things to appear higher status.

It looks like the central word is definitely dominance. Stringing the top words into a sentence I get "Sports teams wear red to show dominance and it has an effect on referees' performance". I guess I was going off of the Mandrill story where signs of dominance are correlated with willingness to be aggressive. This study says dominance and threat are emphasized by wearing red, where "threat" is measured by "How threatening (intimidating, aggressive) did you feel?". Some other papers also relate dominance to aggressiveness. So I feel comfortable confusing the two, since they seem to be strongly correlated and relatively flexible in terms of definition.

The comments do focus on status, so I guess you have a point. But I generally skip over the comments when an article is linked to. And the status discussion was in the comments of an Overcoming Bias post, so by no means central.

One of the studies in question suggested that it makes women more attracted to you, as measured by the physical distance in conversation. Another one suggests increased attraction based on photo ratings. I actually did the comparison on HotOrNot. I tested a blue shirt against a red shirt, photoshopped so nothing besides the color was different. For my photo, blue scored more attractive than red, despite the studies saying that red is the color that raises attractiveness.

Would you be referring to, among others, this study? Unfortunately... it still looks like experimental psychology, so again I have to plead lack of statistics.

The replication rates for cancer biology seem to be even worse than for psychology if you trust the Amgen researchers who could only replicate 6 of 55 landmark studies that they tried to replicate.

I've mostly been reading Army / DoD studies, which have a different funding model. But I guess cancer will become relevant eventually (preferably later rather than sooner).

Side note: does LW have a "collapse threads more than N levels deep" feature like reddit? It probably should have triggered a few replies ago, so I didn't post on the wrong child...

Replies from: ChristianKl
comment by ChristianKl · 2014-08-07T21:26:27.185Z · LW(p) · GW(p)

The decision procedure I outlined above accounts for most biases; you're welcome to suggest revisions or stuff I should read.

The problem is that you assume that you know the relevant biases. There are often cases where you don't know why someone screws up. There are domains where it's easier to get knowledge about how much people screw up than understanding the reasons behind screwups.

Some other papers also relate dominance to aggressiveness. So I feel comfortable confusing the two, since they seem to be strongly correlated and relatively flexible in terms of definition.

Fear produces fight or flight responses. People often fight out of fear. Aggressiveness often comes out of weakness. A karate black belt is dominant but usually not aggressive. Taller people get paid more money because being tall is a signal for social dominance.

Would you be referring to, among others, this study?

Yes.

Replies from: mathnerd314
comment by mathnerd314 · 2014-08-07T23:03:41.842Z · LW(p) · GW(p)

The problem is that you assume that you know the relevant biases.

Wikipedia has a list; I've checked a few of them, and the rest are on my TODO list. I have that page watched so if there's a new bias I'll know.

There are often cases where you don't know why someone screws up. There are domains where it's easier to get knowledge about how much people screw up than understanding the reasons behind screwups.

Information is produced regardless, and often recorded (see e.g. Gwern's Mistakes page). So long as I myself don't screw up, which, assuming that I always follow my decision procedure and my decision procedure is correct, I won't, then it doesn't matter.

Fear produces fight or flight responses. People often fight out of fear. Aggression often comes out of weakness.

OK, but I was talking about "perceived willingness to be aggressive" (signal), not aggression (action).

A karate black belt is dominant but usually not aggressive. Taller people get payed more money because being tall is a signal for social dominance.

Someone wearing a black belt is probably going to be perceived as more aggressive, the same way someone idly cleaning their fingernails with a sharp knife might be. Similarly if a person adopts something recognized as a fighting stance. Not certain about tall people, that's probably something else besides perceived aggressiveness, e.g. "My parents were rich and could feed me a lot".

This has gone on long enough that it might be worth summarizing into a post... do you want to write it or should I?

Replies from: ChristianKl
comment by ChristianKl · 2014-08-08T10:15:06.271Z · LW(p) · GW(p)

Wikipedia has a list; I've checked a few of them, and the rest are on my TODO list. I have that page watched so if there's a new bias I'll know.

There's not good evidence for the claim that reading a list of a bunch of biases improves your decision-making ability. See Eliezer's discussion of the hindsight bias: http://lesswrong.com/lw/il/hindsight_bias/

Someone wearing a black belt is probably going to be perceived as more aggressive, the same way someone idly cleaning their fingernails with a sharp knife might be.

I'm not so much talking about actually wearing the black belt as about the psychological changes that the kind of training that makes someone a black belt creates. Changes in confidence and body language.

This has gone on long enough that it might be worth summarizing into a post... do you want to write it or should I?

We went through many separate points, and at the moment I don't know how to pull them together well into one post. If you see a decent way, feel free.

Replies from: mathnerd314
comment by mathnerd314 · 2014-08-08T16:07:21.679Z · LW(p) · GW(p)

There's not good evidence for the claim that reading a list of a bunch of biases improves your decision-making ability. See Eliezer's discussion of the hindsight bias: http://lesswrong.com/lw/il/hindsight_bias/

I checked that the procedure accounts for the biases. Hindsight bias is avoided by computing uncertainty using a regression analysis. Availability bias is avoided by using a large database with random sampling. Etc. I haven't gone through all of them, but so far the biases I've looked at can't affect the decision outcome because the human isn't directly involved in those stages of computation.

Someone wearing a black belt is probably going to be perceived as more aggressive

And there's even a study on black uniforms that shows they increase perceived aggression.

Changes in confidence and body language.

This page says martial arts training increases dominance, as you say. On the other hand, that study also says that martial arts training decreases (observed) aggression. This study says perceived aggressiveness is highly correlated with proportion of mixed-martial-arts fights won, which I interpret as also meaning that martial arts training increases perceived aggression before a fight (since martial training ought to result in winning more martial arts fights). So it looks like martial arts training encourages controlling the aggressiveness signal, suppressing it in some non-fighting cases and enhancing it in competition. Or else the actual aggression levels decreased because the willingness to fight was communicated more clearly and thus people chose to fight less because their estimates of the costs rose.

We went through many separate points, and at the moment I don't know how to pull them together well into one post. If you see a decent way, feel free.

My general writing strategy is as follows: I go through source material, write down all the quotes/facts that seem useful into a bullet list, then sort alphabetically, then reorder and group the bullets, then rewrite the sub-bullets into paragraphs, then reorder the paragraphs, then remove the list formatting and add paragraph formatting, then add a title and introduction. (The conclusion is just more facts/quotes). I've practiced this on a couple of my required-because-core essays and they've gotten reasonable marks (B+ / A- level depending on how nice the teacher is).

Replies from: ChristianKl
comment by ChristianKl · 2014-08-08T17:53:45.643Z · LW(p) · GW(p)

In most social situations aggressiveness is bad. A woman doesn't want an aggressive boyfriend. But she usually also doesn't want a boyfriend who is low status and without any amount of dominance.

If you sit in school, it's good if your teacher is dominant, but aggression is not a sign of a good teacher.

Or else the actual aggression levels decreased because the willingness to fight was communicated more clearly and thus people chose to fight less because their estimates of the costs rose.

People don't make clear estimates of costs in high-pressure situations. Instead, fight/flight/freeze reactions trigger. Martial arts training removes that trigger and instead allows its participants to make more conscious decisions about whether to fight. Being able to make conscious decisions often leads to fewer fights.

I've practiced this on a couple of my required-because-core essays and they've gotten reasonable marks (B+ / A- level depending on how nice the teacher is).

Then I'm happy to see the outcome in this case.

comment by Richard_Kennaway · 2014-08-05T07:14:52.895Z · LW(p) · GW(p)

For the record, I disagree with "delusional disorders being quite rare"; I believe D is somewhere between 0.5 and 0.8. Certainly, only 3% of these are "serious", but I could fill a book with all of the ways people believe something that isn't true.

What sort of beliefs are you talking about here? Are you classifying simply being wrong about something as a "delusional disorder"?

Replies from: mathnerd314
comment by mathnerd314 · 2014-08-05T20:20:41.945Z · LW(p) · GW(p)

Exhibiting symptoms often considered signs of mental illness. For example, this says 38.6% of the general population have hallucinations. This says 40% of the general population had paranoid thoughts. Presumably these groups aren't exactly the same, so there you go: between 0.5 and 0.8 of the general population. You can probably pull together some more studies with similar results for other symptoms.

comment by ChristianKl · 2014-08-05T09:29:42.419Z · LW(p) · GW(p)

If I believe that a belief B has a < D chance of being true, and then I receive what I think is strong evidence supporting B, how can I distinguish the cases "B is true, despite my previous belief that it is quite unlikely" and "I have developed a delusional disorder, despite delusional disorders being quite rare"?

The basic idea is to talk about your belief in detail with a trusted friend that you consider sane.

Writing your own thought processes down in a diary also helps you evaluate them better.

comment by Pablo (Pablo_Stafforini) · 2014-08-04T19:17:29.525Z · LW(p) · GW(p)

There is a common idea in the “critical thinking”/"traditional rationality" community that (roughly) you should, when exposed to an argument, either identify a problem with it or come to believe the argument’s conclusion. From a Bayesian framework, however, this idea seems clearly flawed. When presented with an argument for a certain conclusion, my failure to spot a flaw in the argument might be explained by either the argument’s being sound or by my inability to identify flawed arguments. So the degree to which I should update in either direction depends on my corresponding prior beliefs. In particular, if I have independent evidence that the argument’s conclusion is false and that my skills for detecting flaws in arguments are imperfect, it seems perfectly legitimate to say, “Look, your argument appears sound to me, but given what I know, both about the matter at hand and about my own cognitive abilities, it is much more likely that there’s a flaw in your argument which I cannot detect than that its conclusion is true.” Yet it is extremely rare to see LW folk or other rationalists say things like this. Why is this so?

Replies from: Lumifer, None, palladias, ChristianKl, Protagoras, iarwain1, Viliam_Bur, gjm
comment by Lumifer · 2014-08-04T19:24:49.523Z · LW(p) · GW(p)

Why is this so?

Because the case where you are entirely wedded to a particular conclusion and want to just ignore the contrary evidence would look awfully similar...

Replies from: faul_sname, Azathoth123
comment by faul_sname · 2014-08-07T07:05:28.642Z · LW(p) · GW(p)

Awfully similar, but not identical.

In the first case, you have independent evidence that the conclusion is false, so you're basically saying "If I considered your arguments in isolation, I would be convinced of your conclusion, but here are several pieces of external evidence which contradict your conclusion. I trust this external evidence more than I trust my ability to evaluate arguments."

In the second case, you're saying "I have already concluded that your conclusion is false because I have concluded that mine is true. I think it's more likely that there is a flaw in your conclusion that I can't detect than that there is a flaw in the reasoning that led to my conclusion."

The person in the first case is far more likely to respond with "I don't know" in response to the question of "So what do you think the real answer is, then?" In our culture (both outside, and, to a lesser but still significant degree inside LW), there is a stigma against arguing against a hypothesis without providing an alternative hypothesis. An exception is the argument of the form "If Y is true, how do you explain X?" which is quite common. Unfortunately, this form of argument is used extensively by people who are, as you say, entirely wedded to a particular conclusion, so using it makes you seem like one of those people and therefore less credible, especially in the eyes of LWers.

Rereading your comment, I see that there are two ways to interpret it. The first is "Rationalists do not use this form of argument because it makes them look like people who are wedded to a particular conclusion." The second is "Rationalists do not use this form of argument because it is flawed -- they see that anyone who is wedded to a particular conclusion can use it to avoid updating on evidence." I agree with the first interpretation, but not the second -- that form of argument can be valid, but reduces the credibility of the person using it in the eyes of other rationalists.

Replies from: Lumifer
comment by Lumifer · 2014-08-07T14:46:44.431Z · LW(p) · GW(p)

In the first case, you have independent evidence that the conclusion is false

"Independent evidence" is a tricky concept. Since we are talking Bayesianism here, at the moment you're rejecting the argument it's not evidence any more, it's part of your prior. Maybe there was evidence in the past that you've updated on, but when you refuse to accept the argument, you're refusing to accept it solely on the basis of your prior.

In the second case, you're saying "I have already concluded that your conclusion is false because I have concluded that mine is true."

Which is pretty much equivalent to saying "I have seen evidence that your conclusion is false, so I already updated that it is false and my position is true and that's why I reject your argument".

I see that there are two ways to interpret it.

I think both apply.

comment by Azathoth123 · 2014-08-06T04:40:13.061Z · LW(p) · GW(p)

In fact that case is just a special case of the former with you having bad priors.

Replies from: Lumifer
comment by Lumifer · 2014-08-06T14:46:58.561Z · LW(p) · GW(p)

Not quite, your priors might be good. We're talking here about ignoring evidence and that's a separate issue from whether your priors are adequate or not.

comment by [deleted] · 2014-08-05T09:17:34.074Z · LW(p) · GW(p)

This idea seems like a manifestation of epistemic learned helplessness.

comment by palladias · 2014-08-05T15:38:51.149Z · LW(p) · GW(p)

I say things like this a lot in contexts where I know there are experts, but I have put no effort into learning which are the reliable ones. So when someone asserts something about (a) nutritional science (b) Biblical translation nuances (c) assorted other things in this category, I tend to say, "I really don't have the relevant background to evaluate your argument, and it's not a field I'm planning to do the legwork to understand very well."

comment by ChristianKl · 2014-08-05T09:11:35.429Z · LW(p) · GW(p)

Yet it is extremely rare to see LW folk or other rationalists say things like this. Why is this so?

In my experience there are LW people who would in such cases simply declare that they won't be convinced of the topic at hand and suggest to change the subject.

I particularly remember a conversation at the LW community camp about geopolitics where a person simply declared that they aren't able to evaluate arguments on the matter and therefore won't be convinced.

Replies from: philh
comment by philh · 2014-08-06T14:46:57.694Z · LW(p) · GW(p)

That was probably me. I don't think I handled the situation particularly gracefully, but I really didn't want to continue that conversation, and I couldn't see whether the person in question was wearing a Crocker's rules tag.

I don't remember my actual words, but I think I wasn't trying to go for "nothing could possibly convince me", so much as "nothing said in this conversation could convince me".

Replies from: ChristianKl
comment by ChristianKl · 2014-08-06T15:22:09.527Z · LW(p) · GW(p)

It's still more graceful than the "I think you are wrong based on my heuristics but I can't tell you where you are wrong" that Pablo Stafforini advocates.

comment by Protagoras · 2014-08-05T00:09:40.198Z · LW(p) · GW(p)

Because that ends the discussion. I think a lot of people around here just enjoy debating arguments (certainly I do).

comment by iarwain1 · 2014-08-04T23:45:10.169Z · LW(p) · GW(p)

I actually do say things like this pretty frequently, though I haven't had the opportunity to do so on LW yet.

comment by Viliam_Bur · 2014-08-05T20:05:48.484Z · LW(p) · GW(p)

A similar situation that used to happen frequently to me in real life was when the argument was too long, too complex, used information that I couldn't verify... or could, but the verification would take a lot of time... something like: "There is this 1000-page book containing complex philosophical arguments and information from non-mainstream but cited sources, which totally proves that my religion is correct." And there is nothing obviously incorrect within the first five pages. But I am certainly not going to read it all. And the other person tries to use my self-image of an intelligent person against me, insisting that I should promise that I will read the whole book and then debate about it (which is supposedly the rational thing to do in such a situation: hey, here is the evidence, you just refuse to look at it), or else I am not really intelligent.

And in such situations I just waved my hands and said -- well, I guess you just have to consider me unintelligent -- and went away.

I didn't think about how to formalize this properly. It was just this: I recognize the trap, and refuse to walk inside. If it happened to me these days, I could probably try explaining my reaction in Bayesian terms, but it would be still socially awkward. I mean, in the case of religion, the true answer would show that I believe my opponent is either dishonest or stupid (which is why I expect him to give me false arguments); which is not a nice thing to say to people. And yeah, it seems similar to ignoring evidence for irrational reasons.

Replies from: Lumifer
comment by Lumifer · 2014-08-05T20:14:59.616Z · LW(p) · GW(p)

Nothing, including rationality, requires you to look at ALL evidence that you could possibly access. Among other things, your time is both finite and valuable.

comment by gjm · 2014-08-05T10:21:51.796Z · LW(p) · GW(p)

Related link: Peter van Inwagen's article Is it wrong everywhere, always, and for everyone, to believe anything on insufficient evidence?. van Inwagen suggests not, on the grounds that if it were then no philosopher could ever continue believing something firmly when there are other smarter equally well informed philosophers who strongly disagree. I find this argument less compelling than van Inwagen does.

Replies from: Benito
comment by Ben Pace (Benito) · 2014-08-08T12:34:00.252Z · LW(p) · GW(p)

Haha. You should believe exactly what the evidence suggests, and exactly to the degree that it suggests it. The argument is also an amusing example of 'one man's modus ponens...'.

comment by [deleted] · 2014-08-09T02:02:38.409Z · LW(p) · GW(p)

Quoted in full from here:

A 33-year-old doctor in Africa and a 60-year-old missionary have both contracted Ebola, and both will likely die. In a made-for-tv-movie scenario, there’s only enough serum for one person so the doctor insists it go to the old lady. People are using this to illustrate how awesome and selfless the doctor is, saying that Even Now he “puts the needs of others above his own needs.” I, on the other hand, think this is a rotten stinking act of hubris. As a DOCTOR, he is far more valuable to the African people, and as such HE should get the serum. Not only is his act NOT selfless, in fact many more people will die since he has essentially killed their doctor. - Ruth Waytz

Replies from: pragmatist
comment by pragmatist · 2014-08-09T06:27:56.375Z · LW(p) · GW(p)

I see the broad point Waytz is making, but the ranty delivery is pretty silly. Why is the doctor's act not selfless? It certainly appears to be motivated by altruism (even if that altruism is misguided, from a utilitarian perspective). Having a non-utilitarian moral code is not the same thing as selfishness.

Second, the anger in that comment seems to have more to do with a distaste for deontological altruistic gestures than anything else. I really doubt Waytz would be as mad if the doctor had simply decided that he had had enough of working in the medical profession and decided to open a bistro instead.

comment by TheMajor · 2014-08-05T13:01:21.268Z · LW(p) · GW(p)

Not sure if this belongs here, but not sure where else it should go.

Many pages on the internet disappear, returning 404s when you look for them (especially older pages). The material I found on LW and OB is of such great quality that I would really hate it if a part of the pages here also disappeared (as in, became harder for me to access). I am not sure if this is in any part realistic, but the thought does bother me. So I was hoping to somehow make a local backup of LW/OB, downloading all pages to a hard drive. There are other reasons for wanting this same thing: I am frequently in regions without internet access, and also this might finally allow me to organise the posts (the categories on LW leave much to be desired; the closest thing to a good structure I found is the chronological list on OB, which seems to be absent on LW?).

So my triple question: should I be worried about pages disappearing (probably not too much), would it still be a good idea to try to make a local backup (probably yes, storage is cheap and I think it would be useful for me personally to have LW offline, even only the older posts) and how does one go about this?

Replies from: TylerJay, David_Gerard
comment by TylerJay · 2014-08-05T15:52:44.224Z · LW(p) · GW(p)

You might be interested in reading Gwern's page on Archiving URLs and Link Rot

comment by David_Gerard · 2014-08-05T19:43:58.677Z · LW(p) · GW(p)

Pages here are disappearing - someone's been going through the archive deleting posts they don't like. (c.f. [1] versus [2].) (The post is still slightly available, but the 152 comments are no longer associated with it.) So get archiving sooner rather than later.

comment by NancyLebovitz · 2014-08-08T19:45:55.476Z · LW(p) · GW(p)

How to Work with "Stupid" People

The hypothesis is that people frequently underestimate the intelligence of those they work with. The article suggests some ways people could get the wrong impression, and some strategies for improving communications and relationships. It all seems very plausible.

However, the author doesn't offer any examples, and the comments are full of complaints about unchangeably stupid coworkers.

Replies from: Viliam_Bur, Lumifer
comment by Viliam_Bur · 2014-08-09T20:55:50.639Z · LW(p) · GW(p)

I believe I had the opposite problem most of my life. I was taught to be humble, to never believe I am better than anyone else, et cetera. Nice political slogans, and probably I should publicly pretend to believe it. But there is a problem that I have a lot of data of people doing stupid things, and I need some explanation. And of course, if I forbid myself to use the potentially correct explanation, then I am pushing myself towards the incorrect ones.

Sometimes the problem is that I didn't understand something, so the seemingly stupid behavior wasn't actually stupid, it was me not understanding something. Yes, sometimes this happens, so it is reasonable to consider this hypothesis seriously. But oftentimes, even after careful exploration, the stupid behavior is stupid. When people keep saying that 2+2=5, it could mean they have secret mathematical knowledge unknown to you; but it is more likely that they are simply wrong.

But the worse problem is that refusing to believe in other people's stupidity deprives you of the wisdom of "Never attribute to malice that which is adequately explained by stupidity." Not believing in stupidity can make you paranoid, because if those people don't do stupid things because of stupidity, then they must have some purpose in doing them. And if it's a stupid thing that happens to harm you, it means they hate you, or at least don't mind when you are harmed. Ignorance starts to seem like strategic plausible deniability.

I had to overcome my upbringing and say to myself: "Viliam, your IQ is at least four sigma over the average, so when many people seem retarded to you, even many university-educated people, that's because they really are retarded, compared with you. They are usually not passively aggressive; they are trying to do their best, their best is just often very unimpressive to you (but probably impressive in their own eyes, and in eyes of their peers). You are expecting from them more than they can realistically provide; and they often even don't understand what you are saying. And they live in their world, where they are the norm; you are the exception. And it will never change, so you better get used to it, otherwise you prepare yourself for a lifetime of disappointment."

From that moment, when I see someone doing something stupid, I consider a hypothesis "maybe that's the best their intelligence allows them to do". And suddenly, I am not angry at most people around me. They are nice people, they are just not my equals, and it's not their fault. Often they have knowledge that I don't have, and I can learn from them. (Intelligence does not equal knowledge.) But also, they often do something completely stupid that likely doesn't seem stupid in their eyes. I should not assume that everything they do makes sense. I should not expect them to be able to understand everything I am trying to explain; I can try, but I shouldn't become too involved in it; sometimes I have to give up and accept some stupidity as a part of my environment.

The proper way to work with stupid people is to realize their limitations and don't blame them for not being what you want them to be. (Of course you should always check whether your estimates are correct. But they are not always wrong.)

comment by Lumifer · 2014-08-08T20:11:44.315Z · LW(p) · GW(p)

That blog post assumes that actual stupidity is never the "real" problem. I beg to disagree.

Replies from: Pfft, NancyLebovitz, ahbwramc
comment by Pfft · 2014-08-11T20:18:47.804Z · LW(p) · GW(p)

Or does it?

They may have raw intelligence, but poor thinking habits—patterns of absorbing, processing, and filing information. Cognitively, they aren’t set up to get to the heart of a matter, to distinguish between essential and accidental details, to form and apply valid generalizations. This too may require patience. It isn’t good, but it isn’t willful, irrational, or stupid. Concentrate on what other virtues and talents they bring to the table, such as creativity, diligence, or relationship-building.

This seems to mean exactly "maybe they are stupid after all", but expressed using a different set of words.

(I would guess that the author at some point adopted "never think that someone is stupid" as a deontological rule, and then unintentionally evolved a different set of words to be able to think about stupidity without triggering the filter...)

comment by NancyLebovitz · 2014-08-08T20:22:20.327Z · LW(p) · GW(p)

You're right. I'm sure that actual stupidity is sometimes the real problem. On the other hand, it would surprise me if it's always the real problem. At that point, the question becomes how much effort is worth putting in.

comment by ahbwramc · 2014-08-09T17:41:33.905Z · LW(p) · GW(p)

I think purely from a fundamental attribution error point of view we should expect the average "stupid" person we encounter to be less stupid than they seem.

(which is not to say stupidity doesn't exist of course, just that we might tend to overestimate its prevalence)

I guess the other question would be, are there any biases that might lead us to underestimate someone's stupidity? Illusion of transparency, perhaps, or the halo effect? I still think we're on net biased against thinking other people are as smart as us.

Replies from: Lumifer, Azathoth123
comment by Lumifer · 2014-08-10T00:10:57.167Z · LW(p) · GW(p)

are there any biases that might lead us to underestimate someone's stupidity?

Sex appeal, of course :-D

comment by Azathoth123 · 2014-08-10T20:44:53.215Z · LW(p) · GW(p)

Are you saying that charlatans and cranks don't exist or at least never manage to obtain any followers?

comment by [deleted] · 2014-08-05T14:35:12.201Z · LW(p) · GW(p)

I have been considering finding a group of writers/artists to associate with in order to both provide me a catalyst for self-improvement and a set of peers who are serious about their work. I have several friends who are "into" writing or comics or whatever other medium, but most of them are as "into" it as the time between video games, drinking, and staying up late to binge Dexter episodes allows.

We have a whole sequence here on LessWrong about the Craft and the Community. So I don't feel the need to provide some bits of anecdotal evidence for why I think having a community for your craft is a good idea.

Instead, I'll just ask, to the writers: how have you found a community for your craft/have you bothered?

Replies from: Alicorn, TylerJay, polymathwannabe
comment by Alicorn · 2014-08-06T06:27:33.108Z · LW(p) · GW(p)

I put writing online for free and siphoned off spare HPMoR fans until I had enough fanbase to maintain my own stable of beta readers, set of tumblr tags, and modestly populated forum. This is more how I cultivated a fandom than a set of colleagues, but some of the people I collected this way also cowrite with me and most of them are available to spur me along.

comment by TylerJay · 2014-08-05T15:55:04.695Z · LW(p) · GW(p)

I was once part of an online community on the sffworld writing forum. There were regular posters like on any forum, and there was also a small workshop (6-8 people); each week two people would submit something for the rest of the group to read and provide feedback on. It was motivating and fun.

comment by polymathwannabe · 2014-08-05T14:43:04.050Z · LW(p) · GW(p)

I frequent a sci-fi fan club in my city and from that group emerged a tiny writing workshop (6 members currently). The couple of guys who came up with the idea had heard that I wrote some small stuff and won a local contest, and thus I got invited. Every two Sundays we meet via Skype to comment on the stories that we've posted to our FB group since the last meeting. It has been helpful for me; we've agreed to be brutally honest with one another.

comment by [deleted] · 2014-08-05T07:41:18.195Z · LW(p) · GW(p)

As a person living very far away from West Africa, how worried should I be about the current Ebola outbreak?

Replies from: gjm, None, palladias
comment by gjm · 2014-08-05T10:04:11.564Z · LW(p) · GW(p)

(Not in any way an expert; just going by what I've heard elsewhere.) I think the answer probably depends substantially on how much you care about the welfare of West Africans. It is very unlikely to have any impact to speak of in the US or Western Europe, for instance.

comment by [deleted] · 2014-08-05T10:05:28.457Z · LW(p) · GW(p)

No, You're Not Going To Get Ebola

Replies from: byrnema
comment by byrnema · 2014-08-10T15:51:58.695Z · LW(p) · GW(p)

Sorry, realized I don't feel comfortable commenting on such a high-profile topic. Will wait a few minutes and then delete this comment (just to make sure there are no replies.)

comment by palladias · 2014-08-05T15:35:49.511Z · LW(p) · GW(p)

TL;DR: Ebola is very hard to transmit person to person. Don't think flu, think STDs.

Ebola isn't airborne, so breathing the same air, being on the same plane as an Ebola case will not give you Ebola. It doesn't spread quite like STDs, but it does require getting an infected person's bodily fluids (urine, semen, blood, and vomit) mixed up in your bodily fluids or in contact with a mucous membrane.

So, don't sex up your recently returned Peace Corps friend who's been feeling a little fluish, and you should be a-ok.

Replies from: byrnema, Lumifer
comment by byrnema · 2014-08-15T16:07:46.257Z · LW(p) · GW(p)

A person infected with Ebola is very contagious during the period they are showing symptoms. The CDC recommends casual contact and droplet precautions.

Note the following description of (casual) contact:

Casual contact is defined as a) being within approximately 3 feet (1 meter) or within the room or care area for a prolonged period of time (e.g., healthcare personnel, household members) while not wearing recommended personal protective equipment (i.e., droplet and contact precautions–see Infection Prevention and Control Recommendations); or b) having direct brief contact (e.g., shaking hands) with an EVD case while not wearing recommended personal protective equipment (i.e., droplet and contact precautions–see Infection Prevention and Control Recommendations). At this time, brief interactions, such as walking by a person or moving through a hospital, do not constitute casual contact.

(Much more contagious than an STD.)

But Lumifer is also correct. People without symptoms are not contagious, and people with symptoms are conspicuous (e.g. Patrick Sawyer was very conspicuous when he infected staff and healthcare workers in Nigeria) and unlikely to be ambulatory. The probability of a given person in West Africa being infected is very small (2000 cases divided by approximately 20 million people in Guinea, Sierra Leone and Liberia) and the probability of a given person outside this area being infected is truly negligible. If we cannot contain the virus in the area, there will be a lot of time between the observation of a burning 'ember' (or 10 or 20) and any change in these probabilities -- plenty of time to handle and douse out any further hotspots that form.

The worst case scenario in my mind is that it continues unchecked in West Africa or takes hold in more underdeveloped countries. This scenario would mean more unacceptable suffering and would also mean the outbreak gets harder and harder to squash and contain, increasing the risk to all countries.

We need to douse it while it is relatively small -- I feel so frustrated when I hear there are hospitals in these regions without supplies such as protective gear. What is the problem? Rich countries should be dropping supplies already.

comment by Lumifer · 2014-08-05T16:46:41.553Z · LW(p) · GW(p)

Ebola is very hard to transmit person to person.

Um. Given that an epidemic is actually happening and given that more than one doctor attending Ebola patients got infected, I'm not sure that "very hard" is the right term here.

Having said that, if you don't live in West Africa your chances of getting Ebola are pretty close to zero. You should be much more afraid of lightning strikes, for example.

comment by niceguyanon · 2014-08-08T13:50:56.207Z · LW(p) · GW(p)

Non-conventional thinking here, feel free to tell me why this is wrong/stupid/dangerous.

I am young and healthy, and when I catch a cold, I think "cool, when I recover, immune system +1." I take this one step further though: when I don't get sick for a long time, I start to hope I get sick, because I want to exercise my immune system. I know this might sound obviously wrong, but can we discuss why exactly?

My priors tell me that actively avoiding germs and people in order to prevent getting sick is unhealthy. So I have lived my life not avoiding germs, but not asking people to cough on me either. But is there room to optimize? I caught something pretty nasty that lasted a month, and I am sure I got it from being at a large music festival breathing hot breathy air, but better now than catching that strain of whatever it was when I am 70, right? And I don't mean I want to catch a serious case of pneumonia and potentially die; I mean, what if there were a way to deliberately catch a strain of the common cold every now and then?

Replies from: pianoforte611, Lumifer, satt, polymathwannabe
comment by pianoforte611 · 2014-08-08T14:57:41.641Z · LW(p) · GW(p)

There are over 100 strains of the common cold. If you gain immunity to one, this will not significantly decrease your chance of catching a cold in the far future. On the other hand, good hygiene will significantly decrease your chance of being infected by most contagious diseases.

Replies from: NancyLebovitz, Lumifer
comment by NancyLebovitz · 2014-08-08T18:09:21.704Z · LW(p) · GW(p)

It's at least plausible that people become less vulnerable to colds as they get older.

http://www.nytimes.com/2013/08/06/science/can-immunity-to-the-common-cold-come-with-age.html?_r=0

comment by Lumifer · 2014-08-08T15:30:42.521Z · LW(p) · GW(p)

If you gain immunity to one, this will not significantly decrease your chance of catching a cold in the far future.

He's not talking about gaining immunity in the vaccination sense. He's talking about developing a better, stronger immune system.

comment by Lumifer · 2014-08-08T15:28:19.733Z · LW(p) · GW(p)

But is there room to optimize?

Maybe, but I don't think you can find out -- the data is too noisy and the variance is too big.

Besides, of course, the better your immune system gets, the less often you will get sick with infectious diseases...

comment by satt · 2014-08-09T14:51:28.602Z · LW(p) · GW(p)

The catch I'd expect here is for the marginal immunological benefit from an extra cold to be less than the marginal cost of suffering an extra cold, although a priori I'm not sure which way a cost-benefit analysis would go.

It'd depend on how well colds help your immune system fight other diseases; the expected marginal number of colds prevented per extra cold suffered; the risk of longer-term side effects of colds; how the cost of getting sick changes with age (which you mentioned); the chance that you'll mistakenly catch something else (like influenza) if you try to catch someone else's cold; and the doloric cost of suffering through a cold. One might have to trawl through epidemiology papers to put usable numbers on these.

Consuming probiotics (or even specks of dirt picked up from the ground) might be easier & safer.
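To make the structure of that trade-off concrete, here is a toy expected-value sketch; every number in it is a made-up placeholder, not an estimate from the epidemiology literature:

```python
# Toy cost-benefit sketch for deliberately catching a cold.
# All numbers are hypothetical placeholders, chosen only to show the structure.

cost_of_one_cold = 1.0          # disutility of suffering one extra cold now
colds_prevented_later = 0.05    # expected future colds prevented by the extra exposure (assumed)
cost_of_future_cold = 1.5       # assume a cold is worse later in life (assumed)
p_catch_something_worse = 0.02  # chance of catching flu instead while trying (assumed)
cost_of_flu = 10.0              # disutility of a bout of flu (assumed)

expected_benefit = colds_prevented_later * cost_of_future_cold
expected_cost = cost_of_one_cold + p_catch_something_worse * cost_of_flu

print(expected_benefit, expected_cost)  # 0.075 vs. 1.2 with these placeholders
```

With placeholder inputs like these the extra cold loses badly, but changing a couple of the assumed numbers flips the conclusion, which is the point: the inputs do all the work, and they are exactly the quantities one would need to dig out of the literature.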

comment by polymathwannabe · 2014-08-08T15:27:13.209Z · LW(p) · GW(p)

Your immune system is already being subjected to constant demands by the simple fact that you don't live in a quarantine bunker. Let it do its job. Intentional germ-seeking is reckless.

comment by DavidAgain · 2014-08-06T08:30:35.927Z · LW(p) · GW(p)

Thought people (particularly in the UK) might be interested in seeing this: a blog post from one of the broadsheets on Bostrom's Superintelligence.

http://blogs.telegraph.co.uk/news/tomchiversscience/100282568/a-robot-thats-smarter-than-us-theres-one-big-problem-with-that/

comment by Lumifer · 2014-08-05T18:45:03.348Z · LW(p) · GW(p)

Another attempt at a sleep sensor, currently funded on Kickstarter.

comment by Pablo (Pablo_Stafforini) · 2014-08-04T17:33:43.353Z · LW(p) · GW(p)

Another piece of potentially useful information that may be new to some folks here: sleeping more than ~7.5 hours is associated with a higher mortality risk (and the risk is comparable to sleeping less than ~5 hours).

Relevant literature reviews:

Cappuccio FP, D'Elia L, Strazzullo P, et al. Sleep duration and all-cause mortality: a systematic review and meta-analysis of prospective studies. Sleep 2010;33(5):585-592.

Background: Increasing evidence suggests an association between both short and long duration of habitual sleep with adverse health outcomes. Objectives: To assess whether the population longitudinal evidence supports the presence of a relationship between duration of sleep and all-cause mortality, to investigate both short and long sleep duration and to obtain an estimate of the risk. Methods: We performed a systematic search of publications using MEDLINE (1966-2009), EMBASE (from 1980), the Cochrane Library, and manual searches without language restrictions. We included studies if they were prospective, had follow-up >3 years, had duration of sleep at baseline, and all-cause mortality prospectively. We extracted relative risks (RR) and 95% confidence intervals (CI) and pooled them using a random effect model. We carried out sensitivity analyses and assessed heterogeneity and publication bias. Results: Overall, the 16 studies analyzed provided 27 independent cohort samples. They included 1,382,999 male and female participants (follow-up range 4 to 25 years), and 112,566 deaths. Sleep duration was assessed by questionnaire and outcome through death certification. In the pooled analysis, short duration of sleep was associated with a greater risk of death (RR: 1.12; 95% CI 1.06 to 1.18; P < 0.01) with no evidence of publication bias (P = 0.74) but heterogeneity between studies (P = 0.02). Long duration of sleep was also associated with a greater risk of death (1.30; [1.22 to 1.38]; P < 0.0001) with no evidence of publication bias (P = 0.18) but significant heterogeneity between studies (P < 0.0001). Conclusion: Both short and long duration of sleep are significant predictors of death in prospective population studies.

Grandner MA, Hale L, Moore M, et al. Mortality associated with short sleep duration: the evidence, the possible mechanisms, and the future. Sleep Med Rev 2010;14(3):191-203.

This review of the scientific literature examines the widely observed relationship between sleep duration and mortality. As early as 1964, data have shown that 7-h sleepers experience the lowest risks for all-cause mortality, whereas those at the shortest and longest sleep durations have significantly higher mortality risks. Numerous follow-up studies from around the world (e.g., Japan, Israel, Sweden, Finland, the United Kingdom) show similar relationships. We discuss possible mechanisms, including cardiovascular disease, obesity, physiologic stress, immunity, and socioeconomic status. We put forth a social–ecological framework to explore five possible pathways for the relationship between sleep duration and mortality, and we conclude with a four-point agenda for future research.

Grandner MA, Drummond SP. Who are the long sleepers? Towards an understanding of the mortality relationship. Sleep Med Rev. Oct 2007;11(5):341–60.

While much is known about the negative health implications of insufficient sleep, relatively little is known about risks associated with excessive sleep. However, epidemiological studies have repeatedly found a mortality risk associated with reported habitual long sleep. This paper will summarize and describe the numerous studies demonstrating increased mortality risk associated with long sleep. Although these studies establish a mortality link, they do not sufficiently explain why such a relationship might occur. Possible mechanisms for this relationship will be proposed and described, including (1) sleep fragmentation, (2) fatigue, (3) immune function, (4) photoperiodic abnormalities, (5) lack of challenge, (6) depression, or (7) underlying disease process such as (a) sleep apnea, (b) heart disease, or (c) failing health. Following this, we will take a step back and carefully consider all of the historical and current literature regarding long sleep, to determine whether the scientific evidence supports these proposed mechanisms and ascertain what future research directions may clarify or test these hypotheses regarding the relationship between long sleep and mortality.

Replies from: gwern, ChristianKl
comment by gwern · 2014-08-04T19:09:31.771Z · LW(p) · GW(p)

I don't find these results to be of much value. There's a long history of various sleep-duration correlations turning out to be confounds from various diseases and conditions (as your quote discusses), so there's more than the usual reason to discount the possibility of causation, and if you do that, why would anyone care about the results? I don't think a predictive relationship is much good for, say, retirement planning or diagnosing your health from your measured sleep. And on the other hand, there are plenty of experimental studies on sleep deprivation, chronic or acute, affecting mental and physical health, which overrides these extremely dubious correlates. It's not a fair fight.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2014-08-04T19:27:02.575Z · LW(p) · GW(p)

Yes, my primary reason for posting these studies was actually to elicit a discussion about the kinds of conclusions we may or may not be entitled to draw from them (though I failed to make this clear in my original comment). I would like to have a better epistemic framework for drawing inferences from correlational studies, and it is unclear to me whether the (apparently) poor track record of correlational studies, when assessed in light of subsequent experiments, is enough to dismiss them altogether as sources of evidence for causal hypotheses. And if we do accept that correlational studies are sometimes evidence for causal claims, can we identify an explicit set of conditions that need to obtain for that to be the case, or are these grounds so elusive that we can only rely on subjective judgment and intuition?

comment by ChristianKl · 2014-08-04T22:04:11.029Z · LW(p) · GW(p)

Based on that data, I think a blanket suggestion that everybody should sleep 8 hours isn't warranted. It seems that some people with illnesses or who are exposed to other stressors need 8 hours.

I would advocate that everybody sleeps enough to be fully rested instead of trying to sleep a specific number of hours that some authority considers to be right for the average person.

I think the same goes for daily water consumption. Optimize values like that in a way that makes you feel good on a daily basis instead of targeting a value that seems to be optimal for the average person.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2014-08-04T22:14:57.062Z · LW(p) · GW(p)

What are your grounds for making this recommendation? The parallel suggestion that everyone should eat enough to feel fully satisfied doesn't seem like a recipe for optimal health, so why think things should be different with sleep? Indeed, the analogy between food and sleep is drawn explicitly in one of the papers I cited, and it seems that a "wisdom of nature" heuristic (due to "changed tradeoffs"; see Bostrom & Sandberg, sect. 2) might support a policy of moderation in both food and sleep. Although this is all admittedly very speculative.

Replies from: ChristianKl
comment by ChristianKl · 2014-08-04T22:42:02.481Z · LW(p) · GW(p)

What are your grounds for making this recommendation?

Years of thinking about the issue that aren't easily compressed.

In general, alarm clocks don't seem to be healthy devices. The idea of habitually interrupting sleep at a random point of the sleep cycle doesn't seem good.

Let's say we look at a person who needs 8 hours of sleep to feel fully rested. The person has health issue X. When we solve X, they only need 7 hours of sleep. The obvious move isn't to wake the person up after 7 hours of sleep but to actually fix X.

That view of sleep reflects the research showing that forcibly cutting people's sleep in a way that leads to sleep deprivation is bad. It also explains why the people who sleep 8 hours on average die earlier than the people who sleep 7 hours.

If I get a cold, my body needs additional sleep during that time. I have a hard time imagining that cutting away that additional sleep is healthy.

If we look at eating, I think similar things are true. There's not much evidence that forced dieting is healthy. Fixing underlying issues seems preferable to forcibly limiting food consumption.

While we are on the topic of sleep and mortality, it's worth pointing out that sleeping pills are very harmful to health.

comment by Lumifer · 2014-08-07T17:42:34.932Z · LW(p) · GW(p)

What it means to be statistically educated, a list by the American Statistical Association. Not half bad.

comment by iarwain1 · 2014-08-06T16:37:02.541Z · LW(p) · GW(p)

Anybody have any advice on how to successfully implement doublethink?

Replies from: mathnerd314
comment by mathnerd314 · 2014-08-06T19:53:07.654Z · LW(p) · GW(p)

Once upon a time I tried using what I came to call "quicklists". I took a receipt, turned it over to the back (blank side), and jotted down 5-10 things that I wanted to believe. Then I set a timer for 24 hours and, before that time elapsed, acted as if I believed those things. My experiment was too successful; by the time 24 hours were up I had ended up in a different county, with little recollection of what I'd been doing, and some policemen asking me pointed questions. (I don't believe any drugs were involved, just sleep deprivation, but I can't say for certain.)

More recently, I rented and saw the film Memento, which explores these techniques in a fictional setting. The concept of short-term forgetting seemed reasonable and the techniques the character uses to work around it are easily adapted in real life. My initial test involved printing out a pamphlet with some dentistry stuff in tiny type (7 12-pt pages shrunk to fit on front-back of 1 page, folded in quarters), and carrying it with me to my dentist appointment. I was able to discuss most of the things from my pamphlet, and it did seem that the level of conversation was raised, but there were many other variables as well so it's hard to quantify the exact effect.

I'm not certain these techniques actually count as "doublethink", since the contradiction is between my "internal" beliefs and the beliefs I wrote down, but it does allow some exploration of the possibilities beyond rationality. I can override my system 2 with a piece of paper, and then system 1 follows.

NB: Retrieving your original beliefs after you've been going off the ones from the paper is left as an exercise for the student.

Replies from: None
comment by [deleted] · 2014-08-07T05:18:11.956Z · LW(p) · GW(p)

I would like to read more about this. Would you consider writing it up?

Replies from: mathnerd314
comment by mathnerd314 · 2014-08-07T15:22:49.524Z · LW(p) · GW(p)

I thought I had written all I could. What sort of things should I add?

Replies from: Vulture
comment by Vulture · 2014-08-07T23:18:49.256Z · LW(p) · GW(p)

I think a little more elaboration on the quicklists experiment would be appreciated, and in particular a clearer description of what you think transpired when it went "too right". For me, at least, your experimental outcome might be extremely surprising (depending on the extent of the sleep deprivation involved), but I'm not even sure yet what model I should be re-assessing.

comment by chaosmage · 2014-08-04T19:21:31.716Z · LW(p) · GW(p)

I've been looking for tools to help organize complex arguments and systems into diagrams, and ran into Flying Logic and Southbeach modeller. Could anyone here with experience using these comment on their value?

Replies from: mathnerd314, mathnerd314
comment by mathnerd314 · 2014-08-04T23:17:04.024Z · LW(p) · GW(p)

I don't have experience with those, but I'll recommend Graphviz as a free (and useful) alternative. See e.g. http://k0s.org/mozilla/workflow.svg
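If it helps, here is a minimal sketch of what an argument diagram looks like through the Python bindings for Graphviz (this assumes the `graphviz` package and the Graphviz binaries are installed; the node names and labels are just placeholders):

```python
from graphviz import Digraph  # pip install graphviz; also requires the Graphviz binaries

g = Digraph("argument", comment="Toy argument map", format="svg")

# Placeholder premises and a conclusion -- the structure, not the content, is the point.
g.node("p1", "Premise 1")
g.node("p2", "Premise 2")
g.node("c", "Conclusion", shape="box")

g.edge("p1", "c", label="supports")
g.edge("p2", "c", label="supports")

g.render("argument", cleanup=True)  # writes argument.svg
```

The same structure can be written directly in the DOT language if you would rather skip Python entirely.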

comment by mathnerd314 · 2014-08-10T15:49:23.756Z · LW(p) · GW(p)

And UnBBayes does computational analyses, similar to Flying Logic, except it uses Bayesian probability.

comment by jamesf · 2014-08-04T17:36:21.432Z · LW(p) · GW(p)

Suppose you wanted to find out all the correlates for particular Big Five personality traits. Where would you look, besides the General Social Survey?

Replies from: gwern
comment by gwern · 2014-08-04T19:06:18.995Z · LW(p) · GW(p)

Would 'Google Scholar' be too glib an answer here?

Replies from: jamesf
comment by jamesf · 2014-08-04T19:42:53.634Z · LW(p) · GW(p)

It gave me mostly psychological and physiological correlates. I'm interested more in behavioral and social/economic things. I suppose you can get from the former to the latter, though with much less confidence than a directly observed correlation.

Your answer is exactly as glib as it should be, but only because I didn't really specify what I'm curious about.

comment by Error · 2014-08-09T23:35:14.255Z · LW(p) · GW(p)

I'm at Otakon 2014, and there was a panel today about philosophy and videogames. The description read like Less Wrongese. I couldn't get in (it was full) but I'm wondering if anyone here was responsible for it.

comment by Ixiel · 2014-08-07T00:56:37.186Z · LW(p) · GW(p)

Is there a way to see if I can vote both ways?

A month or so ago I started to get errors saying I can't downvote. I don't really care that much (it's not me that's gaining from my vote), but if I can't downvote I want to make sure I don't upvote so I don't bias things.

Replies from: Alicorn, tut
comment by Alicorn · 2014-08-07T01:06:48.612Z · LW(p) · GW(p)

Your downvotes are limited by your karma (I think it's four downvotes to a karma point). I don't think you will meaningfully bias anything if you continue to upvote things you like while accumulating enough karma to downvote again.

Replies from: Ixiel, tut
comment by Ixiel · 2014-08-07T01:32:46.928Z · LW(p) · GW(p)

Yeah it's the principle. I guess I'll just try a down before I up going forward. Thanks Al

comment by tut · 2014-08-07T16:19:20.648Z · LW(p) · GW(p)

That they are, even when everything works perfectly. There was also an error a while ago that gave the same error message to (some?) people who were not at their limit.

comment by tut · 2014-08-07T16:16:52.371Z · LW(p) · GW(p)

I had those too. It stopped rather quickly.

comment by Lumifer · 2014-08-06T15:28:05.648Z · LW(p) · GW(p)

Anchoring in marathon runners.

Replies from: witzvo, Douglas_Knight
comment by witzvo · 2014-08-07T02:06:58.673Z · LW(p) · GW(p)

That's a pretty cool histogram in figure 2.

comment by Metus · 2014-08-04T23:04:36.729Z · LW(p) · GW(p)

What is the general opinion on neurofeedback? Apparently there is scientific evidence pointing to its efficacy, but have there been controlled studies showing greater benefit from neurofeedback than from traditional methods, where such methods exist?

Replies from: James_Miller
comment by James_Miller · 2014-08-05T03:51:10.326Z · LW(p) · GW(p)

I have done a lot of neurofeedback. It's more of an art than a science right now. I think there have been many studies that have shown some benefit, although I don't know if any are long-term. But the studies might not be of much value, since there is so much variation in treatment: it is supposed to be customized for your brain. The first step is going to a neurofeedback provider and having him or her look at your qEEG to see how your brain differs from a typical person's brain. Ideally, for treatment, you would say "I have this problem," and the provider would say, "yes, this is due to your having ... and with 20 sessions we can probably improve you." Although I am not a medical doctor, I would strongly advise anyone who can afford it to try neurofeedback before they try drugs such as anti-depressants.

comment by [deleted] · 2014-08-04T19:43:19.455Z · LW(p) · GW(p)

Does anyone have any experience or thoughts regarding Cal Newport's "Study Hacks" blog, or his books? I'm trying to get an idea of how reliable his advice is before, say, reading his book about college or reading all of the blog archives.

Replies from: Kaj_Sotala, Benito
comment by Ben Pace (Benito) · 2014-08-04T22:57:23.432Z · LW(p) · GW(p)

Cognito Mentoring refer to him a fair bit, and often in mild agreement. Check their blog and wiki.

comment by NancyLebovitz · 2014-08-09T10:01:11.384Z · LW(p) · GW(p)

A history of anime fandom

I'm not vouching for this, but it sounds plausible.

comment by Skeptityke · 2014-08-06T18:12:59.846Z · LW(p) · GW(p)

Physics puzzle: Being exposed to cold air while the wind is blowing causes more heat loss/feels colder than simply being exposed to still cold air.

So, if the ambient air temperature is above body temperature, and ignoring the effects of evaporation, would a high wind cause more heat gain/feel warmer than still hot air?

Replies from: Lumifer, bramflakes, shminux, tut
comment by Lumifer · 2014-08-06T19:20:20.001Z · LW(p) · GW(p)

Yes, though ignoring the effects of evaporation is ignoring a major factor.

comment by bramflakes · 2014-08-09T15:36:48.824Z · LW(p) · GW(p)

Yes, it's how hair dryers work.

comment by shminux · 2014-08-06T21:24:39.345Z · LW(p) · GW(p)

Yes. Your body would try to cool your face exposed to hot air by circulating more blood through it, creating a temperature gradient through the surface layer. Consequently, the air nearest your face would be colder than ambient. A wind would blow away the cooler air, resulting in the air with ambient temperature touching your skin. Of course, in reality humidity and sweating are major factors, negating the above analysis.
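One way to put rough numbers on that direction of effect, ignoring evaporation and treating the body as a lump obeying Newton's law of cooling, q = h * A * (T_air - T_skin), where wind mainly raises the convection coefficient h. The h values below are textbook orders of magnitude for free vs. forced convection, not measurements, and the temperatures and area are assumed round numbers:

```python
# Rough convective heat-gain comparison; all inputs are assumed, order-of-magnitude values.
area = 1.8      # m^2, approximate adult body surface area
t_air = 45.0    # degrees C, ambient air hotter than the body
t_skin = 35.0   # degrees C, approximate skin temperature

h_still = 8.0   # W/(m^2*K), free convection in still air (assumed)
h_windy = 50.0  # W/(m^2*K), forced convection in a strong wind (assumed)

def heat_gain(h):
    """Heat flowing into the body, in watts (positive means gaining heat)."""
    return h * area * (t_air - t_skin)

print(heat_gain(h_still))  # ~144 W gained in still air
print(heat_gain(h_windy))  # ~900 W gained in wind, which feels much hotter
```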

comment by tut · 2014-08-07T08:57:08.153Z · LW(p) · GW(p)

Yes. This happens sometimes in a really wet Sauna.

But conditions in which you actually feel this also kill you in less than a day. You need to lose about 100 W of heat in order to keep a stable body temperature, and moving air only feels hotter than still air if you are gaining heat from the air.

comment by William_Quixote · 2014-08-04T13:05:54.125Z · LW(p) · GW(p)

Scicast: I mentioned this last open thread, but it was late in the month and got buried. Who here participates on scicast? I'm there under this name. It would be good to get a tally of how much LW prescience there is and how we as a group are doing. So if you're there, sound off

comment by advancedatheist · 2014-08-04T17:11:51.604Z · LW(p) · GW(p)

Has anyone tried to watch "Atheist TV"? https://atheists.org/atheistTV/live

I've joked that you would have trouble following the programming because the shows would start and stop suddenly through random chance. ; )

Seriously, I hope it doesn't run "atheist porn" about Madalyn O'Hair's alleged greatness, an opinion of her legacy I don't happen to share. I've read several accounts of her life which show what a mess she made of it, leading up to her abduction and murder by a violent career criminal named David Waters, whom she had hired for her American Atheists organization and then managed to piss off somehow. Madalyn's younger son, the atheist activist Jon Murray, and her granddaughter (Jon's niece) Robin all lived together, and they all died at the hands of Waters and his accomplices.

Despite my efforts to bring this up on atheist forums, apparently Madalyn's fans don't want to discuss the weirdness of her family situation. Madalyn in her 1965 Playboy interview says that she thought girls should become sexually active as early as 13, and the boys at 15, and that religious superstition interfered with normal sexual development and fulfillment. Yet she kept her younger son Jon from moving out of her house, and she reportedly ran off the only known girlfriend Jon ever had (it remains unknown if Jon ever had his sexual debut with any woman); so Jon, the atheist, up through his murder at age 40 lived like a sexually abstinent christian or something, and quite possibly died a virgin.

If "atheism" makes it easier to become sexually self-actualized, a belief even many christians hold in a back-handed way, then Jon must have really sucked at the task of living as an atheist, despite having the example of America's best known atheist in the latter 20th Century as his mother.

Now, if some fringe christian obsessive like Fred Phelps had a 40 year old son who never moved away from home and apparently never had a girlfriend, atheists would draw conclusions from the situation which support their prejudices about the sex-negativity of certain kinds of christian belief. Why, look at what religion did to this poor fellow!

comment by advancedatheist · 2014-08-04T18:13:01.410Z · LW(p) · GW(p)

Notice the title of this article:

$7,060,259,674,497.51--Federal Debt Up $7 Trillion Under Obama http://www.cnsnews.com/news/article/terence-p-jeffrey/706025967449751-federal-debt-7t-under-obama

A Modern Monetary Theorist would look at the other side of the ledger and write, "U.S. Dollar Assets Held by Non-Federal Entities Up $7 Trillion Under Obama." And he or she wouldn't necessarily consider this outcome catastrophic or even harmful.

The Federal Debt seems to track the build-up in retirement assets, for example: http://research.stlouisfed.org/fred2/graph/?g=mzu.

Because users of the U.S. dollar live in a closed financial system, the dollars in those assets have to come from the Federal Debt, because they literally can't come from anywhere else.