Open Thread, April 27-May 4, 2014

post by NancyLebovitz · 2014-04-27T20:34:17.084Z · LW · GW · Legacy · 206 comments

You know the drill - If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

And, while this is an accidental exception, future open threads should start on Mondays until further notice.


Comments sorted by top scores.

comment by Omid · 2014-04-28T04:25:25.697Z · LW(p) · GW(p)

Seth Roberts is dead.

I was considering the Shangri-La diet, but now I'm nervous.

Replies from: Gary_Drescher, NancyLebovitz, Stabilizer, David_Gerard, ChristianKl
comment by Gary_Drescher · 2014-05-20T17:16:29.013Z · LW(p) · GW(p)

According to information his family graciously posted to his blog, the cause of death was occlusive coronary artery disease with cardiomegaly.

http://blog.sethroberts.net/

Replies from: army1987
comment by A1987dM (army1987) · 2014-05-21T17:18:48.842Z · LW(p) · GW(p)

Does that make it more likely or less likely that his death was related to his diet?

comment by NancyLebovitz · 2014-04-28T04:31:04.544Z · LW(p) · GW(p)

The commenters are more concerned about the possible effects of high doses of omega-3.

comment by Stabilizer · 2014-04-28T05:35:02.814Z · LW(p) · GW(p)

This is really sad. He definitely was something else when it came to self-experimentation.

comment by David_Gerard · 2014-04-28T16:08:25.475Z · LW(p) · GW(p)

The blog's now disappeared. Archive copy.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-04-28T17:32:04.039Z · LW(p) · GW(p)

His blog is back -- it's had occasional downtime for a while. The archive copy was down, though.

Probably a good idea to save anything you think is especially important.

comment by ChristianKl · 2014-04-28T13:56:13.384Z · LW(p) · GW(p)

It's very sad news and I still ask myself what to make of it. Seth influenced my own QS journey a lot. In the end, it seems like extrapolating health from the kind of data he gathered wasn't possible.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-04-28T14:51:06.227Z · LW(p) · GW(p)

His approach would be expected to optimize for common situations, which may not be the same as optimizing for rare situations. I've been working on a theory that health is not a single thing.

For all I know, he had some intrinsic cardio-vascular problems, and his self-experimentation led to him living longer than he otherwise would have.

Replies from: ChristianKl, brazil84
comment by ChristianKl · 2014-04-29T09:32:23.211Z · LW(p) · GW(p)

I've been working on a theory that health is not a single thing.

That's an interesting way of phrasing the sentence.

The issue is that Seth himself based his behavior on the idea that health is a bit like intelligence and that it's possible to get most of the useful information by generalizing from a few factors.

comment by brazil84 · 2014-04-28T16:49:45.374Z · LW(p) · GW(p)

Intuitively, it seems likely to me that his death is related to one or more of his self-experiments with supplements. This is based on the observation that it's pretty unusual for 60-year-old men to collapse and die, particularly if they have no serious self-reported health problems. Calculating an actual probability seems like it would be pretty hard.

Edit: I suppose there is also an outside chance that this is a hoax. Has the death been reported in any newspapers?

Replies from: gwern, ChristianKl
comment by gwern · 2014-04-28T20:07:18.680Z · LW(p) · GW(p)

60-yo men die all the time; anytime someone who writes on diet dies, someone is going to say 'I wonder if this proves/disproves his diet claims', no matter what the claims were or their truth. They don't, of course, since even if you had 1000 Seth Roberts, you wouldn't have a particularly strong piece of evidence on correlation of 'being Roberts' and all-cause mortality, and his diet choices were not randomized, so you don't even get causal inference. More importantly, if Roberts had died at any time before his actuarial life expectancy (in the low 80s, I'd eyeball it, given his education, ethnicity, and having survived so long already), people would make this claim.

OK, so let's be a little more precise and play with some numbers.

Roberts published The Shangri-la Diet in 2006. If he's 60 now in 2014 (8 years later), then he was 52 then. Let's say people would only consider his death negatively if he died before his actuarial life expectancy, and I'm going to handwave that as 80; then he has 28 years to survive before his death stops looking bad.

What's his risk of dying if his diet makes zero difference to his health one way or another? Looking at http://www.ssa.gov/OACT/STATS/table4c6.html from 52-80, the per-year risk of death goes from 0.006337 to 0.061620. What's the cumulative risk? We can, I think, calculate it as (1 - 0.006337) * ... * (1 - 0.061620). A little copy-paste, a little Haskell, and:

> foldr1 (*) $ map (1-) [0.006337,0.006837,0.007347,0.007905,0.008508,0.009116,0.009723,
                         0.010354,0.011046,0.011835,0.012728,0.013743,0.014885,0.016182,
                         0.017612,0.019138,0.020752,0.022497,0.024488,0.026747,0.029212,
                         0.031885,0.034832,0.038217,0.042059,0.046261,0.050826,0.055865,
                         0.061620]
0.5065374918662645

So roughly speaking, Roberts had maybe a 50% chance of surviving from publishing his diet book to a ripe old age. (Suppose Roberts's ideas had halved his risk of death in each time period, which we can implement with a call to map (/2). It's not quite as simple as dividing 50% by 2, but when you rerun the probability, then he'd have a 71% chance of survival, or more relevantly, he still has a 29% chance of dying in that timespan.)
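(For reference, a minimal Haskell sketch of both calculations; the name qs is an assumption here, standing for the 29 per-year death probabilities listed above, and survival is just the product of the per-year survival probabilities:)

    -- qs :: [Double] -- assumed bound to the 29 per-year death probabilities above
    survival :: [Double] -> Double
    survival = product . map (1 -)

    -- survival qs            ~ 0.507   (risks as published)
    -- survival (map (/2) qs) ~ 0.71    (every per-year risk halved)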

In summary: Life sucks, and diet gurus can be expected to die all the time no matter whether their ideas are great or horrible, so their deaths tell us so little that discussing it at all is probably biasing our beliefs through an anchoring or salience effect.

Replies from: brazil84, V_V
comment by brazil84 · 2014-04-29T06:20:58.282Z · LW(p) · GW(p)

60-yo men die all the time; anytime someone who writes on diet dies, someone is going to say 'I wonder if this proves/disproves his diet claims', no matter what the claims were or their truth.

Agreed.

More importantly, if Roberts had died at any time before his actuarial life expectancy (in the low 80s, I'd eyeball it, given his education, ethnicity, and having survived so long already), people would make this claim.

Not sure about that, for example if he had died at the age of 81 in a car accident. Although I appreciate your effort, I am not sure that you have the reference class of events correct. The evidence suggests that Roberts died (1) suddenly; (2) due to failure of some bodily system; (3) at an age which is well under his life expectancy. The prior probability of this happening has got to be far less than the prior probability of him simply dying from any cause before his actuarial life expectancy.

At the same time, he was apparently consuming large amounts of butter, omega fatty acids from flax seeds, and other esoteric things. Of course it's difficult to even begin estimating the risk inherent in doing such things.

Ironically, Seth Roberts was a big believer in "n=1 experiments."

Do you have an estimate of the probability that Roberts's death is related to his supplement regime?

Replies from: gwern, NancyLebovitz
comment by gwern · 2014-04-29T16:30:57.349Z · LW(p) · GW(p)

Not sure about that, for example if he had died at the age of 81 in a car accident. Although I appreciate your effort, I am not sure that you have the reference class of events correct.

The all-cause mortality figures were chosen for convenience. I'm sure one could dig up more appropriate figures that exclude accident, homicide, etc. But the reference class is still going to be pretty broad: if Roberts had committed suicide, had developed cancer, had a stroke rather than heart attack (or whatever), had a fall, people would be speculating on biological roots ('perhaps he was going senile thanks to the oils' or 'he claimed the flax seed oil was helping balance, but he fell all the same!'). And I'm not sure that the better figures would be that much lower: this isn't a young cohort - few elderly people are murdered or die in car accidents, AFAIK, and mortality is primarily from diseases and other health problems.

The prior probability of this happening has got to be far less than the prior probability of him simply dying from any cause before his actuarial life expectancy.

As I've pointed out, the prior is quite high that he would die in a 'suspicious' way.

Do you have an estimate of the probability that Roberts's death is related to his supplement regime?

No, and I refuse to give one on a problem which reflects motivated cognition on the part of many people based on heavily-selected evidence & post hoc reasoning. Any estimate would anchor me and bias my future thinking on diet matters. The story is far too salient, the evidence far too weak.

Replies from: brazil84
comment by brazil84 · 2014-04-29T17:02:05.321Z · LW(p) · GW(p)

I'm sure one could dig up more appropriate figures that exclude accident, homicide, etc. But the reference class is still going to be pretty broad: if Roberts had committed suicide, had developed cancer, had a stroke rather than heart attack (or whatever), had a fall, people would be speculating on biological roots

I would have to agree with that; however, some causes of death are more suspicious than others. In this case, he apparently died suddenly, at an age where sudden death is rather unusual in people with no self-reported history of serious health problems. Also, this kind of sudden death is usually the result of cardiovascular problems, i.e. heart attack or stroke. Last, he was known to be consuming a lot of concentrated fat on a regular basis (half a stick of butter a day, and perhaps olive oil and flax seed on top of it); fatty foods have long been suspected of playing a role in cardiovascular problems, the idea being that they cause lipids to build up in the bloodstream and clog up the works.

It would be very tricky to do the equations, if it's possible at all, but it seems reasonable to think it's likely that his supplement regimen played a role in his demise.

As I've pointed out, the prior is quite high that he would die in a 'suspicious' way.

Well do you agree that what happened is more 'suspicious' than if he had died at the age of 75 from lung cancer?

No, and I refuse to give one on a problem which reflects motivated cognition on the part of many people based on heavily-selected evidence & post hoc reasoning.

Suit yourself, but it strikes me as confusing that I would make a claim and you would respond with a calculation which seems to address the claim but actually doesn't. It makes me think you are trying to subtly change the subject. Which is fine, but I think you should be explicit about it. Otherwise it seems like you are attacking a strawman.

Replies from: gwern
comment by gwern · 2014-04-29T20:36:55.620Z · LW(p) · GW(p)

In this case, he apparently died suddenly, at an age where sudden death is rather unusual in people with no self-reported history of serious health problems. Also, this kind of sudden death is usually the result of cardiovascular problems, i.e. heart attack or stroke. Last, he was known to be consuming a lot of concentrated fat on a regular basis (half a stick of butter a day, and perhaps olive oil and flax seed on top of it); fatty foods have long been suspected of playing a role in cardiovascular problems, the idea being that they cause lipids to build up in the bloodstream and clog up the works.

Again, this is post hoc reasoning conjured upon observing the exact particulars of his death, and so suspect even without considering additional questions like whether fat is all it's cracked up to be, what his medical tests were saying, etc.

Well do you agree that what happened is more 'suspicious' than if he had died at the age of 75 from lung cancer?

Yes.

Suit yourself, but it strikes me as confusing that I would make a claim and you would respond with a calculation which seems to address the claim but actually doesn't.

My calculation addresses a major part of the Bayesian calculation: the probability of an observed event ('death') conditional on the hypothesis ('his diet is harmful') being false. Since dying aged 52-80 is so common, that sharply limits how much could ever be inferred from observing dying.

Replies from: brazil84
comment by brazil84 · 2014-04-29T21:04:15.178Z · LW(p) · GW(p)

Again, this is post hoc reasoning conjured upon observing the exact particulars of his death

Actually I don't know the exact particulars of the death. But I do agree with what I think is your basic point here -- it's extremely easy to make these sorts of connections with the benefit of hindsight and that ease might be coloring my analysis. At the same time, I do think that -- in fairness -- the death is pretty high on the 'suspicious' scale so I stand by my earlier claim.

My calculation addresses a major part of the Bayesian calculation:

Perhaps, but it seems to me you are throwing the baby out with the bathwater a bit here by ignoring the facts which make this death quite a bit more 'suspicious' than other deaths of men in that age range. More importantly, you don't seem to dispute that your calculation doesn't really address my claim.

Look, I agree with your basic point -- the premature death of a diet guru, per se, doesn't say much about the efficacy or danger of the diet guru's philosophy. No calculation is necessary to convince me.

Replies from: gwern
comment by gwern · 2014-04-29T23:31:00.601Z · LW(p) · GW(p)

More importantly, you don't seem to dispute that your calculation doesn't really address my claim.

I did dispute that:

My calculation addresses a major part of the Bayesian calculation...that sharply limits how much could ever be inferred from observing [Roberts] dying.

(A simple countermeasure to avoid biasing yourself with anecdotes: spend time reading in proportion to sample size. So you're allowed to spend 10 minutes reading about Roberts's 1 death if you then spend 17 hours repeatedly re-reading a study on how fat consumption did not predict increased mortality in a sample of 100 men.)

Replies from: brazil84, Jayson_Virissimo
comment by brazil84 · 2014-04-30T01:05:17.569Z · LW(p) · GW(p)

I did dispute that:

My calculation addresses a major part of the Bayesian calculation...that sharply limits how much could ever be inferred from observing [Roberts] dying.

I wouldn't call it "major" because (1) you refuse to assign a probability to an event I stated I thought was likely; and (2) the main point of your calculation was pretty non-controversial and even without a calculation I doubt anyone would seriously dispute it.

Let's do this: Is there anything I stated with which you disagree? If so, please quote it. TIA.

Replies from: gwern
comment by gwern · 2014-04-30T01:23:53.946Z · LW(p) · GW(p)

I wouldn't call it "major" because (1) you refuse to assign a probability to an event I stated I thought was likely;

It puts an upper bound, as I said. Plug the specific conditional I calculated into Bayes' theorem and see what happens. Or look at a special case: suppose that, conditional on the diet not being harmful, Roberts had a 50% chance of dying before 80; now, what is the maximal amount in terms of odds or decibels or whatever that you could ever update your prior upon observing Roberts's death assuming the worsened diet risk is >50%? Is this a large effect size? Or small?
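(To make that special case concrete, a minimal sketch; the function name and the 0.1 prior are illustrative assumptions, and the 0.5 comes from the survival calculation upthread:)

    -- Bayes' theorem for P(harmful | death), given the two conditionals.
    posteriorHarmful :: Double -> Double -> Double -> Double
    posteriorHarmful prior pDeathHarmful pDeathNotHarmful =
      num / (num + pDeathNotHarmful * (1 - prior))
      where num = pDeathHarmful * prior

    -- With P(death | not harmful) = 0.5 and P(death | harmful) <= 1, the
    -- likelihood ratio is at most 1 / 0.5 = 2: about 3 decibels, or one bit.
    -- E.g. posteriorHarmful 0.1 1.0 0.5 ~ 0.18, i.e. even the most extreme
    -- assumption no more than doubles the prior odds.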

(Now take into account everything you know about correlations, selection effects, the plausibility of the underlying claims about diet, what is known about Roberts's health, how likely you are to hear about deaths of diet gurus, etc...)

(2) the main point of your calculation was pretty non-controversial and even without a calculation I doubt anyone would seriously dispute it.

One would think so.

Replies from: brazil84
comment by brazil84 · 2014-04-30T07:42:02.235Z · LW(p) · GW(p)

It puts an upper bound as I said.

So what? One can trivially put an upper and lower bound on any probability: No probability can exceed 1 or be lower than 0. But it ain't "major" to say so. On the contrary, it's trivial.

Anyway, please answer my question: Was there anything in my original post with which you disagreed? If so, please quote it. TIA.

comment by Jayson_Virissimo · 2014-04-30T00:17:33.045Z · LW(p) · GW(p)

Your countermeasure seems to recommend never reading fiction. Feature or bug?

comment by NancyLebovitz · 2014-04-29T08:26:17.441Z · LW(p) · GW(p)

Seth Roberts' last article

It was nice to know all that but I did wonder: Was I killing myself? Fortunately I could find out. A few months before my butter discovery, I had gotten a “heart scan” – a tomographic x-ray of my circulatory system. These scans are summarized by an Agatston score, a measure of calcification. Your Agatston score is the best predictor of whether you will have a heart attack in the next few years. After a year of eating a half stick of butter every day, I got a second heart scan. Remarkably, my Agatston score had improved (= less calcification), which is rare. Apparently my risk of a heart attack had gone down.

Some ambiguity about the Agatston score

Agatston's overview of his test

Replies from: brazil84
comment by brazil84 · 2014-04-29T14:09:42.531Z · LW(p) · GW(p)

Thank you for your post, which raises some interesting questions. Of course at this point it is not known if Roberts died of a heart attack, although the smart money is on a cardiovascular problem -- heart attack, stroke, aneurysm, etc.

The first question is whether the Agatston score is as good as it's made out to be by Doctor Agatston. Another question is whether it is skillful in the case of Roberts himself. Probably none of the people who were studied were eating half a stick of butter a day, along with lots of flax seeds, extra light olive oil, and who knows what else.

Replies from: V_V
comment by V_V · 2014-04-29T22:07:57.967Z · LW(p) · GW(p)

I'm not a doctor, but a quick search on Wikipedia turns up that the most common cause of sudden death in people over 30 is coronary artery atheroma (arteriosclerosis), but other common causes are genetically determined or at least have a significant genetic component. I suppose some of these are easier to detect (hypertrophic cardiomyopathy perhaps?), so we can probably rule them out for somebody like Roberts who constantly monitored his health and bragged about how healthy he was. Other conditions are probably more difficult to detect with standard tests.

Replies from: brazil84
comment by brazil84 · 2014-04-29T23:02:56.336Z · LW(p) · GW(p)

The puzzle has a lot of pieces missing, to be sure. Another question is whether Roberts was telling the whole truth about his health. Or about his diet, for that matter. It's not even out of the question that he had gained a lot of weight.

comment by V_V · 2014-04-29T21:28:57.659Z · LW(p) · GW(p)

So roughly speaking, Roberts had maybe a 50% chance of surviving from publishing his diet book to a ripe old age.

If his actuarial life expectancy was 80 and he had died at 79 it wouldn't have looked particularly suspicious. But according to your data, his probability of dying between 52 and 60 was only about 7.5%, which is not terribly low, but still enough to warrant reasonable doubt, especially considering the circumstances of his death.
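(A quick check of that figure, as a sketch in the same style as gwern's calculation, reusing the per-year SSA death probabilities -- here assumed bound to qs -- for ages 52 through 60:)

    dieBefore61 :: Double
    dieBefore61 = 1 - product (map (1 -) (take 9 qs))
    -- ~ 0.075, i.e. about 7.5%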

Replies from: brazil84, gwern
comment by brazil84 · 2014-05-04T17:51:22.103Z · LW(p) · GW(p)

But according to your data, his probability of dying between 52 and 60 was only about 7.5%, which is not terribly low, but still enough to warrant reasonable doubt, especially considering the circumstances of his death.

I think the more interesting question is the probability of a man in his age range (who is not obese; not a smoker; and has no serious self-reported history of health problems) suddenly collapsing and dying. I don't know the answer to this question, but it's a pretty unusual event.

By the way, here is a video of Seth Roberts speaking about his butter experiment a few years ago. He mentions that he eats half a stick of butter a day on top of his Omega-3 regimen. (And probably this is on top of daily consumption of raw olive oil.)

http://vimeo.com/14281896

At around 11:00, an apparent cardiologist concedes that the butter regimen may very well improve brain function, but he warns Roberts that he is risking clogging up the arteries in his brain and points out that Roberts's brain function won't be so great if he has a stroke. Roberts is pretty dismissive of the comment and points out that there is reason to believe the role of fat consumption in atherosclerosis is over-emphasized or mistaken.

Still, if someone suddenly collapses and dies, from what I understand it's usually a cardiovascular problem -- a blood clot, stroke, aneurysm, heart attack, internal bleeding, etc. And Roberts was consuming copious amounts of foods which are widely believed to have a big impact on the cardiovascular system.

It's silly to ignore this information when assessing probabilities. Here's an analogy: Suppose that Prince William has a newborn son and you are going to place a bet on what the child's name will be. You might reason that the most common male given name in the world is "Mohamed" and therefore the smart money is on "Mohamed." Of course you would lose your money.

The flaw in this type of reasoning is that when assessing probabilities, there is a requirement that you use all available information.

I imagine Gwern would respond that he is merely setting an upper bound. But that's silly and pointless too. If 90% of male children in Saudi Arabia are named "Mohamed," we can infer that the probability the Royal Baby will be named "Mohamed" does not exceed 90%. But so what? That's trivial.

comment by gwern · 2014-04-29T23:28:25.932Z · LW(p) · GW(p)

but still enough to warrant reasonable doubt, especially considering the circumstances of his death.

I disagree (reasonable doubt under what assumptions? in what model? can you translate this to p-values? would you take that p-value remotely seriously if you saw it in a study where n=1?), and I've already pointed out many systematic biases and problems with attempting to infer anything from Roberts's death.

Replies from: V_V, Douglas_Knight
comment by V_V · 2014-04-30T07:21:05.641Z · LW(p) · GW(p)

I'm not saying we can scientifically infer from his premature death that his diet was unhealthy.

I'm saying that his premature death is informal evidence that his diet at best didn't have a significant positive impact on life expectancy, and at worst was actively harmful. I can't quantify how much, but you were the one who attempted a quantitative argument, and I've just criticized your argument -- namely your strawman definition of "suspicious death" -- using your own data and assumptions, hence it seems odd that you now ask me for assumptions and p-values.

comment by Douglas_Knight · 2014-04-30T03:29:46.374Z · LW(p) · GW(p)

Isn't the p-value simply 100%-7.5%?

comment by ChristianKl · 2014-04-29T09:34:49.762Z · LW(p) · GW(p)

Edit: I suppose there is also an outside chance that this is a hoax. Has the death been reported in any newspapers?

Yes, fittingly, from Ryan Holiday: http://betabeat.com/2014/04/personal-science-pioneer-seth-roberts-passes-away/

But I don't think a normal newspaper would do more fact-checking than the people who read Seth's blog and comment on it.

comment by jaime2000 · 2014-05-01T14:27:34.339Z · LW(p) · GW(p)

I just graduated from FIU with a bachelor's in philosophy and a minor in mathematics. I'd like to thank my parents, God and Eliezer Yudkowsky (whose The Sequences I cited in each of the five papers I had to turn in during my final semester).

Replies from: Prismattic, shminux
comment by Prismattic · 2014-05-02T01:43:55.841Z · LW(p) · GW(p)

I can't tell whether the serial comma joke here is intentional.

comment by Shmi (shminux) · 2014-05-01T23:30:53.747Z · LW(p) · GW(p)

Grats! Hope you have a job lined up.

God and Eliezer Yudkowsky

Redundant? Mutually exclusive? I can't decide.

comment by Punoxysm · 2014-04-27T21:27:18.350Z · LW(p) · GW(p)

I have to say, I seriously don't get the Bayesian vs Frequentist holy wars. It seems to me the ratio of the debate's importance to the education of its participants is ridiculously low.

Bayesian and frequentist methods are sets of statistical tools, not sacred orders to which you pledge a blood oath. Just understand the usage of each tool, and the fact that virtually any model of something that happens in the real world is going to be misspecified.

Replies from: Oscar_Cunningham, Tenoke, Eugine_Nier, ChristianKl, Emile
comment by Oscar_Cunningham · 2014-04-27T21:45:16.701Z · LW(p) · GW(p)

It's because Bayesian methods really do claim to be more than just a set of tools. They are supposed to be universally applicable.

comment by Tenoke · 2014-04-27T21:38:34.366Z · LW(p) · GW(p)

I have to say, I seriously don't get the Bayesian vs Frequentist holy wars.

This is a bit of an exaggeration.

Additionally, you are only talking about the 'sets of statistical tools', whereas in my experience the bigger disagreement often lies in whether a person accepts that probabilities can be subjective or not. And yes, this does matter.

Replies from: Punoxysm
comment by Punoxysm · 2014-04-28T06:35:28.916Z · LW(p) · GW(p)

Can you please give an example of where the possible subjectivity of probabilities matters? I mean this in earnest.

Replies from: Tenoke
comment by Tenoke · 2014-04-28T06:55:40.504Z · LW(p) · GW(p)

'From my point of view the probability for X is Y, but from his point of view at the time it would've been Z' (subjective) vs. 'The probability for X is Y' (objective).

Honestly though, frequentists use subjective probabilities all the time and you can argue that frequentism is just as subjective as Bayesianism, so even that disagreement is quite muddy.

Replies from: Punoxysm
comment by Punoxysm · 2014-04-28T18:23:26.093Z · LW(p) · GW(p)

Can you be more concrete? When would this matter for two people trying to share a model and make predictions of future events?

comment by Eugine_Nier · 2014-05-02T04:14:59.187Z · LW(p) · GW(p)

Part of it is that Bayesianism claims to be not just a better statistical tool, but a new and better epistemology, a replacement and improvement over Aristotelian logic.

comment by ChristianKl · 2014-05-01T22:42:44.730Z · LW(p) · GW(p)

There are a bunch of issues involved. It's hard to speak about them because the term Bayesianism encompasses a wide array of ideas, and every time it's used it might refer to a different subset of that cluster of ideas.

Part of LW is that it's a place to discuss how an AGI could be structured. As such we care about the philosophical level of how you come to know that something is true, and there's an interest in going as basic as possible when looking at epistemology. There are issues about objective knowledge versus "subjective" Bayesian priors that are worth thinking about.

We live at a time where up to 70% of scientific research can't be replicated. Frequentism might not be to blame for all of that, but it does play its part. There are issues such as the Bem paper about porno-precognition, where frequentist techniques did suggest that porno-precognition is real but analysing Bem's data with Bayesian methods suggested it's not.

There are further issues in that a lot of additional assumptions are loaded into the word Bayesianism if you use that word on LessWrong. What Bayesianism taught me speaks about a bunch of issues that only indirectly have something to do with Bayesian tools vs. Frequentist tools.

Let's say I want to decide how much salt I should eat. I do follow the consensus that salt is bad and therefore have some prior that salt is bad. Then a new study comes along and says that low-salt diets are unhealthy. If I want to make good decisions I have to ask: how much should I update? There's no good formal way of making such decisions. We lack a good framework for doing this. Bayes rule is the answer to that problem that provides the promise of a solution. The solution of waiting a few years and then reading a meta review is unsatisfying.
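(To illustrate the kind of formal update being asked for -- a toy sketch with made-up numbers, not a claim about salt:)

    -- Bayes rule in odds form: posterior odds = prior odds * likelihood ratio.
    updateBelief :: Double -> Double -> Double
    updateBelief prior likelihoodRatio = odds / (1 + odds)
      where odds = (prior / (1 - prior)) * likelihoodRatio

    -- E.g. a 0.8 prior that salt is bad, and a study three times as likely
    -- if salt is harmless (likelihood ratio 1/3 for "bad"):
    -- updateBelief 0.8 (1/3) ~ 0.57 -- weakened, but far from reversed.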

In the absence of a formal way to do the reasoning, many people do use informal ways of updating on new evidence. Cognitive bias research suggests that the average person isn't good at this.

Just understand the usage of each tools, and the fact that virtually any model of something that happens in the real world is going to be misspecified.

That sentence is quite easy to say, but it effectively means there's no such thing as pure absolute objective truth. If you use tools A you get truth X and if you use tools B you get truth Y. Neither X nor Y is "more true". That's not an appealing conclusion to many people.

Replies from: IlyaShpitser, lmm, Punoxysm
comment by IlyaShpitser · 2014-05-02T14:23:43.337Z · LW(p) · GW(p)

Full disclosure: I have papers using B (on structure learning using BIC, which is an approximation to a posterior of a graphical model), and using F (on estimation of causal effects). I have no horse in this race.


Bayes rule is the answer to that problem that provides the promise of a solution.

See, this is precisely the kind of stuff that makes me shudder, that regularly appears on LW, in an endless stream. While Scott Alexander is busy bible thumping data analysts on his blog, people here say stuff like this.

Bayes rule doesn't provide shit. Bayes rule just says that p(A | B) p(B) = p(B | A) p(A).

Here's what you actually need to make use of info in this study:

(a) Read the study.

(b) See if they are actually making a causal claim.

(c) See if they are using experimental or observational data.

(d) Experimental? Do we believe the setup? Are we in a similar cohort? What about experimental design issues? Observational? Do they know what they are doing, re: causality-from-observational-data? Is their model that permits this airtight (usually it is not, see Scott's post on "adjusting for confounders". Generally to really believe that adjusting for confounders is reasonable you need a case where you know all confounders are recorded by definition of the study, for instance if doctors prescribe medicine based only on recorded info in the patient file).

(e) etc etc etc

I mean what exactly did you expect, a free lunch? Getting causal info and using it is hard.


p.s. If you are skeptical about statistics papers that adjust for confounders, you should also be skeptical about missing data papers that assume MAR (missing at random). It is literally the same assumption.

Replies from: ChristianKl
comment by ChristianKl · 2014-05-02T16:04:52.616Z · LW(p) · GW(p)

You might want to read a bit more precisely. I did choose my words when I said "promise of a solution" instead of "a solution".

In particular MetaMed speaks about wanting to produce a system of Bayesian analysis of medical papers. (Bayesian mathematical assessment of diagnosis)

I mean what exactly did you expect, a free lunch? Getting causal info and using it is hard.

You miss the point. When it comes to interviewing candidates for a job, we found out that unstructured human assessment doesn't work that well.

It could very well be that the standard unstructured way of reading papers is not optimal and that we should have Bayesian belief nets into which we plug numbers, such as whether the study is experimental or observational.

Whether MetaMed or someone else succeeds at that task and provides a good improvement on the status quo isn't certain but there are ideas to explore.

Is it clear that MetaMed as a group of self-professed Bayesians provides a useful service? Maybe, maybe not. On the other hand, the philosophy on which MetaMed operates is not the standard philosophy on which the medical establishment operates.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2014-05-02T16:33:41.883Z · LW(p) · GW(p)

I don't know how Metamed works (and it's sort of their secret sauce, so they probably will not tell us without an NDA). I am guessing it is some combination of doing (a) through (e) above for someone who cannot do it themselves, and possibly some B stats. Which seems like a perfectly sensible business model to me!

I don't think the secret sauce is in the B stats part of what they are doing, though. If we had a hypothetical company called "Freqmed" that also humanwaved (a) through (e), and then used F stats I doubt they would get non-sensible answers. It's about being sensible, not your identity as a statistician.


I can be F with Bayes nets. Bayes nets are just a conditional independence model.


I don't know how successful Metamed will be, but I honestly wish them the best of luck. I certainly think there is a lot of crazy out there in data analysis, and it's a noble thing to try to make money off of making things more sensible.


The thing is, I don't know about a lot of the things that get talked about on LW. I do know about B and F a little bit, and about causality a little bit. And a huge chunk of stuff people say is just plain wrong. So I tell them it's wrong, but they keep going and don't change what they say at all. So how should I update -- that folks in this rationalist community generally don't know what they are talking about and refuse to change?

It's like Wikipedia -- the first sentence of the article on confounders is wrong (there is a very simple 3-node example that violates that definition). The talk page on Bayesian networks is a multi-year tale of woe and ignorance. I once got into an edit war with a resident bridge troll for that article, and eventually gave up and left, because he had more time. What does that tell me about Wikipedia?

Replies from: ChristianKl, Lumifer
comment by ChristianKl · 2014-05-02T17:24:32.059Z · LW(p) · GW(p)

If we had a hypothetical company called "Freqmed"

But we don't. MetaMed did come out of a certain kind of thinking. The project had a motivation.

I do know about B and F a little bit, and about causality a little bit.

Just because you know what the people in the statistic community mean when they say "Bayesian" doesn't automatically mean that you know what someone on LW means when he says Bayesian.

If you look at the "What Bayesianism taught me" post, there's a person who changed their beliefs through learning about Bayesianism. Do the points he makes have something to do with Frequentism vs. Bayesianism? Not directly. On the other hand he did change major beliefs about how he thinks about the world and epistemology.

That means that the term Bayesianism as used in that article isn't completely empty.

It's about being sensible

Sensiblism might be a fun name for a philosophy. At the first LW meetup I attended, one of the participants had a scooter. My first question was about his traveling speed and how much time he effectively wins by using it. On that question he gave a normal answer.

My second question was about the accident rate of scooters. He replied something along the lines of: "I really don't know, I should research the issue more in depth and get the numbers." That's not the kind of answer normal people give when asked about the safety of their mode of travel.

You could say he's simply sensible while the 99% of the population out there who would answer the question differently isn't. On the other hand it's quite difficult to explain to those 99% that they aren't sensible. If you prod them a bit they might admit that knowing accident risks is useful for making a decision about one's mode of travel, but they don't update on a deep level.

Then people like you come and say: "Well of course we should be sensible. There's no need to point it out explicitly or to give it a fancy name. Being sensible should go without saying."

The problem is that in practice it doesn't go without saying, and speaking about it is hard. Calling it Bayesianism might be a very confusing way to speak about it, but it seems to be an improvement over having no words at all. Maybe tabooing Bayesianism as a word on LW would be the right choice. Maybe the word produces more problems than it solves.

It's like wikipedia -- the first sentence in the article on confounders is wrong on wikipedia.

"In statistics, a confounding variable (also confounding factor, a confound, or confounder) is an extraneous variable in a statistical model that correlates (directly or inversely) with both the dependent variable and the independent variable." is at the moment that sentence. How would you change the sentence? There no reason why we shouldn't fix that issue right now.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2014-05-02T17:39:05.228Z · LW(p) · GW(p)

How would you change it? There's no reason why we shouldn't fix that issue right now.

Counterexamples to a definition (the simple 3-node example I mentioned falls under your definition but is clearly not what we mean by confounder) are easier than a definition. A lot of analytic philosophy is about this. Defining "intuitive terms" is often not as simple as it seems. See, e.g.:

http://arxiv.org/abs/1304.0564

If you think you can make a "sensible" edit based on this paper, I will be grateful if you did so!


re: the rest of your post, words mean things. B is a technical term. I think if you redefine B as internal jargon for LW you will be incomprehensible to stats/ML people, and you don't want this. Communication across fields is hard enough as it is ("academic coordination problem"), let's not make it harder by not using standard terminology.

Maybe tabooing Bayesianism as a word on LW would be the right choice. Maybe the word produces more problems than it solves.

I am 100% behind this idea (and in general taboo technical terms unless you really know a lot about it).

Replies from: ChristianKl
comment by ChristianKl · 2014-05-02T17:48:36.107Z · LW(p) · GW(p)

Counterexamples to a definition are easier than a definition. See, e.g.:

But they don't solve the problem of Wikipedia being in your judgement wrong about this point.

re: the rest of your post, words mean things. B is a technical term.

If you look at the dictionary you will find that most words have multiple meanings. They also happen to evolve meaning over time.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2014-05-02T21:03:08.036Z · LW(p) · GW(p)

Let's see if I can precommit to not posting here anymore.

comment by Lumifer · 2014-05-02T16:47:17.828Z · LW(p) · GW(p)

It's about being sensible, not your identity as a statistician.

Speaking of, an interesting paper which distinguishes the Fisher approach to testing from the Neyman-Pearson approach and shows how you can unify/match some of that with Bayesian methods.

comment by lmm · 2014-05-05T18:20:57.151Z · LW(p) · GW(p)

We live at a time where up to 70% of scientific research can't be replicated. Frequentism might not be to blame for all of that, but it does play its part. There are issues such as the Bem paper about porno-precognition, where frequentist techniques did suggest that porno-precognition is real but analysing Bem's data with Bayesian methods suggested it's not.

It seems to me that there's a bigger risk from Bayesian methods. They're more sensitive to small effect sizes (doing a frequentist meta-analysis you'd count a study that got a p=0.1 result as evidence against; doing a Bayesian one it might be evidence for). If the prior isn't swamped then it's important and we don't have good best practices for choosing priors; if the prior is swamped then the Bayesianism isn't terribly relevant. And simply having more statistical tools available and giving researchers more choices makes it easier for bias to creep in.

Bayes' theorem is true (duh) and I'd accept that there are situations where Bayesian analysis is more effective than frequentist, but I think it would do more harm than good in formal science.

Replies from: gwern, Douglas_Knight
comment by gwern · 2014-05-06T02:44:26.294Z · LW(p) · GW(p)

doing a frequentist meta-analysis you'd count a study that got a p=0.1 result as evidence against

Why would you do that? If I got a p=0.1 result doing a meta-analysis, I wouldn't be surprised at all, since things like random-effects models mean it takes a lot of data to turn in a positive result at the arbitrary threshold of 0.05. And as it happens, in some areas an alpha of 0.1 is acceptable: for example, because of the poor power of tests for publication bias, you can find respected people like Ioannidis using that particular threshold (I believe I last saw that in his paper on the binomial test for publication bias).

If people really acted that way, we'd see an odd phenomenon where people saw successive meta-analyses on whether grapes cure cancer: 0.15 that grapes cure cancer (decreases belief that grapes cure cancer), 0.10 (decreases), 0.07 (decreases), someone points out that random-effects is inappropriate because studies show very low heterogeneity and the better fixed-effects analysis suddenly reveals that the true p-value is now at 0.05 (everyone's beliefs radically flip as they go from 'grapes have been refuted and are quack alt medicine!' to 'grapes cure cancer! quick, let's apply to the FDA under a fast track'). Instead, we see people acting more like Bayesians...

And simply having more statistical tools available and giving researchers more choices makes it easier for bias to creep in.

Is that a guess, or a fact based on meta-studies showing that Bayesian-using papers cook the books more than NHST users with p-hacking etc?

Replies from: gwern
comment by gwern · 2014-10-10T02:10:38.849Z · LW(p) · GW(p)

everyone's beliefs radically flip as they go from 'grapes have been refuted and are quack alt medicine!' to 'grapes cure cancer! quick, let's apply to the FDA under a fast track'

Turns out I am overoptimistic and in some cases people have done just that: interpreted a failure to reject the null (due to insufficient power, despite being evidence for an effect) as disproving the alternative in a series of studies which all point the same way, only changing their minds when an individually big enough study comes out. Hauer says this is exactly what happened with a series of studies on traffic mortalities.

(As if driving didn't terrify me enough, now I realize traffic laws and road safety designs are being engineered by vulgarized NHST practitioners who apparently don't know how to patch the paradigm up with emphasis on power or meta-analysis.)

comment by Douglas_Knight · 2014-05-13T06:53:50.830Z · LW(p) · GW(p)

doing a frequentist meta-analysis you'd count a study that got a p=0.1 result as evidence against

No. The most basic version of meta-analysis is, roughly, that if you have two p=0.1 studies, the combined conclusion is p=0.01.
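(One standard way to make this precise is Fisher's method, sketched below; the function name is an assumption. For two p=0.1 studies it gives a combined p of about 0.056 rather than the naive product 0.01, but the direction of the point stands: two same-direction p=0.1 results combine into stronger evidence, not evidence against.)

    -- Fisher's method for k independent p-values: -2 * sum of log p_i is
    -- chi-squared with 2k degrees of freedom. For k = 2 (df = 4) the
    -- chi-squared survival function has the closed form e^(-x/2) * (1 + x/2).
    fisherTwo :: Double -> Double -> Double
    fisherTwo p1 p2 = exp (-x / 2) * (1 + x / 2)
      where x = -2 * (log p1 + log p2)

    -- fisherTwo 0.1 0.1 ~ 0.056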

comment by Punoxysm · 2014-05-02T01:49:14.610Z · LW(p) · GW(p)

To all your points about the overloading of "Bayesian", fair enough. I guess I just don't see why that overloading is necessary.

We lack a good framework for doing this. Bayes rule is the answer to that problem that provides the promise of a solution. The solution of waiting a few years and then reading a meta review is unsatisfying.

Sure, Bayes rule provides a formalization of updating beliefs based on evidence, but you can still be dead wrong. In particular, setting a prior on any given issue isn't enough. You have to be prepared to update on evidence of the form "I am really bad at setting priors". And really, priors are just a (possibly arbitrary) way of digesting existing evidence. Sometimes they can be very useful (avoiding privileging the hypothesis) but sometimes they are quite arbitrary.

There are issues such as the Bem paper about porno-precognition, where frequentist techniques did suggest that porno-precognition is real but analysing Bem's data with Bayesian methods suggested it's not.

According to the Slate Star Codex article, Bem's results stand up to Bayesian analysis quite well (that is, they have a strong Bayes factor). The only exception he mentioned was "I begin with a very low prior for psi phenomena, and a higher prior for the individual experiments and meta-analysis being subtly corrupt"; but there's nothing especially helpful about this in actually fixing the experimental design and meta-analysis.

Part of LW is that it's a place to discuss how an AGI could be structured. As such we care about the philosophical level of how you come to know that something is true, and there's an interest in going as basic as possible when looking at epistemology.

How you get from AGI to epistemology eludes me. As long as the AGI can accurately model its interactions with the environment, that's really all it needs (or can hope) to do.

That sentence is quite easy to say, but it effectively means there's no such thing as pure absolute objective truth. If you use tools A you get truth X and if you use tools B you get truth Y. Neither X nor Y is "more true". That's not an appealing conclusion to many people.

One of them is more useful for prediction and inference. They can guide you towards observing mechanisms useful for future hypothesis generation. That's all you can hope for. Especially in the case of "are low-salt diets healthy". A "Yes" or "No" to that question will never be truthful, because "health" and "for what segments of the population" and "in conjunction with what other lifestyle factors" are left underspecified. And you'll never get rid of the kernel of doubt that the low-sodium lobby has been the silent force behind all the anti-salt research this whole time.

The best you can do is provide enough evidence that anyone who points out that your hypothesis is not the truth can be reasonably called a pedant or conspiracy theorist, but not 100% guaranteed wrong.

As you might see, I am a fan of the idea of Dissolving epistemology.

comment by Emile · 2014-04-28T11:47:42.590Z · LW(p) · GW(p)

Can you point to examples of these "holy wars"? I haven't encountered something I'd describe like that, so I don't know if we've been seeing different things, or just interpreting it differently.

To me it looks like a tension between a method that's theoretically better but not well-established, and a method that is not ideal but more widely understood so more convenient - a bit like the tension between the metric and imperial systems, or between flash and html5.

Replies from: sixes_and_sevens, IlyaShpitser, satt
comment by sixes_and_sevens · 2014-04-28T12:59:44.394Z · LW(p) · GW(p)

The term "holy war" or "religious war" is often used to describe debates where people advocate for a side with an intensity disproportionate to the stakes, (e.g. the proper pronunciation of "gif", vi vs. emacs, surrogate vs. natural primary keys in the RDBM). That's how I read the OP, and it's fitting in context.

Replies from: Emile
comment by Emile · 2014-04-28T15:34:21.689Z · LW(p) · GW(p)

Sure, I'm just not sure which debates he's referring to ... is it on LessWrong? Elsewhere?

comment by IlyaShpitser · 2014-05-01T10:06:34.695Z · LW(p) · GW(p)

To me it looks like a tension between a method that's theoretically better


It's because Bayesian methods really do claim to be more than just a set of tools. They are supposed to be universally applicable.


[etc.]

Ugh. Here is a good heuristic:

"Not in stats or machine learning? Stop talking about this."

Replies from: Emile
comment by Emile · 2014-05-01T21:22:14.252Z · LW(p) · GW(p)

Dude, I'm being genuinely curious about what "holy wars" he's talking about. So far I got:

  • a definition of "holy war" in this context
  • a snotty "shut up, only statisticians are allowed to talk about this topic"

... but zero actual answers, so I can't even tell if he's talking about some stupid overblown bullshit, or if he's just exaggerating what is actually a pretty low-key difference in opinion.

Replies from: VincentYu, ChristianKl, Jayson_Virissimo
comment by VincentYu · 2014-05-02T01:37:19.953Z · LW(p) · GW(p)

A "holy war" between Bayesians and frequentists exists in the modern academic literature for statistics, machine learning, econometrics, and philosophy (this is a non-exhaustive list).

Bradley Efron, who is arguably the most accomplished statistician alive, wrote the following in a commentary for Science in 2013 [1]:

The term "controversial theorem" sounds like an oxymoron, but Bayes' theorem has played this part for two-and-a-half centuries. Twice it has soared to scientific celebrity, twice it has crashed, and it is currently enjoying another boom. The theorem itself is a landmark of logical reasoning and the first serious triumph of statistical inference, yet is still treated with suspicion by most statisticians. There are reasons to believe in the staying power of its current popularity, but also some signs of trouble ahead.

[...]

Bayes' 1763 paper was an impeccable exercise in probability theory. The trouble and the subsequent busts came from overenthusiastic application of the theorem in the absence of genuine prior information, with Pierre-Simon Laplace as a prime violator. Suppose that in the twins example we lacked the prior knowledge that one-third of twins are identical. Laplace would have assumed a uniform distribution between zero and one for the unknown prior probability of identical twins, yielding 2/3 rather than 1/2 as the answer to the physicists' question. In modern parlance, Laplace would be trying to assign an "uninformative prior" or "objective prior", one having only neutral effects on the output of Bayes' rule. Whether or not this can be done legitimately has fueled the 250-year controversy.

Frequentism, the dominant statistical paradigm over the past hundred years, rejects the use of uninformative priors, and in fact does away with prior distributions entirely. In place of past experience, frequentism considers future behavior. An optimal estimator is one that performs best in hypothetical repetitions of the current experiment. The resulting gain in scientific objectivity has carried the day, though at a price in the coherent integration of evidence from different sources, as in the FiveThirtyEight example.

The Bayesian-frequentist argument, unlike most philosophical disputes, has immediate practical consequences.

In another paper published in 2013, Efron wrote [2]:

The two-party system [Bayesian and frequentist] can be upsetting to statistical consumers, but it has been a good thing for statistical researchers — doubling employment, and spurring innovation within and between the parties. These days there is less distance between Bayesians and frequentists, especially with the rise of objective Bayesianism, and we may even be heading toward a coalition government.

The two philosophies, Bayesian and frequentist, are more orthogonal than antithetical. And of course, practicing statisticians are free to use whichever methods seem better for the problem at hand — which is just what I do.

Thirty years ago, Efron was more critical of Bayesian statistics [3]:

A summary of the major reasons why Fisherian and NPW [Neyman-Pearson-Wald] ideas have shouldered Bayesian theory aside in statistical practice is as follows:

  1. Ease of use: Fisher’s theory in particular is well set up to yield answers on an easy and almost automatic basis.
  2. Model building: Both Fisherian and NPW theory pay more attention to the preinferential aspects of statistics.
  3. Division of labor: The NPW school in particular allows interesting parts of a complicated problem to be broken off and solved separately. These partial solutions often make use of aspects of the situation, for example, the sampling plan, which do not seem to help the Bayesian.
  4. Objectivity: The high ground of scientific objectivity has been seized by the frequentists.

None of these points is insurmountable, and in fact, there have been some Bayesian efforts on all four. In my opinion a lot more such effort will be needed to fulfill Lindley’s prediction of a Bayesian 21st century.

The following bit of friendly banter in 1965 between M. S. Bartlett and John W. Pratt shows that the holy war was ongoing 50 years ago [4]:

Bartlett: I am not being altogether facetious in suggesting that, while non-Bayesians should make it clear in their writings whether they are non-Bayesian Orthodox or non-Bayesian Fisherian, Bayesians should also take care to distinguish their various denominations of Bayesian Epistemologists, Bayesian Orthodox and Bayesian Savages. (In fairness to Dr Good, I could alternatively have referred to Bayesian Goods; but, oddly enough, this did not sound so good.)

Pratt: Professor Bartlett is correct in classifying me a Bayesian Savage, though I might take exception to his word order. On the whole, I would rather be called a Savage Bayesian than a Bayesian Savage. Of course I can quite see that Professor Bartlett might not want to admit the possibility of a Good Bayesian.

For further reading I recommend [5], [6], [7].

[1]: Efron, Bradley. 2013. “Bayes’ Theorem in the 21st Century.” Science 340 (6137) (June 7): 1177–1178. doi:10.1126/science.1236536.

[2]: Efron, Bradley. 2013. “A 250-Year Argument: Belief, Behavior, and the Bootstrap.” Bulletin of the American Mathematical Society 50 (1) (April 25): 129–146. doi:10.1090/S0273-0979-2012-01374-5.

[3]: Efron, B. 1986. “Why Isn’t Everyone a Bayesian?” American Statistician 40 (1) (February): 1–11. doi:10.1080/00031305.1986.10475342.

[4]: Pratt, John W. 1965. “Bayesian Interpretation of Standard Inference Statements.” Journal of the Royal Statistical Society: Series B (Methodological) 27 (2): 169–203. http://www.jstor.org/stable/2984190.

[5]: Senn, Stephen. 2011. “You May Believe You Are a Bayesian but You Are Probably Wrong.” Rationality, Markets and Morals 2: 48–66. http://www.rmm-journal.com/htdocs/volume2.html.

[6]: Gelman, Andrew. 2011. “Induction and Deduction in Bayesian Data Analysis.” Rationality, Markets and Morals 2: 67–78. http://www.rmm-journal.com/htdocs/volume2.html.

[7]: Gelman, Andrew, and Christian P. Robert. 2012. "'Not Only Defended but Also Applied': The Perceived Absurdity of Bayesian Inference." arXiv (June 28).

comment by ChristianKl · 2014-05-01T22:42:31.389Z · LW(p) · GW(p)

Ilya responded to your second paragraph, not the first one. Metric vs. imperial or flash vs. html5 are not good analogies.

comment by Jayson_Virissimo · 2014-05-01T21:43:53.409Z · LW(p) · GW(p)

Dude, I'm being genuinely curious about what "holy wars" he's talking about.

For lots of "holy war" anecdotes, see The Theory That Would Not Die by Sharon Bertsch McGrayne.

...I can't even tell if he's talking about some stupid overblown bullshit, or if he's just exaggerating what is actually a pretty low-key difference in opinion.

Do you consider personal insults, accusations of fraud, or splitting academic departments along party lines to be "a pretty low-key difference in opinion"? If so, then it is "overblown bullshit," otherwise it isn't.

comment by satt · 2014-05-02T02:02:29.698Z · LW(p) · GW(p)

Can you point to examples of these "holy wars"? I haven't encountered something I'd describe like that, so I don't know if we've been seeing different things, or just interpreting it differently.

Various bits of Jaynes's "Confidence intervals vs Bayesian intervals" seem holy war-ish to me. Perhaps the juiciest bit (from pages 197-198, or pages 23-24 of the PDF):

I first presented this result to a recent convention of reliability and quality control statisticians working in the computer and aerospace industries; and at this point the meeting was thrown into an uproar, about a dozen people trying to shout me down at once. They told me, "This is complete nonsense. A method as firmly established and thoroughly worked over as confidence intervals can't possibly do such a thing. You are maligning a very great man; Neyman would never have advocated a method that breaks down on such a simple problem. If you can't do your arithmetic right, you have no business running around giving talks like this".

After partial calm was restored, I went a second time, very slowly and carefully, through the numerical work [...] with all of them leering at me, eager to see who would be the first to catch my mistake [...] In the end they had to concede that my result was correct after all.

To make a long story short, my talk was extended to four hours (all afternoon), and their reaction finally changed to: "My God – why didn't somebody tell me about these things before? My professors and textbooks never said anything about this. Now I have to go back home and recheck everything I've done for years."

This incident makes an interesting commentary on the kind of indoctrination that teachers of orthodox statistics have been giving their students for two generations now.

comment by Shmi (shminux) · 2014-04-29T20:05:31.895Z · LW(p) · GW(p)

The Amanda Knox prosecution saga continues: if the original motive does not hold, deny the need for a motive.

comment by Gunnar_Zarncke · 2014-04-27T21:47:43.702Z · LW(p) · GW(p)

During our Hamburg Meetup we discussed selection pressure on humans. We agreed that there is almost no pressure on mutations affecting health in general, due to medicine. But we agreed that there is tremendous pressure around contraception. We identified four ways evolution works around contraception. We discussed what effects this could have on the future of society. The movie Idiocracy was mentioned. This could be a long-term (a few generations) existential risk.

The four ways evolution works around contraception:

  • Biological factors. Examples are hormones compensating for the contraceptive effects of the pill, or allergies to condoms. These are easily recognized, measured, and countered by the much faster-operating pharma industry. There are also few ethical issues with this.

  • Subconscious mental factors. Factors mostly leading to non- or mis-use of contraception. Examples are carelessness, impulsiveness, fear, and insufficient understanding of how to use contraceptives. These are what some fear will lead to collective stultification. There are ethical injunctions against 'curing' these factors even where it is medically/therapeutically possible.

  • Conscious mental factors. Factors leading to explicit family planning, e.g. children/family as terminal goals. These lead to a conscious use of contraception. The effect is less pronounced but likely leads to healthy and better-educated children. These are actively encouraged, but my personal impression is that this is less an area susceptible to education (because it depends on one's terminal goals).

  • Group selection factors. These are factors favoring groups which collectively have more children. The genetic effects are likely weak here but the memetic effects are strong. Cultures with social norms against contraception or for large families are likely to out-birth other groups.

Any mistakes? Do you agree? Are we missing something?

EDIT: Fixed link, typos

Replies from: Metus, Izeinwinter
comment by Metus · 2014-04-27T23:26:11.242Z · LW(p) · GW(p)

Group selection factors. These are factors favoring groups which collectively have more children. The genetic effects are likely weak here but the memetic effects are strong. Cultures with social norms against contraception or for large families are likely to out-birth other groups.

These will by far be the strongest. See for example the birth rates of religious people versus anyone else.

comment by Izeinwinter · 2014-04-28T05:54:18.436Z · LW(p) · GW(p)

These discussions all have the same problem: they misapprehend how slow evolution is. Long before any such selection can take place, the human genome is going to get rewritten end to end by deliberate technological intervention. Or heck, people will just stop dying - universal survival means no selection.

This means that the only thing that matters for the persistence of any human trait is how much it is valued. Uhm. Including how much it is valued by already-modified humans. A few rounds of iteration on that theme and I can guarantee at least one thing about future humanity: They will be one hundred percent satisfied with their physical incarnation. (because otherwise, it'd get changed.)

Think it through. How long do you think it will take before we master genetic engineering and decide to use it? 50 years? 500? 5000? Because at datum 5000, evolution will have done bugger-all to the genome. I mean, lactose tolerance might be a bit more common... but overall? Tech is fast. Social and legal change is slower, but compared to evolution? Blindingly fast. And these are some weak-sauce selective pressures. Most people do have kids. Failing at contraception does not shift the lifetime number of children reliably upwards; it just fucks you over economically. And kids are expensive.

Replies from: bramflakes, Gunnar_Zarncke
comment by bramflakes · 2014-04-28T08:29:13.274Z · LW(p) · GW(p)

Small changes to genotype don't imply small changes to phenotype.

comment by Gunnar_Zarncke · 2014-04-28T07:00:37.103Z · LW(p) · GW(p)

Evolution is slow. It takes generations. Depending on the selection pressure these may be quite few. Assume sexual drive were the only determining factor for reproductive fitness (which probably is a good approximation for some animals) and you introduce a 95% successful 'contraception' (e.g. a genetic modification to avoid reproduction - this has been done for mosquitoes), and guess how many generations it takes to work around it. Now humans use 95% reliable contraceptives - but their usage is regulated by complex processes, so no simple analysis suffices (just think of the misinterpretation of the baby-bust/pill-gap).
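
To make "how many generations" concrete, here is a minimal sketch (Python; every fitness number is a made-up illustration, not an estimate) of a deterministic one-locus selection model for a hypothetical allele that defeats a 95%-effective contraceptive:

```python
# Toy one-locus haploid selection model. All numbers are illustrative
# assumptions: the resistant allele has relative fitness 1.0, the baseline
# allele 0.05 (i.e. only the 5% contraceptive failure rate).

def generations_to_majority(p0=0.001, w_resistant=1.0, w_baseline=0.05):
    """Generations until the resistant allele exceeds 50% frequency."""
    p, gen = p0, 0
    while p < 0.5:
        mean_fitness = p * w_resistant + (1 - p) * w_baseline
        p = p * w_resistant / mean_fitness  # standard replicator update
        gen += 1
    return gen

print(generations_to_majority())  # -> 3 generations under these assumptions
```

Under selection that extreme, takeover happens within a handful of generations - though with human generation times even that is the better part of a century, and real pressures are far weaker.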

Additionally we don't have to limit ourselves to genetic evolution. We could also consider memetic evolution - the one invoked somewhat imprecisely in point 4. Memes evolve faster. It could happen that meme-complexes combining birth control and anti-science out-breed progress within a few generations.

Sure, after 500 years we'd likely have the technological means - if anyone is still interested in technology then. And for some, 500 may be a more likely date than 50.

Replies from: Izeinwinter
comment by Izeinwinter · 2014-04-28T07:42:37.173Z · LW(p) · GW(p)

It takes many generations. Human generations are quite long.

Without a technological civilization, the old-time pressures of hunger and violence will dominate everything else - which in some ways favors various means of birth control, because having 6 kids and having all of them die due to splitting available resources too many ways is not a successful strategy. Therefore, your projection only makes sense in a continuing technological civilization, in which case engineering happens.

And again: most people have kids. Successful use of birth control allows you to control the timing and number of said kids; the mosquito analogy holds no water whatsoever. If you want to model the selective advantages/disadvantages of this, you are going to need extensive real world data over generations and a computing model projecting forward, and you would still be making stuff up.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-04-28T08:37:27.200Z · LW(p) · GW(p)

Therefore, your projection only makes sense in a continuing technological civilization, in which case engineering happens.

Agreed. But the speed of technology is estimated quite variably. And at least currently there are already ethical (read: memetic) constraints on applying technology to reproduction. So one could argue that the selection pressure is already doing its work.

you are going to need extensive real world data [for] projecting forward.

Agreed. What do you propose? Assuming it's too complicated to contemplate?

Replies from: Izeinwinter
comment by Izeinwinter · 2014-04-28T10:12:40.076Z · LW(p) · GW(p)

.... Yes. I mean, if you want to do a PhD's worth of work, there are existing datasets one could mine - but the time horizon (since the legalization of birth control) is so short, and the social context regarding reproduction has been shifting so heavily during this period, that any predictions you make would end up being barely guesses. Fortunately, the subset of plausible futures in which this matters is absurdly small. The world would essentially have to enter into technological and social stasis for many thousands of years, and well. Uhm. No.

The Marching Morons has a lot to answer for, really, since variations on it crop up like weeds, and it is a pretty absurd scenario.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-04-28T10:36:42.252Z · LW(p) · GW(p)

The Marching Morons has a lot to answer for, really, since variations on it crop up like weeds,

This is kind of a relevant argument, because it means this - despite my non-political phrasing - is really a political topic: the opinion-coalition effects are possibly much stronger than any solid predictions to be had. Or to rephrase: any actual biological effect is outweighed by memetic effects.

it is a pretty absurd scenario.

I beg to differ.

comment by cousin_it · 2014-04-29T19:05:10.741Z · LW(p) · GW(p)

When I'm procrastinating on a project by working on another, sexier project, it feels exactly like a love triangle where all three participants are inside my head, with all the same pleading, promises and infidelities. I wish that told us something new about procrastination or love!

comment by NancyLebovitz · 2014-05-03T15:55:08.883Z · LW(p) · GW(p)

A statistical look at whether bike helmets make sense-- concludes that there are some strong arguments against requiring bike helmets, and that drivers give less room to cyclists wearing helmets.

comment by Tenoke · 2014-04-27T21:19:32.471Z · LW(p) · GW(p)

I am repeating myself so much, but..

Why is this posted a day early (when the prior thread was posted earlier than it should've been solely so they could start on Monday)? And way more importantly, why is the open_thread tag not there? Can you please at least include it (it is important for the sidebar functionality, among other things)?

I often rant about people posting the open threads incorrectly when there is so little to it (posting it after the previous one is over, making it last 7 days, and adding a simple tag), but this is the 3rd OT posted this week. People specifically go out of their way to post an Open Thread and then don't even do the minimum. I was suspicious that people actually do it for the 3-4 karma they get, but Nancy has 18.8k, so this is likely not the reason.

I am sorry if I sound rude in any way, but can someone please explain this phenomenon to me? I notice that I am confused.

Replies from: Robin_Hartell, pinyaka, Metus
comment by Robin_Hartell · 2014-04-30T12:54:51.610Z · LW(p) · GW(p)

How much appetite would there be for shorter windows on the Open Threads? Having one every 4 days would have some advantages:

  • The material would be spread out more evenly through time, rather than having the majority of the posts in the first few days of a new thread
  • The start day would vary over time

Replies from: Tenoke
comment by Tenoke · 2014-04-30T13:25:10.511Z · LW(p) · GW(p)

I am not sure if you are aware, but the OTs were a monthly rather than weekly event for years until a few months ago.

At any rate, I am generally not convinced that a further reduction is needed: 1. it might make people even more confused as to when to post them, 2. it generally becomes more 'spammy', and 3. the benefits of a further reduction are questionable.

comment by pinyaka · 2014-04-29T17:55:14.386Z · LW(p) · GW(p)

I created one of the other incorrect OT threads because the post date of the correct one didn't match the expiration date of the previous one (so quickly eyeballing posts at that date in the Discussion post list didn't turn it up) and searching via the box for "Open Thread" didn't return the current one in the first page of results (although it did turn up the previous one almost at the top of the list). I didn't use the tag because I didn't know that I was supposed to. There isn't really an obvious set of rules or standards for creating open threads, so I did what it seemed like other people do and just created one when it looked like there wasn't a current one, copying the text from the previous one for the body and choosing the seven day range following the previous OT thread (hence the title saying 23 April, even though I posted on the 24th).

I notice now that the latest open thread is on the sidebar. I looked for that specifically and didn't see one, so that will help people who go through the same process as me. (ETA: I notice that it's not on the sidebar from every page. This is what the sidebar currently looks like from my profile and does not include a link to the current OT. It would not have occurred to me to look at the sidebar from multiple pages. Same goes for the current Rationality Diary, and for some reason the latest Rationality Quotes is only on the sidebar on my profile page, not the discussion list or in this thread)

If you're really concerned about starting on Monday and using the tag, perhaps adding a set of instructions on creating new OT posts in the body of the OT post for people to copy and paste will be helpful.

Final Edit: The search result page sidebar also does not include the latest OT or rationality diary and that is likely the page whose sidebar I would have checked.

comment by Metus · 2014-04-27T23:27:19.659Z · LW(p) · GW(p)

People want to say something and think that it will drown out in the previous open thread. So they do their own.

Replies from: Tenoke
comment by Tenoke · 2014-04-28T06:07:08.877Z · LW(p) · GW(p)

This does not explain the current thread, as Nancy did not post anything in it initially (but seems to be a big part of it).

comment by NancyLebovitz · 2014-05-04T13:06:16.157Z · LW(p) · GW(p)

Why engineering hours should not be viewed as fungible-- increasing speed/preventing bottlenecks is important enough to be worth investing in. An example of how to be utilitarian without being stupid about it.

Any recommendations for discussions of how to figure out what's important to measure?

comment by David_Gerard · 2014-05-04T09:27:51.365Z · LW(p) · GW(p)

HELP WANTED: I recall that it is highly questionable that consciousness is even continuous. We feel like it is, but (as you know) we have considerable experimental evidence that your "consciousness" thinks things well after you've decided to do them. I can't find it, but I recall a result that says that "consciousness" is a story your brain tells itself after-the-fact, in bursts between gaps of obliviousness. (This also dissolves "quantum immortality".) Does anyone know about this one?

Replies from: Risto_Saarelma, bramflakes
comment by Risto_Saarelma · 2014-05-04T12:57:46.815Z · LW(p) · GW(p)

Don't remember an exact result like that, but that did remind me of Dennett's Consciousness Explained, which had stuff about the brain doing all sorts of after-the-fact rewriting of sensory inputs to create a single narrative that presents itself as the conscious experience.

Replies from: David_Gerard
comment by David_Gerard · 2014-05-04T21:17:33.850Z · LW(p) · GW(p)

That's probably what I'm thinking of. Thank you to you and bramflakes!

comment by bramflakes · 2014-05-04T12:55:19.607Z · LW(p) · GW(p)

I think Dennett talks about it a lot, possibly in Consciousness Explained, but I don't know whether those experiments had been done in 1991.

comment by niceguyanon · 2014-04-30T20:44:46.311Z · LW(p) · GW(p)

The 135-degree sitting position seems popular these days, but sometimes the chair you are sitting in cannot recline.

So if you must sit in a non-reclining chair, is it better to sit upright on the edge of your seat with your knees bent at 45 degrees and a hip angle of 135 degrees, or to sit in a relaxed upright position using the back support at a roughly 90-degree hip angle?

Replies from: tut
comment by tut · 2014-05-01T18:01:19.573Z · LW(p) · GW(p)

It is better to sit with your entire back against the backrest. What angle to set the backrest to is a second-order optimization: the ideal angle is not the same for everyone, and if you sit for more than an hour at a time it is better to change the angle now and then than to leave it at any given angle.

comment by Suryc11 · 2014-04-30T03:29:58.629Z · LW(p) · GW(p)

This is a really great take on why the use of privilege-based critique in (often leftist) public discourse is flawed:

http://harvardpolitics.com/united-states/privilege-leftist-critique-left/?fb_action_ids=10152177872632732&fb_action_types=og.likes

(Tl;dr: it's both malicious, because it resorts to using essential features of interlocutors against them--ie, quasi-ad hominems--and fallacious, because it fails to explain why the un(der)-privileged can offer arguments that work against their own interests.)

Replies from: V_V, ChristianKl, NancyLebovitz
comment by V_V · 2014-04-30T11:39:50.296Z · LW(p) · GW(p)

I'd always thought that using 'privilege' arguments was the plain and simple ad hominem fallacy.

Replies from: NancyLebovitz, David_Gerard
comment by NancyLebovitz · 2014-04-30T14:59:57.930Z · LW(p) · GW(p)

To the extent that privilege claims are about ignorance, I think they're likely to have a point. To the extent that they're a claim that some people are guaranteed to be wrong, they're ad hominem.

Replies from: fubarobfusco, V_V, Suryc11
comment by fubarobfusco · 2014-04-30T17:12:01.662Z · LW(p) · GW(p)

One really common case is when person A says something to the effect of, "I don't see why B people don't do X instead of complaining about fooism" — but X is an action that is (relatively easily) available to person A, but is systematically unavailable to B people. (And sometimes because of fooism.)

Or, X has been tried repeatedly in the history of B people, and has failed; but A doesn't know that history.

Or, X is just ridiculously expensive (in money/time/energy) and B people are poor/busy/tired, or otherwise ill-placed to implement it.

Or, X is an attempt to solve the wrong problem, but A doesn't have the practical experience to distinguish the actual problem from the situation at hand — A may be pattern-matching a situation into the wrong category.

Some of this post could totally be rephrased as being about "non-depressed-person privilege", but the author doesn't write like that.

Replies from: V_V
comment by V_V · 2014-04-30T19:29:12.259Z · LW(p) · GW(p)

Then the correct response is to point out that X is hard/impractical/ineffective, supporting your point with evidence or plausible arguments.

Asserting to know better because of your incommunicable personal experience, quite possibly affected by confirmation bias and whatnot, is not a way of arguing; it is a way of refusing to engage in intellectual discussion.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-05-01T11:05:42.554Z · LW(p) · GW(p)

I can imagine people frustrated from having to explain the same concept online for the hundredth time; always to someone else; often to people who genuinely don't know, but sometimes to trolls. That's the moment where people are likely to point to a FAQ. That's why we have the Sequences here. Etc.

The problem is that the FAQ (or the Sequences) usually do contain the full explanation, and sometimes even a place where that specific explanation can be debated. But the sentence "check your privilege" does not. It is not replacing hundreds of explanations with one, but hundreds of explanations with zero.

(Sure, I could google what "privilege" means, but then I'd get dozens of explanations, sometimes mutually contradictory. And I don't know which of the versions the person had in mind. Or it can say that privilege means X or Y or Z, and it may seem to me that neither applies to what I have said, and I don't know which one of them was supposed to apply to me. -- As a loose analogy, it is better to link people to a specific article in the Sequences, than to Sequences as a whole.)

I guess the solution would be to write a good "Privilege FAQ". One written by a rational person, which would explain ways how to use it but also how to not use it, encourage people to link to specific subsections of it, and perhaps contain a short commentary to the most frustratingly repeated specific misunderstandings.

(Problem is, creating a good FAQ is hard work, and it may not be the same fun as bullying random people online. -- This applies to internet debates in general, not just specifically about privilege.)

comment by V_V · 2014-04-30T19:35:00.918Z · LW(p) · GW(p)

To the extent that privilege claims are about ignorance,

Of course it is quite possible that people from certain backgrounds may tend to be ignorant about certain facts, but then when they say something factually incorrect in a public discussion, the correct answer is to just correct their errors with evidence and plausible arguments.
Saying "you are privileged" at best adds no information and sets a hostile tone; at worst, if you can't support your point with communicable evidence or plausible arguments, it is an ad hominem.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-04-30T20:02:48.634Z · LW(p) · GW(p)

As I understand it, a problem the privilege model is designed to address is people who are ignorant of important difficulties, and unwilling to listen. "Privilege" raises the temperature enough to get some people to bend. Of course, psycho-chemistry being what it is, it gets other people to become more rigid, to melt down, or to explode.

Replies from: Eugine_Nier, taelor
comment by Eugine_Nier · 2014-05-06T02:32:52.215Z · LW(p) · GW(p)

"Privilege" raises the temperature enough to get some people to bend.

In a way that has no reason to correlate with the truth of the issue under discussion.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-05-06T12:46:00.141Z · LW(p) · GW(p)

None of the typical reactions to "privilege" are reliably related to the truth of the matter.

comment by taelor · 2014-05-02T03:27:00.851Z · LW(p) · GW(p)

That's another problem with overuse of the "privilege" concept: the more people throw it around, the less punch it packs.

comment by Suryc11 · 2014-04-30T16:54:46.348Z · LW(p) · GW(p)

Agreed, that's a great way of putting it.

comment by David_Gerard · 2014-05-04T09:36:01.028Z · LW(p) · GW(p)

There's a difference between a logical fallacy and a Bayesian fallacy. Most logical fallacies evolved into human thinking because often enough they in fact constituted Bayesian evidence; e.g., authorities on a subject often know what the hell they're talking about.

Replies from: V_V
comment by V_V · 2014-05-04T10:18:20.704Z · LW(p) · GW(p)

Sure, many informal fallacies derive from useful heuristics. The problem occurs when these heuristics are used as hard rules, especially when dismissing criticism.

For instance, the typical 'privilege' argument is: "You are white/male/heterosexual/cisgender/educated/upper class/attractive/fit/neurotypical, therefore your arguments about non-white/female/gay/transgender/uneducated/working class/unattractive/fat/neuroatypical people are wrong."
It is reasonable that people with certain life experiences may have difficulties understanding the issues of people with different life experiences, but this doesn't mean that you need to share life experiences in order to make an informed argument. The "therefore you are wrong" part of the privilege rebuttal is a fallacy.

Replies from: Eugine_Nier, NancyLebovitz, David_Gerard
comment by Eugine_Nier · 2014-05-08T00:29:41.918Z · LW(p) · GW(p)

It is reasonable that people with certain life experiences may have difficulties understanding the issues of people with different life experiences

Notice that this steelmanning of 'privilege' is completely symmetrical, i.e., an "unprivileged" person would have the same problems with respect to the "privileged" person as conversely. Given that this "steelman" has no connection to the common use of the word "privilege", the question arises of why that word is being used at all. The answer, I suspect, is in order to sneak in the connotations from the regular meaning of the word "privilege".

Replies from: NancyLebovitz, pragmatist
comment by NancyLebovitz · 2014-05-10T00:22:21.457Z · LW(p) · GW(p)

The more power you have, the more damage you can do through ignorance.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-05-10T05:31:57.887Z · LW(p) · GW(p)

Do you mean individual or collective power? Individually the average poor citizen may not have much power, but collectively they can do stupid things like voting for the candidate promising to "make the rich pay their 'fair share' ".

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-05-10T13:19:32.881Z · LW(p) · GW(p)

I think the privilege model is neither completely true nor completely false, and one of the ways it falls down is that it's framed as absolute about members of groups (and according to a static list) rather than being about a statistical tilt.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-05-13T01:22:24.411Z · LW(p) · GW(p)

The problem is, as I mentioned, that to the extent it is true, it doesn't correspond to the connotations of the word "privilege".

comment by pragmatist · 2014-05-09T11:21:17.010Z · LW(p) · GW(p)

The argument against symmetry is that the privileged perspective is massively over-represented in prominent cultural productions (movies, books, op-eds, etc.), so underprivileged people have many more resources available that allow them some access to the experiences of the privileged. See this, for instance.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-05-10T00:00:43.529Z · LW(p) · GW(p)

privileged perspective is massively over-represented in prominent cultural productions (movies, books, op-eds, etc.)

Really? What definition of "privilege" are you using here? I agree that certain perspectives are over-represented in cultural products, but those are not the same ones that the SJ-types call "privileged".

comment by NancyLebovitz · 2014-05-04T12:59:57.585Z · LW(p) · GW(p)

If the argument is about how the world of people (as distinct from scientific conclusions) works, then life experiences are important information. What sort of argument about the world (say, an argument about why people are poor) should ignore life experience? Admittedly, the experiences of two people aren't enough, but at least that's a start. It's also worth checking on whether one of the people is arguing from no experience.

comment by David_Gerard · 2014-05-04T10:31:06.488Z · LW(p) · GW(p)

Indeed, "therefore you are wrong" does not follow logically. The usage I more often see is "please, you're being a dick, stop it."

Replies from: V_V
comment by V_V · 2014-05-04T12:46:47.351Z · LW(p) · GW(p)

Which is even worse because it accuses the other party of bad faith. Clearly, that's a conversation stopper.

comment by ChristianKl · 2014-04-30T19:25:45.111Z · LW(p) · GW(p)

Does the article say anything that shouldn't already be obvious to the average LW reader and is therefore worth reading?

Replies from: IlyaShpitser, NancyLebovitz
comment by IlyaShpitser · 2014-04-30T20:33:04.533Z · LW(p) · GW(p)

It says: "don't hate the player, hate the game."

comment by NancyLebovitz · 2014-04-30T19:59:58.422Z · LW(p) · GW(p)

I'm not sure what the average LW reader knows.

I consider it likely that there are LW readers (both left and right) who don't know there's opposition to the privilege model from the left.

Replies from: ChristianKl
comment by ChristianKl · 2014-05-01T00:06:20.393Z · LW(p) · GW(p)

I think the idea that shooting people down based on perceived privilege is an ad hominem is fairly straightforward and obvious.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-05-01T11:10:35.825Z · LW(p) · GW(p)

Well, it's still encouraging to get feedback that the public sanity waterline is higher than absolute zero.

Nothing is "straightforward and obvious" for everyone. Especially when it's somehow related to politics.

comment by NancyLebovitz · 2014-04-30T11:09:18.998Z · LW(p) · GW(p)

I don't think "malicious" quite does the delicacy of that sort of very abstract Marxist argument justice, though I'm not sure what word would be better.

"Unfair" doesn't quite do the job, either, though the author does point out that a privilege framework means that the same argument will be approved or ignored depending on who makes it.

"Consciousness itself is complicit." is kind of cool. It could almost be something from LW (or at least Peter Watts), but the author probably means something else by consciousness.

Replies from: Suryc11
comment by Suryc11 · 2014-04-30T16:49:41.945Z · LW(p) · GW(p)

I agree, though to be fair the author himself seems to use malicious and fallacious to describe a privilege framework.

First, I am arguing that no one’s participation in public discourse should be denigrated by appeal to essential features of their identity. If we, as leftists, want to be unashamedly critical of discourse—as we should be—we should do so with reference to structures of power, such as heterosexual hegemony, rather than with reference to essential identities, such as the ‘straightness’ of particular individuals.

...

Second, I am arguing that to situate ideology in identity can not only be malicious, but also fallacious. If a self-identified queer person were to have written “How Gay Pride Backfires”, the privilege framework would collapse as an explanans, as it would no longer be able to appeal to the heterosexual privilege of the author to explain the danger of the argument. Importantly, however, in this alternative scenario, the queerness of the author would not render the article any less ideological and detrimental to the interests of sexual minorities.

comment by Aussiekas · 2014-05-04T09:57:43.585Z · LW(p) · GW(p)

Ok, my utility is probably low considering this open thread closes in 3 days :(

Anyhow, I had a thought when reading Beautiful Probability in the Sequences. http://lesswrong.com/lw/mt/beautiful_probability/

It is a bit beyond my access and resources, but I'd love to see a graph/chart showing the percentage of scientific studies which become invalid, or the percentage which remain valid, as we reduce the significance threshold below p < 0.05.

So it would start with 100% of journal articles (take a sampling from the top 3 journals across various disciplines, then break them down between Social Science, STEM, etc.) with p < 0.05.

Then we reduce that to p < 0.04, down to 0.01, then go logarithmic to show 0.009 on downwards, or however it makes sense to represent the data.

I'd be very curious to see the total and the differences between fields as the acceptable value for p went down and down. At what point would we lose more than 50% of human knowledge if we had to be more certain about it? I think experimental design is allowed to be more lax than it could be because we aim for the minimum acceptable goal when taking so many competing priorities into consideration. Obviously this doesn't speak entirely to the validity of the knowledge, due to wide variance in methodology and review processes, but we could at least gain an idea of how much we think we are certain about at various tolerances. Perhaps this work has been done before and someone will enlighten me to some study of which I am not aware or to which I do not have access.
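
For what it's worth, the computation would be the easy part once the hard part (collecting reported p-values from sampled articles) is done. A minimal sketch, with invented field names and invented p-values standing in for scraped data:

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder data: in reality these lists would come from sampling
# reported p-values out of top journals in each discipline.
reported_p = {
    "social science": [0.049, 0.03, 0.012, 0.008, 0.0004],
    "STEM": [0.001, 3e-7, 0.02, 1e-9, 0.004],
}

# Sweep the significance threshold from 0.05 down to 0.0001 on a log scale.
thresholds = np.logspace(np.log10(0.05), np.log10(0.0001), 50)

for field, ps in reported_p.items():
    ps = np.asarray(ps)
    surviving = [(ps < t).mean() for t in thresholds]  # fraction still "significant"
    plt.plot(thresholds, surviving, label=field)

plt.xscale("log")
plt.gca().invert_xaxis()  # stricter thresholds to the right
plt.xlabel("significance threshold")
plt.ylabel("fraction of sampled results surviving")
plt.legend()
plt.show()
```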

Just a passing thought, I'm new to the forums, but I take it that the open thread is the place to post wild ideas like this which are not ready for prime time.

Cheers!

comment by fubarobfusco · 2014-04-28T16:10:15.924Z · LW(p) · GW(p)

Poll: Consequentialism and the motive for holding true beliefs


1. Is an action's moral status (rightness or wrongness) dictated solely by its consequences? [pollid:685]

(For calibration — I would expect people who identify strongly as consequentialists to answer "strong yes" on question 1, people who identify strongly as deontologists to answer "strong no", and people who are somewhere in between to choose one of the middle buttons based on how they lean.)


2. Is the truth value (truth or falsity) of a belief about the world dictated solely by its predictive value? [pollid:686]

(By "belief about the world" I explicitly mean to bracket beliefs about, for instance, mathematical formalisms.)


3. Is possessing the truth an end in itself; as opposed to being valuable for instrumental reasons, for instance that true beliefs equip us to choose our actions better? [pollid:687]


4. Do you expect that — all else being equal — a person equipped with more true beliefs and fewer false ones is more likely to accomplish that person's goals or intentions? [pollid:688]


5. Do you expect that — all else being equal — a person equipped with more true beliefs and fewer false ones is more likely to take actions that are more morally right and less morally wrong? [pollid:689]

Replies from: Alejandro1
comment by Alejandro1 · 2014-04-28T16:16:21.243Z · LW(p) · GW(p)

In question 1, is "consequences" supposed to mean "actual consequences", "expected consequences", "foreseeable consequences"…?

Replies from: fubarobfusco
comment by fubarobfusco · 2014-04-28T16:48:50.202Z · LW(p) · GW(p)

Any of the above.

comment by mare-of-night · 2014-05-01T14:24:08.439Z · LW(p) · GW(p)

As someone who uses more water than most people, would it be irresponsible for me to move to a dry climate?

I realized that I've been entirely leaving the southwest United States off of my list of options for where to live after I graduate college, because I'd decided when I was much younger that I shouldn't live in the desert. Now, I'm realizing that I have very little idea how important that is compared to other concerns. I'm not sure how to go about weighing the utility of an additional person using too much water in the desert.

I probably use 2-3 times more water than most people, if you don't include things like lawns and car washes. (It's mainly from showers and washing my hands, probably because it takes me longer than normal to feel clean. I also need an extra load of laundry once a week to keep the dust out of my sheets, because of an allergy.)

Replies from: lmm, None
comment by lmm · 2014-05-05T18:49:20.175Z · LW(p) · GW(p)

This is the kind of thing where economic measures are useful. For political reasons you may not get a true cost for residential water, but maybe see how much industries in that area are paying per litre? Then you can calculate the dollar value you'd be imposing by living there, and compare it with the value you put on other differences between the places you're choosing from.
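
(A back-of-the-envelope sketch of that calculation - every number below is a made-up placeholder to be replaced with your own usage and the local industrial rate:)

```python
# Hypothetical inputs: replace with real figures for the area in question.
extra_litres_per_day = 150     # assumed extra use vs. a typical resident
price_per_kilolitre = 2.00     # assumed industrial price, $ per 1000 L

annual_cost = extra_litres_per_day * 365 / 1000 * price_per_kilolitre
print(f"Implied yearly water externality: ~${annual_cost:.0f}")  # ~$110 here
```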

Replies from: mare-of-night
comment by mare-of-night · 2014-05-06T01:54:05.293Z · LW(p) · GW(p)

I see how that would work. Thanks.

comment by [deleted] · 2014-05-02T03:27:47.006Z · LW(p) · GW(p)

I doubt that using 2-3 times more water would end up being the dominant factor in whether you should move.

What other factors exist that you're considering? Is there a possibility of making more money in the dry climate?

Replies from: mare-of-night
comment by mare-of-night · 2014-05-02T07:16:59.551Z · LW(p) · GW(p)

That was pretty much my intuition too, once I actually thought about it.

For context, this is on my mind because I'm graduating from college in a year. I'm still figuring out my values and their relative importances. (I'm going to make a post about it later, if I'm still unsure after talking it over with myself and my friends.) I've heard that the Southwest has a lower cost of living than the rest of the country, and some areas have nice weather, but beyond that I don't know a whole lot about what it's like living there. (I'm on study abroad in Sydney now, and noticing that I go outside more when in an interesting neighborhood with nice weather.)

At the moment, all the cities near the top of my mental list are in the northeast US. I'm pretty sure that's at least partly because that's the only region I've stayed in for more than a couple weeks at a time, and I'm slightly homesick at the moment. Probably what I should do is do a bit of research now and write down what I'm thinking, and then come back to it once I've come back to my hometown and had some time to get bored of it, and maybe again during my last semester of college. It's probably not worth trying to evaluate places out of driving distance of Pennsylvania until I stop being homesick, now that I think of it that way.

Replies from: None
comment by [deleted] · 2014-05-02T07:56:38.616Z · LW(p) · GW(p)

I'm still figuring out my values and their relative importances.

Welcome to the club.

comment by [deleted] · 2014-04-30T16:21:13.774Z · LW(p) · GW(p)

this piece is about whether earning to give is the best way to be altruistic.

but I think a big issue is what altruism is. do most people mostly agree on what's altruistic or good? have effective altruists tried to determine what real people or organizations want?

you don't want to push "altruism given hidden assumptions X, Y and Z that most people don't agree with." for example, in Ben Kuhn's critique he talks about a principle of egalitarianism. But I don't think most people think of "altruism" as something that applies equally to the guy next door and to a person in Africa. Maybe smart idealistic Anglophone folks in the 2010s do. And some people think religion has equal or greater importance than physical human life does. So if you can convert a person to Christianity then you've done a huge good. And abortions and adultery are grave sins and so forth. Also, making political improvements is not a core part of EA.

maybe you should talk about apolitical egalitarian secular altruism.

but there is also another thing effective altruists favor that I think is clearly good: they use evidence. We do want evidence-based altruism. Kinda like evidence-based policy.

I think once you get beyond apolitical secular egalitarian altruism there are lots of different possibilities and it's as hard to figure out where you stand as it is to maximize impact. so maybe we should add something like reflection-based altruism.

I wonder if you can have more political impact through "earning to give" to political causes or through direct political involvement. the answer may vary with the type of cause. We might include the three types of economically left (e.g. socialism), economically neutral (e.g. abortion) and economically right (e.g. abolish estate taxes)

Replies from: David_Gerard, ChristianKl, Jayson_Virissimo
comment by David_Gerard · 2014-05-04T09:33:46.377Z · LW(p) · GW(p)

I do find it disconcerting just how little I see EA talk about changing society. The charity sector's budget in any given country is ridiculously smaller than the government budget; EA advocates talk about directed giving as the best way to change the world, but this appears to me to be deliberately ignoring systemic problems in favour of enshrining personal charity as a substitute for government.

"When I give food to the poor, they call me a saint. When I ask why they are poor, they call me a communist." (Hélder Câmara)

(I realise Singer's original ideas are all about systemic change.)

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-05-04T13:01:30.937Z · LW(p) · GW(p)

EA is about things that are relatively easy to measure, and causing political change is hard to measure.

Replies from: fubarobfusco, jpl68, David_Gerard
comment by fubarobfusco · 2014-05-05T05:47:47.008Z · LW(p) · GW(p)

Some policy changes are hard to measure. Some are controversial to measure — you can measure them, but people will call you nasty names for doing so.

I expect that anyone who measured and forecast the health effects of reduction in lead pollution, back in the days of lead paint and leaded gasoline, was probably called "anti-business" or worse. Fortunately, they won anyway, and the effects are indeed measurable — in reduced cases of lead poisoning, and apparently in increased IQs of city residents.

comment by jpl68 · 2014-05-18T16:59:39.848Z · LW(p) · GW(p)

I do not think EA is about things that are relatively easy to measure. It is about doing things with the highest expected value. It is just that, due partly to regression to the mean, things with measurably high values should have among the highest expected values. See Adam Casey's posts on 80,000 Hours.
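
A toy simulation of that regression-to-the-mean point (all distributions invented): interventions whose measured value is in the top 1% have true values that are shrunk toward the mean, yet still among the highest in expectation.

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = rng.normal(0, 1, 100_000)             # true impact per intervention
measured = true_value + rng.normal(0, 1, 100_000)  # noisy evaluation

top = measured > np.quantile(measured, 0.99)       # the measurably-best 1%
print(measured[top].mean())    # ~3.8: what the measurements claim
print(true_value[top].mean())  # ~1.9: shrunk, yet far above the ~0 average
```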

comment by David_Gerard · 2014-05-04T21:59:28.931Z · LW(p) · GW(p)

Goodhart, of course: after a short time, only the metric counts.

The solution is obvious: I create enough simulations that are good enough to constitute sentient beings, and make them all happy, that this adds up to MUCH more goodness than my present day job running a highly profitable baby mulching operation to fund it all. Like buying "asshole offsets".

comment by ChristianKl · 2014-04-30T19:39:49.622Z · LW(p) · GW(p)

Also, making political improvements is not a core part of EA.

The Swiss EA people did try to get a referendum passed. They engage with the political system. Getting university cafeterias to be vegan is a political agenda.

It's just not the classic political agenda that you find in the mainstream political debate. 21st century politics is strange. The story that TV news media tells is still so strong that young people seem to think that politics is about fighting the battles of their parents instead of fighting their own battles.

maybe you should talk about apolitical egalitarian secular altruism.

That's not an effective catchphrase. You know, EAs actually care about effectiveness ;)

but there is also another thing effective altruists favor that I think is clearly good: they use evidence. We do want evidence-based altruism. Kinda like evidence-based policy.

This is kind of funny. At the Community Weekend in Berlin Jonas spoke about EA movement building and how one should use the label that is most effective for a community. Calling it Effective Altruism is a PR move.

I wonder if you can have more political impact through "earning to give" to political causes or through direct political involvement.

I think that largely depends on your skill set. The core political goal should be to get decent people into positions of political power. Maybe some of the people who do EA movement building today are also building the kind of skills they will need to run political campaigns in 10 years. Of course, at that point they will need other EAs to fund their campaigns (at least in the US).

Replies from: None
comment by [deleted] · 2014-05-02T06:26:58.277Z · LW(p) · GW(p)

The Swiss folks may have done that. But I think the major organizations, like GiveWell, Giving What We Can, and 80,000 Hours, are focused on apolitical causes like global health, if you judge from their lists of recommended charities.

Also I don't think there's any getting around taking a position on mainstream political issues to optimally benefit society. Statistically, your income is more influenced by which society you happen to be born in than by anything you do. If you believe Acemoglu and Robinson, it's the institutions that matter for economic growth.

At the Community Weekend in Berlin Jonas spoke about EA movement building and how one is

Huh?

I think that largely depends on your skill set.

It might. (Thank you for giving a data point.) I find myself drawn toward the earning-to-give route, since then you can use your salary to kinda measure impact. You could measure it with seeking political office too, although that's not my cup of tea. But with political activism I don't really see how.

Replies from: ChristianKl, Lumifer, ChristianKl
comment by ChristianKl · 2014-05-02T12:02:44.697Z · LW(p) · GW(p)

The Swiss folks may have done that. But I think the major organizations, like GiveWell, Giving What We Can, and 80,000 Hours, are focused on apolitical causes like global health, if you judge from their lists of recommended charities.

I don't think 80,000 Hours advises people who seek its guidance against going into politics.

GiveWell states that they focus on global health issues because those issues provide a good evidence base.

I think Giving What We Can says that its members can make donations to any charity of their choosing.

Statistically your income is more influenced by which society you happen to be born in than anything you do. If you believe Acemoglu and Robinson, it's the institutions that matter for economic growth.

"Should we do liquid democracy?" is an import question when it comes to designing institutions. It's not a question that left or right in the traditional sense of those words.

In software design a lot of thought went into structuring information and valuing simplicity. Getting that kind of thinking into lawmaking would do a lot of good, but it's not a mainstream topic.

Opposing corn subsidies isn't a right or left issue, especially if you do it on the grounds that the subsidies make meat too cheap and you want people to eat less meat.

Fighting software patents and patents trolls isn't a right vs. left issue.

Whether or not you have legal responsibility when you route traffic of other people over your own computer isn't a right vs. left issue.

Pushing evidence-based policy making isn't a right vs. left issue.

Ben Goldacre's fight to get trial data out in the open is highly political in nature. You could label it "socialism" to force big pharma to release their knowledge into the commons, but I think that heavily screws with the nature of the conflict. I think that even people who see themselves politically on the right are likely to support Goldacre's agenda.

But with political activism I don't really see how.

What do you mean by "political activism"? The term is frequently used by people who want to signal that they care about an issue but who aren't willing to actually do something that has political effect.

Saul Alinsky would be someone who thought a lot about how to do political activism. It starts with doing community building. In the EA example that means at this point in time most of the activism resources should go towards internal affairs of the EA movement.

comment by Lumifer · 2014-05-02T14:26:04.139Z · LW(p) · GW(p)

Also I don't think there's any getting around taking a position on mainstream political issues to optimally benefit society.

Mainstream political issues are often about what does "optimally benefit society" mean.

comment by ChristianKl · 2014-05-02T11:12:40.337Z · LW(p) · GW(p)

Huh?

I finished that paragraph via editing.

comment by Jayson_Virissimo · 2014-04-30T19:26:00.488Z · LW(p) · GW(p)

We might include the three types of economically left (e.g. socialism), economically neutral (e.g. abortion) and economically right (e.g. abolish estate taxes).

That the effects of abortion are economically neutral seems like an extraordinary claim. What kind of evidence did you have in mind? If those anti-abortion people that hang out on campus are right, then roughly 50 million abortions have taken place in America since Roe v. Wade. How could an extra 50 million people have a neutral effect on the economy?

Replies from: shminux, None
comment by Shmi (shminux) · 2014-04-30T20:31:06.009Z · LW(p) · GW(p)

That the effects of abortion are economically nertral[sic] seems like an extrodinary[sic] claim.

Not really. If I recall, legalizing abortion has almost no effect on the birth rate; accessible contraceptives have a somewhat higher effect, but neither comes close to the effect of changing cultural norms.

roughly 50 million abortions have taken place in America since Roe v. Wade.

50 million mostly legal abortions, even if the figure is correct, does not translate to 50 million more adults, of course. It is not even clear whether the overall effect is an increase or a decrease in population.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2014-04-30T22:46:48.457Z · LW(p) · GW(p)

Not really. If I recall, legalizing abortion has almost no effect on the birth rate; accessible contraceptives have a somewhat higher effect, but neither comes close to the effect of changing cultural norms.

Well, this is what I found in < 5 minutes of searching:

...Klerman finds that legalization of abortion, particularly the broad access afforded by Roe, had some effect in reducing fertility.

-- Klerman, Jacob Alex. "US abortion policy and fertility." (2000).

I find fairly strong evidence that young women’s birthrates dropped as a result of abortion access as well as evidence that birth control pill access led to a drop in birthrates among whites.

-- Guldi, Melanie. "Fertility effects of abortion and birth control pill access for minors." Demography 45, no. 4 (2008): 817-827.

Our model estimates women ages 14 to 19 will see an 8.7% decline in birth rates, a 4.1% decline for women ages 20 to 24 and a 3% decline for women ages 25 to 29 due to abortion legalization. We predict that abortion legalization is correlated with a 10.2% decrease in the birth rates of black mothers and a 4.5% decrease in the birth rates of white mothers.

-- Coates, Brandi, Alejandro Companioni, and Zachary A. Bethune. "The Impact of Abortion Legalization on Birth Rates."

Okay, so a quick search for studies on the effects of abortion legalization on birth rates seems to confirm my priors, so...it still looks like an extraordinary claim.

50 million mostly legal abortions, even if the figure is correct, does not translate to 50 million more adults, of course.

Agreed, I shouldn't have used that number, but according to the first couple of studies I came across it would definitely be positive, and over 40 years' time it seems plausible that even some of those people that would have been born would have had kids by this point.

Replies from: shminux
comment by Shmi (shminux) · 2014-05-01T00:16:23.806Z · LW(p) · GW(p)

Interesting, thanks. Incidentally, the CDC data show that the abortions/live births ratio is pretty significant, though it's declined from 36% in 1979 to 22% in 2010. This is surprisingly high. I don't know what to make of it. My prior expectation was maybe a percent or two. Every 5th fetus is aborted? Or am I reading the data wrong? Canadian rates seem to be similar, with every 4th fetus being aborted.

comment by [deleted] · 2014-05-02T04:07:33.974Z · LW(p) · GW(p)

I suggested this division of causes because, first, people who earn to give may join the upper or at least upper middle class. It seems harder to advocate for things like socialism when your peer group is rich. Your opinions aren't going to earn you praise or friends, and friends and connections are really important for making money. It's also hard to devote time and energy to maintaining odd views when you're focused on a career that isn't directly involved with acting on those opinions. You're losing some potential synergy. It is also possible that, second, the usefulness of cash donations varies with whether the cause has support among the rich or poor, although this might work the other way in that I would expect causes that favor the poor to need money more.

But with a topic like abortion this all seems unclear--although opinions on abortion do correlate some with income, I don't think that correlation is as strong as with outright economic redistribution. What do you think?

If you want to suggest a more clearly neutral topic than abortion I would be interested to hear it.

Replies from: Eugine_Nier, ChristianKl
comment by Eugine_Nier · 2014-05-02T04:24:19.847Z · LW(p) · GW(p)

It seems harder to advocate for things like socialism when your peer group is rich.

Um, there are a lot of rich people who at least profess socialist views; the common, somewhat dismissive term for them is "champagne socialist".

comment by ChristianKl · 2014-05-02T20:15:41.950Z · LW(p) · GW(p)

What do you mean exactly when you say socialism?

As far as the numbers on abortion go, for 75k

comment by hamnox · 2014-04-28T16:09:17.614Z · LW(p) · GW(p)

Hi, CFAR alum here. Reposting, I guess; the OTs are getting confusing.

Is there something like a prediction market running somewhere in discussion?

Going mostly off of Gwern's recommendation, it seems like PredictionBook is the go-to place to make and calibrate predictions, but it lacks the "flavour" that the one at CFAR had. CFAR (in 2012, at least) had a market where your scoring was based on how much you updated the previous bet towards the truth. I really enjoyed the interactional nature of it.

What would it take to get such a thread going online? I believe one of the reasons it worked so well at minicamp was that we were all in the same area for the same period of time, so it was simple to restrict bets to relevant things we could all verify. Even if most of the posts wind up being relevant only to the local meetups, it would be nice to have them up in the same place for unofficial competition. Is that something you would use?

Replies from: Viliam_Bur, witzvo
comment by Viliam_Bur · 2014-04-29T08:58:40.925Z · LW(p) · GW(p)

CFAR (in 2012, at least) had a market where your scoring was based on how much you updated the previous bet towards the truth. I really enjoyed the interactional nature of it.

Unfortunately, this would be easy to abuse online. Create a sockpuppet account, make a stupid prediction, and then quickly fix the prediction using your real account. This is equivalent to moving bits from one account to the other.

At CFAR workshop all participants were real people. But they still missed an existing opportunity to abuse the system: there were rewards for winning, but no punishment for losing. So two people could agree to transfer a lot of bits from one to another, and split the prize afterwards.

Maybe a system more difficult to abuse can be designed, but a direct copy of the algorithm used at CFAR isn't it.

Replies from: palladias, hamnox, philh
comment by palladias · 2014-04-29T14:19:06.081Z · LW(p) · GW(p)

At CFAR workshop all participants were real people. But they still missed an existing opportunity to abuse the system: there were rewards for winning, but no punishment for losing. So two people could agree to transfer a lot of bits from one to another, and split the prize afterwards.

Not quite true. I ran the markets, and I did threaten to fearsomely glare at people who were abusing the system. (And my glare is very fearsome).

comment by hamnox · 2014-04-29T15:41:58.414Z · LW(p) · GW(p)

You're right. Gaming the system is feasible, though I believe it is very low-value.

What exactly would the point of gaming a prediction thread be? The only point-keeping would be informal, so if you're making a bunch of points off of idiotic sockpuppet bets, it's still visible that it was because you were up against an idiotic bet. It'd be like lying on the group diary, almost.

Do note, there was actually a HUGE punishment for losing. You could get into the negative pretty easily by being stupidly overconfident. The scoring was 100 × log2(your probability of the outcome / the previous bet's probability of the outcome). For example: if you updated a 50% house bet to 99%, being correct would give you 98.55 "bits", while being wrong would give you -564.39.
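
(A minimal transcription of that scoring rule into code - just the formula as stated above, not CFAR's actual implementation:)

```python
from math import log2

def market_score(your_p, previous_p, outcome_happened):
    """100 * log2(P_yours(outcome) / P_previous(outcome))."""
    if not outcome_happened:
        # Score against the probabilities assigned to what actually occurred.
        your_p, previous_p = 1 - your_p, 1 - previous_p
    return 100 * log2(your_p / previous_p)

print(market_score(0.99, 0.50, True))   # ->  98.55... (the example above)
print(market_score(0.99, 0.50, False))  # -> -564.38...
```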

Replies from: philh
comment by philh · 2014-04-30T13:15:03.231Z · LW(p) · GW(p)

Do note, there was actually a HUGE punishment for losing.

Only to the extent that you care about points, whereas the winner was given a tangible prize (in my case, a book).

Actually, I'm now remembering that that isn't entirely true: there was a prize for the person with the most points, but also a prize that was assigned randomly, weighted according to ((player points) - (least number of points of any player) + 1), or something. So the more you lose by, the less chance you have of winning that prize. But if you're near the back anyway, your chance of winning is so small that this is a very small punishment.

(I think we might have had someone who was convinced to get many negative points, to reduce the effective spread among everyone else. Or I might be making that up.)
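
(For concreteness, a sketch of that weighted draw - the scores are hypothetical, and the "+ 1" floor is only as remembered above, not a confirmed rule:)

```python
import random

points = {"alice": 120, "bob": -40, "carol": 15}  # hypothetical final scores

least = min(points.values())
tickets = {name: score - least + 1 for name, score in points.items()}  # all >= 1

names = list(tickets)
winner = random.choices(names, weights=[tickets[n] for n in names])[0]
print(winner)  # alice wins with probability 161/218 under these scores
```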

Replies from: hamnox
comment by hamnox · 2014-04-30T21:01:48.656Z · LW(p) · GW(p)

Ah, I do not believe there was such a prize system at my minicamp.

comment by philh · 2014-04-29T13:47:16.982Z · LW(p) · GW(p)

But they still missed an existing opportunity to abuse the system: there were rewards for winning, but no punishment for losing. So two people could agree to transfer a lot of bits from one to another, and split the prize afterwards.

I raised this possibility, but an instructor said they'd use human judgment to stop us from doing that.

(My actual idea was along the lines of "if two of us decide that we aren't going to come to agreement on a market, we can just repeatedly alternate our bets, and each expect that we're getting arbitrarily many points from this". The instructor said something like, they'd just ignore all but the final bets if they thought we were doing that.)

comment by witzvo · 2014-04-29T03:20:49.621Z · LW(p) · GW(p)

a market where your scoring was based on how much you updated the previous bet towards the truth.

This is interesting. Can someone point me to documentation of the scoring? Thanks. (unless it's a CFAR secret or something)

Replies from: hamnox
comment by hamnox · 2014-04-30T12:55:10.407Z · LW(p) · GW(p)

100 × log2(your probability of the outcome / the previous bet's probability of the outcome). For example: if you updated a 50% house bet to 99%, being correct would give you 98.55 "bits", while being wrong would give you -564.39.

It's posted a couple of posts up. I had given no credence to the idea that it could be a CFAR secret.

comment by sebmathguy · 2014-05-02T01:19:44.711Z · LW(p) · GW(p)

I've just made an enrollment deposit at the University of Illinois at Urbana-Champaign, and I'm wondering if any other rationalists are going, and if so, would they be interested in sharing a dorm?

comment by Oscar_Cunningham · 2014-04-29T10:42:28.724Z · LW(p) · GW(p)

LINK: Someone on math.stackexchange asked if politically incorrect conclusions are more likely to be true by Bayesian logic. The answer given is pretty solid (and says no).

Replies from: philh
comment by philh · 2014-04-29T13:40:32.632Z · LW(p) · GW(p)

It assumes that the ratio of true-to-false statements repeated is the same regardless of political correctness. (If a true PC statement is four times more likely to be repeated than a false PC statement, then a true PI statement is four times more likely to be repeated than a false PI statement.) I'm not sure that's true.

But this does give us the conditions required to assume a PI statement is more likely to be true than a PC statement, which is valuable.
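
To make that assumption concrete, here's a toy odds calculation in Python (a sketch; the numbers are invented for illustration):

```python
def odds_true_given_repeated(prior_odds, repeat_ratio):
    """Posterior odds that a statement is true given that it gets repeated,
    where repeat_ratio = P(repeated | true) / P(repeated | false)."""
    return prior_odds * repeat_ratio

# The linked answer's assumption: the same repeat_ratio for PC and PI
# statements, so repetition raises both by the same factor.
print(odds_true_given_repeated(1.0, 4.0))  # PC statement: 4:1 odds of truth
print(odds_true_given_repeated(1.0, 4.0))  # PI statement: also 4:1

# If false PI statements spread relatively better (say the ratio is
# halved by shock value), a repeated PI statement is less likely true:
print(odds_true_given_repeated(1.0, 2.0))  # PI statement: only 2:1
```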

comment by Risto_Saarelma · 2014-05-03T19:44:38.435Z · LW(p) · GW(p)

Charles Murray has an entertainingly cranky review of Nicholas Wade's upcoming book A Troublesome Inheritance: Genes, Race and Human History up at The Wall Street Journal.

comment by palladias · 2014-05-02T00:16:45.188Z · LW(p) · GW(p)

Career advice? I've been offered a fellowship with the Education Pioneers.

For ten months (starting in Sept), I'd be embedded with a school district, charter, or gov't agency to do statistical analysis and related planning. I need to reply to them by next Friday, and I'd appreciate people pointing out questions they think I should ask or factors I should weigh in my own decisionmaking. Please take a second to think unprimed, before I share some of my own thoughts below.

I'm currently working for less than minimum wage in a journalism internship that ends June 1. I strongly prefer to stay in Washington D.C. (as this would allow me to do) since most of the people I care most about live here. I like using statistics to help people who are frightened of them, which it sounds like this would allow me to do. I do really like writing, so I want to end up with enough leisure time to still do some freelancing and continue to write every day for my blog (hence not being interested in jobs that take over your life).

Replies from: shminux, None
comment by Shmi (shminux) · 2014-05-04T21:19:46.179Z · LW(p) · GW(p)

I strongly prefer to stay in Washington D.C.

What's so good about DC unless you are a politics junkie?

Replies from: palladias
comment by palladias · 2014-05-05T03:39:35.891Z · LW(p) · GW(p)
  • It's a reasonably sized city (giving me theatre, foreign films, lectures, etc.)
  • Plus public transportation (I don't know how to drive)
  • Short trip on Amtrak to see my family in NY
  • I am a politics junkie
  • So are a lot of my friends, so the plurality of people I care about most all live here
  • As a result, I get to do group movie nights, DnD, parliamentary debates, babysitting with them

Also, I'd bet the kind of person I ultimately want to marry is most likely to want to live in DC, too.

Replies from: shminux, NancyLebovitz
comment by Shmi (shminux) · 2014-05-05T20:50:56.207Z · LW(p) · GW(p)

Interesting. I'd imagine that, except for Federal politics, most of these needs would be better served by some place like NY, but I see what you mean.

comment by NancyLebovitz · 2014-05-06T12:47:41.388Z · LW(p) · GW(p)

A lot of free museums, too.

comment by [deleted] · 2014-05-02T07:54:34.933Z · LW(p) · GW(p)

What are your other options? Do you see any negatives to this fellowship?

Replies from: palladias
comment by palladias · 2014-05-02T15:34:37.458Z · LW(p) · GW(p)

Well, I've sent out a lot of (mostly writing) applications, and not gotten bites on anything good. I've been interviewing here and tipped them off I had an offer, and am waiting to hear back. I've done public policy analysis before and could do it again.

Definitely have written too much Snowden/Manning coverage to get a security-cleared job in DC, so places like Booz Allen are right out. ;)

comment by raisin · 2014-05-01T18:41:05.530Z · LW(p) · GW(p)

How is the picture of the Sirens and Odysseus tied to a mast in the header of Overcoming Bias related to the concepts discussed on the site?

Replies from: arundelo
comment by arundelo · 2014-05-01T20:24:00.470Z · LW(p) · GW(p)

Odysseus realized that he couldn't trust his own mind (or those of his sailors) but found a workaround.

To "overcome bias" is to find workarounds for the mind's failure modes.

Replies from: satt
comment by satt · 2014-05-02T01:06:37.008Z · LW(p) · GW(p)

Along similar lines, Jon Elster was so taken by that literary motif that he used it in his Ulysses and the Sirens: Studies in Rationality and Irrationality, as well as his later Ulysses Unbound: Studies in Rationality, Precommitment, and Constraints.

comment by RolfAndreassen · 2014-04-28T01:24:54.644Z · LW(p) · GW(p)

I suggest that siren worlds should be relabeled "Devil's Courtships", after the creepy song of the same name:

"I'll buy you a pennyworth o' priens If that be the way true love begins If ye'll gang alang wi' me m'dear, if ye'll gang alang wi' me?"

"Ye can hae your pennyworth of priens Though that be the way true love begins For I'll never gang wi' you m'dear, I'll never gang wi' you."

"I'll buy you a braw snuff box Nine times opened, nine times locked If ye'll gang alang wi' me m'dear, if ye'll gang alang wi' me?"

"You can hae your braw snuff box Nine times opened, nine times locked For I'll never gang wi' you m'dear, I'll never gang wi' you."

"I'll buy you a silken goon Wi' nine stripes up and nine stripes doon If ye'll gang alang wi' me m'dear, if ye'll gang alang wi' me?"

"You can hae your silken goon Wi' nine stripes up and nine stripes doon For I'll never gang wi' you m'dear, I'll never gang wi' you."

"I'll buy you a nine stringed bell Tae call yer maid when'er you will If ye'll gang alang wi' me m'dear, if ye'll gang alang wi' me?"

"You can keep your nine stringed bell Tae call my maid when'er I will For I'll never gang wi' you m'dear, I'll never gang wi' you."

"I'll gie you a kist o' gold Tae comfort you when you are old If ye'll gang alang wi' me m'dear, if ye'll gang alang wi' me?"

"These are fine words you say So mount up lad you've won the day I'll gang alang wi' you m'dear, I'll gang alang wi' you."

They'd scarcely gone a mile Before she spied his cloven heel "I rue I come wi' you" she says, "I rue I come wi' you."

"I'll grip ye hard and fast, Gold won your virgin heart at last And I'll no part wi' you m'dear, I'll never part wi' you."

And as they were galloping along The cold wind carried her mournful song "I rue I come wi' you" she says, "I rue I come wi' you." "I rue I come wi' you" she says, "I rue I come wi' you."

It expresses the problem of allowing an AI to learn by trial and error what its controllers will agree to. If the woman had had the wit to stick to her first "never gang wi' you" and go away after rejecting the pennyworth of flowers, she wouldn't be having this problem.

comment by Metus · 2014-04-27T23:34:59.618Z · LW(p) · GW(p)

After a contribution to a previous thread I thought some more about what I actually wanted to say, so here is a much more succinct version:

The average of any distribution, or even worse of a dataset, is not a sufficient description without a statement about the distribution.

So often research results are reported as a simple average with a standard deviation. The educated statistician will recognise these two numbers as the first two modes of a distribution. But these two modes completely describe a distribution if it is a normal distribution. Though the central limit theorem gives us justification to use it in quite a number of cases, in general we need to make sure that the dataset has no higher modes. The most obvious case is that of a dataset dominated by a single binary random variable.

This statement then, that not all datasets are normally distributed, holds for any field, be it solid state physics, astrophysics, biochemistry, evolutionary biology, population ecology, welfare economics or psychology. To assume that any average together with a standard deviation derives from a normal distribution, or, even worse, that there is no more information in the dataset or the underlying phenomenon, is a grave scientific mistake.
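
A quick numerical illustration of the binary-variable case (a sketch; the sample size and parameters are arbitrary):

```python
import random
import statistics

random.seed(0)

# Two datasets with (nearly) the same mean and standard deviation:
normal_data = [random.gauss(0.5, 0.5) for _ in range(10000)]
binary_data = [1 if random.random() < 0.5 else 0 for _ in range(10000)]

for name, data in (("normal", normal_data), ("binary", binary_data)):
    print(name, round(statistics.mean(data), 3), round(statistics.stdev(data), 3))
# Both report mean ~0.5 and stdev ~0.5, yet one is a bell curve and the
# other never takes any value strictly between 0 and 1 -- the first two
# moments alone cannot tell them apart.
```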

Replies from: gjm, Vladimir_Nesov
comment by gjm · 2014-04-28T00:31:09.755Z · LW(p) · GW(p)

first two modes

I think you mean moments, not modes (here and twice more in the same paragraph). I mention this for the benefit of anyone reading this and googling for more information.

has no higher [moments]

I'm guessing you mean "has higher moments matching those of the normal distribution" or something, but I don't see any advantage of this formulation over the simpler "is normally distributed" (or, since you're talking about a dataset rather than the random process that generated it, something like "is drawn from a normal distribution"). Usually, saying something like "such-and-such a distribution has no fourth moment" means something very different (and incompatible with being normal): that its tails are fat enough that the fourth moment is undefined on account of the relevant integral being divergent.

There's a deeper connection between means and normality. One of the reasons why you might summarize a random variable by its mean is that the mean minimizes the expected squared error: that is, if you've got a random variable X and you want to choose x so that E[(X-x)^2] is as small as possible, the correct choice for x is E[X], the mean of X. Or, if you have a dataset (x1,...,xn) and you want to choose x so that the mean of (xi-x)^2 is as small as possible, then the correct choice is the mean of the xi. OK, so why would you want to do that particular thing? Well, if your data are independent samples from a normal distribution, then minimizing the mean of (xi-x)^2 is the same thing as maximizing the likelihood (i.e., roughly, the probability of getting those samples rather than some other set of data). (Which is the same thing as maximizing the posterior probability, if you start out with no information about the mean of the distribution.) So for normally distributed data, choosing the mean of your sample gives you the same result as max likelihood. -- But if what you know, e.g., is that your data are drawn from a Cauchy distribution with unknown parameters, then taking the mean of the samples will not help you at all.
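
The Cauchy point is easy to see empirically; here's a sketch (the sampler uses the standard inverse-CDF construction, and the sample sizes are arbitrary):

```python
import math
import random
import statistics

random.seed(1)

def cauchy_sample():
    """Standard Cauchy draw: the tangent of a uniform angle in (-pi/2, pi/2)."""
    return math.tan(math.pi * (random.random() - 0.5))

for n in (100, 10000, 1000000):
    data = [cauchy_sample() for _ in range(n)]
    # The sample mean never settles down as n grows (the Cauchy has no
    # mean), while the sample median homes in on the location parameter, 0:
    print(n, statistics.mean(data), statistics.median(data))
```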

comment by Vladimir_Nesov · 2014-04-28T00:23:57.121Z · LW(p) · GW(p)

The educated statistician will recognise these two numbers as the first two modes of a distribution. But these two modes completely describe a distribution if and only if it is a normal distribution.

(The "only if" is incorrect. For many other families of distributions, knowing mean and variance is also sufficient to pinpoint a unique distribution.)

Replies from: Metus
comment by Metus · 2014-04-28T00:31:37.964Z · LW(p) · GW(p)

I must have mixed it up with some other statement.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2014-04-28T13:09:32.724Z · LW(p) · GW(p)

"Yeah, sorry I said something that was incorrect. I meant to say something that wasn't incorrect."

I've seen more ballsy responses than this, but not many.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2014-04-28T15:48:16.004Z · LW(p) · GW(p)

I don't understand. Metus flatly admitted error, end of story.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2014-04-28T16:37:17.724Z · LW(p) · GW(p)

For clarity, I found what Metus said to be very funny. I commented because I wanted to underscore the humour, not because I wanted to be critical.

Replies from: fezziwig
comment by fezziwig · 2014-04-28T19:27:45.891Z · LW(p) · GW(p)

FWIW, I also read it as an insult. And though I do believe you that that wasn't your intent, I don't see how else to read it even now.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2014-04-28T20:30:54.346Z · LW(p) · GW(p)

Well, it wasn't intended as a kind comment either, but it clearly fell a lot flatter than I thought it would.

comment by NancyLebovitz · 2014-04-27T21:35:49.721Z · LW(p) · GW(p)

It looks like I made a mistake-- I checked, but somehow failed to see the thread ending on April 27.

I could kill this one, but there's already one legitimate open thread post. I'll go with the consensus on whether to delete it.

Replies from: philh, Oscar_Cunningham, Gunnar_Zarncke
comment by philh · 2014-04-27T22:23:52.626Z · LW(p) · GW(p)

It might be worth editing more instructions to submitters into this post. Along the lines of 'If you notice that the previous thread has expired, feel free to post the next one. It should run Monday-Sunday, and it should include the open_thread tag so that it gets picked up on the sidebar'.

comment by Oscar_Cunningham · 2014-04-27T21:46:47.760Z · LW(p) · GW(p)

Let it live. Just add the "open_thread" tag and maybe change the title so that this one runs to May 4.

comment by Gunnar_Zarncke · 2014-04-27T21:49:14.134Z · LW(p) · GW(p)

It's too late to kill. See it as a reminder (for the time being) to be more careful.

comment by [deleted] · 2014-04-30T12:38:05.405Z · LW(p) · GW(p)

There's been debate on whether earning to give is the best way to be altruistic.

But it seems to me that the real issue is not what is most altruistic but what altruism is. It's not clear to me that most people mostly agree on what's altruistic or good--or even if one person is self-consistent in different contexts. Is there some case for this besides just saying "I have this intuition that most people agree on what's good"? Has there been much attempt by effective altruists to investigate what real people or organizations want?

comment by Metus · 2014-04-27T23:24:04.686Z · LW(p) · GW(p)

"The burden of proof is on you."

No, most of the time the burden of proof is on both parties. In the complete absence of any evidence, both a statement and its logical negation have equal weight. So if one party states "you can't predict the shape of the bottle the liquid was poured out of from the glass it is in" and the other party states the opposite, the burden of proof lies on both parties to state their respective evidence. Of course, in the special case above, the disagreement was about the exact meaning of "can" or "can't", but the general principle still holds. For any given closed system the number of molecules will be either even or odd, so any arbitrary choice of statement will have to be justified. The burden of proof lies on either party claiming the truth of either position.

Replies from: ChristianKl, Vladimir_Nesov, Stabilizer, brazil84
comment by ChristianKl · 2014-04-28T14:40:20.560Z · LW(p) · GW(p)

"The burden of proof is on you."

A burden of proof depends on the context. If you want to convince me to adopt a dog, then you have to fulfill a burden of proof and convince me that it's a good decision for me to make. If you simply want to talk about your experience that your new dog is awesome, you don't have to fulfill any burden of proof to me.

If a company wants to bring a new drug to market, they have to establish its clinical benefits in two statistically significant clinical trials. On the other hand, adverse effects have a lower burden of proof: far less evidence is needed for the FDA to require a company to list a certain effect as an adverse effect of the drug.

Is truth different for benefits than for adverse effects? No. But the burden of proof is. The burden of proof always depends on the purpose for which you want to use the information.

If you ask me what I believe about topic X, I don't have any burden of proof to show that my beliefs about X are true. I only have a burden once I want you to change your beliefs.

comment by Vladimir_Nesov · 2014-04-28T00:30:53.079Z · LW(p) · GW(p)

What is this "burden of proof" and for what purposes is it a useful concept? There are factual questions and people with some capacity and motivation for pursuing them. When social norms dictate how this pursuit should proceed, it's no longer about the questions.

Replies from: fubarobfusco, Kaj_Sotala
comment by fubarobfusco · 2014-04-28T07:13:50.859Z · LW(p) · GW(p)

One sense of "burden of proof" seems to be a game-rule for a (non-Bayesian) adversarial debate game. It is intended to exclude arguments from ignorance, which if permitted would stall the game. The players are adversaries, not co-investigators. The player making a novel claim bears the burden of proof — rather than a person criticizing that claim — so that the players actually have to bring points to bear. Consider:

A: God loves frogs. They are, above all other animals, sacred to him.
B: I don't believe it.
A: But you can't prove that frogs aren't sacred!
B: Well of course not, it never occurred to me to consider as a possibility.

At this point the game would be stalled at zero points.

The burden-of-proof rule forbids A's last move. Since A started the game by making a positive claim — the special status of frogs — A has to provide some evidence for this claim. B can then rebut this evidence, and A can present new evidence, and then we have a game going:

A: God loves frogs. They are, above all other animals, sacred to him.
B: I don't believe it.
A: Well, the God Book says that God loves frogs.
B: But the God Book also says that chickens are a kind of flea, and modern taxonomy shows that's wrong. So the God Book isn't good evidence.
A: I found a frog once that had the word "God" encoded in the spots on its back in Morse code.
B: But the spots on frogs' backs are probably pretty random. How many frogs did you have to check?
A: Umm ... a few thousand. It was a sacred duty!
B: But it would be a lot more convincing if all frogs had that pattern, wouldn't it?
A: Well ... Frogs are sacred in Homestuck, which is the most financially successful webcomic of all time. Surely that's a sign of God's favor.
B: They're sacred to Prospitians, yes, but Dersites think they're blasphemous. Besides, if financial success was a sign of God's favor, we should all be worshiping Berkshire Hathaway, not frogs.

According to the rules of the game, B doesn't have to establish that God hates frogs. B just has to knock down each one of A's arguments. Then, since A has failed to establish any evidence that holds up, B is (so far) winning the game.

Replies from: Alejandro1, sixes_and_sevens
comment by Alejandro1 · 2014-04-28T15:38:25.617Z · LW(p) · GW(p)

One sense of "burden of proof" seems to be a game-rule for a (non-Bayesian) adversarial debate game. It is intended to exclude arguments from ignorance, which if permitted would stall the game.

I like this framing, but "burden of proof" is also used in other contexts than arguments from ignorance. For example, two philosophers with opposing views on consciousness might plausibly get stuck in the following dialog:

A: If consciousness is reducible, then the Chinese room thinks, Mary can know red, zombies are impossible, etc.; all these things are so wildly counterintuitive that the burden of proof falls on those who claim that consciousness is reducible.

B: Consciousness being irreducible would go so completely against all the scientific knowledge we have gained about the universe that the burden of proof falls on those who assert that.

Here "who has the burden of proof?" seems to be functioning as a non-Bayesian approximation for "whose position has the lowest prior probability?" The one with the lowest prior probability is the one that should give more evidence (have a higher P(E|H)/P(E)) if they want their hypothesis to prevail; in absence of new evidence, the one with the highest prior wins by default. The problem is that if the arguers have genuinely different priors this leads to stalemate, as in the example.

ETA: tl;dr, what Stabilizer said.

Replies from: Transfuturist
comment by Transfuturist · 2014-04-29T00:33:12.160Z · LW(p) · GW(p)

I'm not sure how Mary knowing red follows from reducible consciousness. Knowing everything (except the experience) of red does not the experience of red make.

Replies from: Alejandro1
comment by Alejandro1 · 2014-04-29T04:25:15.631Z · LW(p) · GW(p)

It is certainly debatable, but there are philosophers who make this argument, and I only used it as an example.

comment by sixes_and_sevens · 2014-04-28T10:16:06.209Z · LW(p) · GW(p)

"Burden of proof" is also formally assigned under judicial frameworks. "Presumed innocent until proven guilty" and "beyond reasonable doubt" are examples of such assignations.

Outside of a legal context, I tend to assume that if someone in a discussion has made an appeal to "burden of proof", that discussion is probably not a fruitful one.

comment by Kaj_Sotala · 2014-04-28T14:39:47.207Z · LW(p) · GW(p)

If someone is (or seems like they might be) privileging the hypothesis, it seems reasonable to say that the burden of proof is on them, not just as a social norm but also as a question of epistemology.

In other words, if there are a hundred boxes where the diamond could be and I claim that it's in box number 27, then it's reasonable that I ought to provide some evidence for this claim, rather than requiring the other person to come up with a hypothesis for why my claim would be false. There are an infinite number of false hypotheses, and if we try to test them all rather than focusing on the most promising ones, we'll never get anywhere.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2014-04-28T18:20:36.996Z · LW(p) · GW(p)

This is covered by the motivation clause in grandparent. If you give me a bad question, I won't be motivated to work on it. I may even be uninterested in your meticulously researched answer.

comment by Stabilizer · 2014-04-28T05:31:12.035Z · LW(p) · GW(p)

If one party is espousing a hypothesis which has a very low prior probability, then they suffer the burden of providing evidence to support this hypothesis. Finding evidence takes time and resources; if you want to support the low probability hypothesis, then you spend the resources.

Replies from: Vladimir_Nesov, Vladimir_Nesov
comment by Vladimir_Nesov · 2014-04-28T10:02:57.703Z · LW(p) · GW(p)

a hypothesis which has a very low prior probability

Its probability is different in the estimates of the people who disagree, and its best alternative will have the status of "low probability" in the estimates of the other side. Just "low probability" doesn't make the situation asymmetric.

if you want to support the low probability hypothesis, then you spend the resources.

You should spend the resources when there is high value of information, otherwise do something else. Improving someone else's beliefs may have high value for them.

comment by Vladimir_Nesov · 2014-04-28T09:56:52.430Z · LW(p) · GW(p)

if you want to support the low probability hypothesis, then you spend the resources

If you want to [test] a low [value of information] hypothesis, you change your mind and stop wanting that. What happens is that people disagree on the probabilities.

comment by brazil84 · 2014-04-28T07:58:56.842Z · LW(p) · GW(p)

In complete absence of any evidence both the statement and its logical negation have equal weight

But there is never "complete absence of any evidence." For example, if I claim to you that I have an invisible flying pig in my backyard, we both have a lifetime of experiences to draw on which are inconsistent with such a claim. e.g. witnessing pigs and similar animals running around but not flying; feeling solid objects which have always been visible in normal light; and so on. So I would bear the burden of proving my claim.

comment by Oscar_Cunningham · 2014-04-27T20:46:58.541Z · LW(p) · GW(p)

So much for starting open threads on a Monday.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-04-27T21:15:50.595Z · LW(p) · GW(p)

I'd forgotten that-- I'd just noticed that we were running past the end date on the previous thread.

It won't be a big deal to let this one stretch till a week from tomorrow.

Replies from: Tenoke
comment by Tenoke · 2014-04-27T21:21:59.966Z · LW(p) · GW(p)

I'd forgotten that-- I'd just noticed that we were running past the end date on the previous thread.

Uhm, we weren't.