Open thread, Dec. 21 - Dec. 27, 2015

post by MrMind · 2015-12-21T07:56:58.570Z · LW · GW · Legacy · 233 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

233 comments

Comments sorted by top scores.

comment by gwern · 2015-12-22T20:50:04.120Z · LW(p) · GW(p)

Correlation!=causation: returning to my old theme (latest example: is exercise/mortality entirely confounded by genetics?), what is the right way to model various comparisons?

By which I mean, consider a paper like "Evaluating non-randomised intervention studies", Deeks et al 2003 which does this:

In the systematic reviews, 8 studies compared results of randomised and non-randomised studies across multiple interventions using metaepidemiological techniques. A total of 194 tools were identified that could be or had been used to assess non-randomised studies. 60 tools covered at least 5 of 6 pre-specified internal validity domains. 14 tools covered 3 of 4 core items of particular importance for non-randomised studies. 6 tools were thought suitable for use in systematic reviews. Of 511 systematic reviews that included nonrandomised studies, only 169 (33%) assessed study quality. 69 reviews investigated the impact of quality on study results in a quantitative manner. The new empirical studies estimated the bias associated with non-random allocation and found that the bias could lead to consistent over- or underestimations of treatment effects, also the bias increased variation in results for both historical and concurrent controls, owing to haphazard differences in case-mix between groups. The biases were large enough to lead studies falsely to conclude significant findings of benefit or harm. ...Conclusions: Results of non-randomised studies sometimes, but not always, differ from results of randomised studies of the same intervention. Nonrandomised studies may still give seriously misleading results when treated and control groups appear similar in key prognostic factors. Standard methods of case-mix adjustment do not guarantee removal of bias. Residual confounding may be high even when good prognostic data are available, and in some situations adjusted results may appear more biased than unadjusted results.

So we get pairs of studies, more or less testing the same thing except one is randomized and the other is correlational. Presumably this sort of study-pair dataset is exactly the kind of dataset we would like to have if we wanted to learn how much we can infer causality from correlational data.

But how, exactly, do we interpret these pairs? If one study finds a CI of 0 to 0.5 and the counterpart finds 0.45 to 1.0, is that confirmation or rejection? If one study finds -0.5 to 0.1 and the other 0 to 0.5, is that confirmation or rejection? What if they are very well powered and the pair looks like 0.2 to 0.3 and 0.4 to 0.5? A criterion of overlapping confidence intervals is not what we want.

We could try to get around it by making a very strict criterion: 'what fraction of pairs have confidence intervals excluding zero for both studies, with the studies opposite-signed?' This seems good: if one study 'proves' that X is helpful and the other study 'proves' that X is harmful, then that's as clearcut a case of correlation!=causation as one could hope for. With a pair of studies like -0.5 to -0.1 and +0.1 to +0.5, that is certainly a big problem.

The problem with that is that it is so strict that we would hardly ever conclude a particular case was correlation!=causation (few of the known examples are so well-powered and clearcut), leading to systematic overoptimism, and it inherits the typical problems of NHST like generally ignoring costs (if exercise reduces mortality by 50% in correlational studies and 5% in randomized studies, then to some extent correlation=causation, but the massive overestimate could easily tip exercise from being worthwhile to not being worthwhile).

We also can't simply do a two-group comparison and derive a rule like 'correlational studies double the effect on average, so to correct, just halve the effect and then see if that is still statistically-significant' - something you can do with, say, blinding or publication bias - because it turns out not to be that conveniently simple: it's not an issue of researchers predictably biasing ratings toward the desired higher outcome or publishing only the results/studies which show the desired results. The randomized experiments seem to turn in larger, smaller, or opposite-signed results at, well, random.

This is a similar problem to the one with the Reproducibility Project: we would like the replications of the original psychology studies to tell us, in some sense, how 'trustworthy' we can consider psychology studies in general. But most of the methods seem to diagnose lack of power as much as anything (the replications were generally powered 80%+, IIRC, which still means that a lot will not be statistically-significant even if the effect is real). Using Bayes factors is helpful in getting us away from p-values but is still not the answer.

It might help to think about what is going on in a generative sense. What do I think creates these results? I would have to say that the results are generally being driven by a complex causal network of genes, biochemistry, ethnicity, SES, varying treatment methods etc., which throws up an even more complex & enormous set of multivariate correlations (which can be either positive or negative), while effective interventions are few & rare (and likewise either positive or negative) but drive the occasional correlation as well. When a correlation is presented by a researcher as an effective intervention, it might be drawn from the large set of pure correlations or it might have come from the set of causals. It is unlabeled and we are ignorant of which group it came from. There is no oracle which will tell us that a particular correlation is or is not causal (that would make life too easy), but then (in this case) we can test it, and get a (usually small) amount of data about what it does in a randomized setting. How do we analyze this?

I would say that what we have here is something quite specific: a mixture model. Each intervention has been drawn from a mixture of two distributions, all-correlation (with a wide distribution allowing for many large negative & positive values) and causal effects (narrow distribution around zero with a few large values), but it's unknown which of the two it was drawn from and we are also unsure what the probability of drawing from one or the other is. (The problem is similar to my earlier noisy polls: modeling potentially falsified poll data.)
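
A minimal sketch of what such a mixture might look like in JAGS (via rjags; every name, prior, and component width here is an illustrative assumption, not a fitted choice):

    library(rjags)
    # Hypothetical spec: each study-pair i has a randomized estimate d.rct[i]
    # and a correlational estimate d.obs[i], with known standard errors.
    spec <- "model {
      p.causal ~ dbeta(1, 1)             # P(a tested correlation is causal)
      for (i in 1:N) {
        causal[i] ~ dbern(p.causal)      # latent: which component pair i is from
        theta[i]  ~ dnorm(0, 1 / 0.2^2)  # causal effects: narrow around zero
        corr[i]   ~ dnorm(0, 1 / 1.0^2)  # pure correlations: wide
        d.rct[i]  ~ dnorm(theta[i], 1 / se.rct[i]^2)
        # the correlational estimate tracks theta[i] only for causal draws:
        mu[i]    <- causal[i] * theta[i] + (1 - causal[i]) * corr[i]
        d.obs[i]  ~ dnorm(mu[i], 1 / se.obs[i]^2)
      }
    }"
    # jags.model(textConnection(spec), data = list(N = ..., d.rct = ..., ...))

Monitoring p.causal and the causal[i] indicators would yield the two outputs described below: a per-pair posterior probability of being causal, and an overall switching probability.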

So when we run a study-pair through this, then if they are not very discrepant, the posterior estimate shifts towards having drawn from the causal group in that case - and also slightly increases the overall estimate of the probability of drawing from the causal group; and vice-versa if they are heavily discrepant, in which case it becomes much more probable that there was a draw from the correlational group, and slightly more probable that draws from the correlational group are common. At the end of doing this for all the study-pairs, we get estimates of causal/correlational posterior probability for each particular study-pair (which automatically adjusts for power etc. and can be further used for decision theory, like 'does this reduce the expected value of the specific treatment of exercise to <=$0?'), but we also get an overall estimate of the switching probability - which tells us in general how often we can expect tested correlations like these to be causal.

I think this gives us everything we want. Working with distributions avoids the power issues, for any specific treatment we can give estimates of being causal, we get an overall estimate as a clear unambiguous probability, etc.

Replies from: IlyaShpitser, None
comment by IlyaShpitser · 2015-12-22T22:21:04.978Z · LW(p) · GW(p)

Hi.

I am not sure I understand your question.

So we get pairs of studies, more or less testing the same thing except one is randomized and the other is correlational.

If I got such data I would (a) be very happy, (b) use the RCT to inform policy, and (c) use the pair to point out how correct causal inference methods can recover the RCT result if assumptions hold (hopefully they hold in the observational study). We can try to combine the strengths of the two studies, but then the results live or die by assumptions on how treatments were assigned in the observational study.

I am also not a fan of classifying biases like they do (it's a common silly practice). For example, it's really not informative to say "confounding bias," in reality you can have a lot of types of confounding, with different solutions necessary depending on the type (I like to draw pictures to understand this).

I think Robins et al (?Hernan?) at some point recovered the result of an RCT via his g methods from observational data.

Replies from: Anders_H, gwern
comment by Anders_H · 2015-12-23T05:33:44.478Z · LW(p) · GW(p)

I think Robins et al (?Hernan?) at some point recovered the result of an RCT via his g methods from observational data.

The paper you are referring to is "Observational Studies Analyzed Like Randomized Experiments: An application to Postmenopausal Hormone Therapy and Coronary Heart Disease" by Hernan et al. It is available at https://cdn1.sph.harvard.edu/wp-content/uploads/sites/343/2013/03/observational-studies.pdf

The controversy about hormone replacement therapy is fascinating as a case study. Until 2002, essentially all women who reached menopause got medical advice to start taking pills containing horse estrogen. It was very widely believed that this would reduce their risk of having a heart attack. This belief was primarily based on biological plausibility: estrogen is known to reduce cholesterol, and cholesterol is believed to increase the risk of heart disease. Also, there were many observational studies that seemingly suggested that women who took hormone replacement therapy (HRT) had less risk of heart disease. (In my view, this was not surprising: observational studies always show what the investigators expect to find.)

In 2002, the Women's Health Initiative randomized trial was stopped early because it showed that estrogen replacement therapy actually substantially increased the risk of having a heart attack. Overnight, the medical establishment stopped recommending estrogen for menopausal women. But a perhaps more important consequence was that many clinicians stopped trusting observational studies altogether. In my opinion, this was mostly a good thing.

The largest observational study to show a protective effect of estrogen was the Nurses Health Study. In 2008, my thesis advisor Miguel Hernan re-analyzed this dataset using Jamie Robins' g-methods (which are equivalent to Pearl's), and was essentially able to reproduce the results of the WHI trial. Miguel's paper uses valid methods and gets the correct results. In my view, this shows that the new methods might work, but the paper would have meant much more if it had been published prior to the randomized trials.

Miguel and Jamie's paper sparked off a very interesting methodological debate with the original investigators at the Nurses Health Study, who are still clinging to their original analysis. See http://www.ncbi.nlm.nih.gov/pubmed/18813017 .

Many people still believe that Estrogen/HRT is beneficial. The most popular theory is that WHI recruited too many old women (sometimes in their 90s!) and that estrogen is harmful if given that long after menopause. A new randomized trial which is limited to women at menopause is currently being conducted. A second theory is that the results in the trial were due to differences in statin usage. I analyzed the second theory for my doctoral thesis, but found that this had negligible impact on the results.

It is also interesting to note that while it is true that the trial found that estrogen increased the risk of heart disease, it also showed a (non-significant) reduction in all-cause mortality. So the increased risk of cardiovascular disease didn't even result in more deaths. Presumably, people care more about all-cause mortality than heart attacks. However, since it was "non-significant", not even the most dedicated proponents of estrogen treatment ever point out this fact.

Replies from: Lumifer, IlyaShpitser
comment by Lumifer · 2015-12-23T18:41:18.662Z · LW(p) · GW(p)

A side question, prompted by an amusing factoid in the Hernan paper: "...we restricted the population to women who had reported plausible energy intakes (2510 –14,640 kJ/d)".

In the statistical analysis in this paper, and also as a general practice in medical publications based on questionnaire data, are there adjustments for uncertainty in the questionnaire responses?

When you have a data point that says, for example, that person #12345 reports her caloric intake as 4,000 calories/day, do you take it as a hard precise number, or do you take it as an imprecise estimate with its own error which propagates into the model uncertainty, etc.?

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-12-23T20:02:45.436Z · LW(p) · GW(p)

Keyword is "measurement error." People think hard about this. Anders_H knows this paper in a lot more detail than I do, but I expect these particular authors to be careful.

This issue is also related to "missing data." What you see might be different from the underlying truth in systematic ways, e.g. you get systematic bias in your data, and you need to deal with that. This is also related to that causal inference stuff I keep going on about.

Replies from: Lumifer
comment by Lumifer · 2015-12-23T20:19:12.493Z · LW(p) · GW(p)

Keyword is "measurement error." People think hard about this.

People like engineers and physicists think a lot about this. I am not sure that medical researchers think a lot about this. The usual (easy) way is to throw out unreasonable-looking responses during the data cleaning and then take what remains as rock-solid. Accepting that your independent variables are uncertain leads to a lot of inconvenient problems (starting with the OLS regression not being a theoretically-correct form any more).
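
A toy illustration (invented numbers) of why this matters - classical measurement error in a regressor attenuates the OLS slope:

    # true slope is 0.5; the observed regressor is the true one plus noise
    set.seed(1)
    n <- 1e5
    x     <- rnorm(n)              # true intake (standardized)
    y     <- 0.5 * x + rnorm(n)
    x.obs <- x + rnorm(n)          # self-reported intake, with sd=1 error
    coef(lm(y ~ x))["x"]           # ~0.50
    coef(lm(y ~ x.obs))["x.obs"]   # ~0.25: biased toward zero by half

Throwing out the craziest-looking responses does nothing about this bias; it only trims the tails.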

What you see might be different from the underlying truth in systematic ways, e.g. you get systematic bias in your data, and you need to deal with that.

Yes, that's another can of worms. In some areas (e.g. self-reported food intake) the problem is so blatant and overwhelming that you have to deal with it, but if it looks minor not many people want to bother.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-12-23T20:24:40.445Z · LW(p) · GW(p)

Clinicians do not; "methodology people" (who often partner up with "domain experts" to do data analysis) absolutely do.

comment by IlyaShpitser · 2015-12-23T17:16:22.564Z · LW(p) · GW(p)

Miguel and Jamie's paper sparked off a very interesting methodological debate with the original investigators at the Nurses Health Study

Yes, I was told the full gory details of this story (not going to repeat it here). Thanks for sharing this!

By the way, are you at Stanford now? I should find a way to drop by, Jacob's there too.

comment by gwern · 2015-12-22T23:08:04.239Z · LW(p) · GW(p)

I am not sure I understand your question.

Just putting the idea out for comment in case there's some way this fails to deliver what I want it to deliver. Excerpting out all the comparisons and writing up the mixture model in JAGS would be a lot of work; just reading the papers takes long enough as it is.

If I got such data I would (a) be very happy, (b) use the RCT to inform policy, and (c) use the pair to point out how correct causal inference methods can recover the RCT result if assumptions hold (hopefully they hold in the observational study)

Indeed. You can imagine that when I stumbled across Deeks and the rest of them in Google Scholar (my notes), I was overjoyed by their obvious utility (and because it meant I didn't have to do it myself, which I had been musing about doing using FDA trials) but also completely baffled: why had I never heard of these papers before?

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-12-22T23:27:37.972Z · LW(p) · GW(p)

I am not following your mixture model idea. For every data point you know if it comes from the RCT or observational study. You don't need uncertainty about treatment assignment. What you need is figuring out how to massage observational data to get causal conclusions (e.g. what I think about all day long).

If you have specific observational data you want to look at, email me if you want to chat more.

Replies from: gwern, None
comment by gwern · 2015-12-23T02:43:01.334Z · LW(p) · GW(p)

For every data point you know if it comes from the RCT or observational study. You don't need uncertainty about treatment assignment.

No, the uncertainty here isn't about which of the two studies a datapoint came from, but about whether (for a specific treatment/intervention) the correlational study datapoint was drawn from the same distribution as the randomized study datapoint or a different one, and (over all treatments/interventions) what the probability of being drawn from the same distribution is. Maybe it'll be a little clearer if I narrate how the model might go.

So say you start off with a prior probability of 50-50 about which group a result is drawn from, a switching probability that will be tweaked as you look at data. (If you are studying turtles which could be from a large or a small species, then if you find 2 larger turtles and 8 smaller, you're probably going to update from P=0.5 to a mixture probability more like P≈0.2, since it's most likely - but not certain - that 1 or 2 of the larger turtles came from the large species and the 8 smaller ones came from the small species.)

For your first datapoint, you have a pair of results: xyzcillin reduces all-cause mortality to RR=0.5 from a correlational study (cohort, cross-sectional, case-control, whatever), and the randomized study of xyzcillin has RR=1.1. What does this mean? Now, of course you know that 0.5 is the correlational result and 1.1 is the randomized result, but we can imagine two relatively distinct scenarios here: 'xyzcillin actually works but the causal effect is really more like RR=0.7 and the randomized trial was underpowered', or, 'xyzcillin has no causal effect whatsoever on mortality and it's just a bunch of powerful confounds producing results like RR=0.6-0.8'. We observe that 1.1 supports the latter more, and we update towards 'xyzcillin has 0 effect' and now give 'non-causal scenarios are 55% likely', but not too much because the xyzcillin studies were small and underpowered and so they don't support the latter scenario that much.

Then for the next datapoint, 'abcmycin reduces lung cancer', we get a pair looking like 0.9 and 0.7, and we observe these large trials are very consistent with each other and so they highly support the former theory instead and we update towards 'abcmycin causally reduces lung cancer' and 'noncausal scenarios are 39% likely'.

Then for the third datapoint, about defracic surgery for back pain, we again get consistency, like d=0.7 and d=0.5, and we update the probability that 'defracic surgery reduces back pain' and also push even further, 'noncausal scenarios are 36% likely', because their sample sizes were decent.

And we do this update for each pair we finish, and after bouncing back and forth with each pair, we wind up with an estimate that Nature draws from the non-causal scenario 37% of the time (i.e. the switching probability of the mixture is p=0.37). And now we can use that as a prior in evaluating any new medicine or surgery.
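
A back-of-envelope version of a single such update in R (treating the effects as if on a common additive scale, with assumed component widths - a sketch, not the full model):

    # P(pair is causal | the two estimates), comparing two stories:
    # causal: both estimates measure the same effect, differing only by noise;
    # correlational: the observational estimate comes from a wide distribution.
    post.causal <- function(d.obs, se.obs, d.rct, se.rct,
                            prior = 0.5, sd.corr = 1) {
      like.causal <- dnorm(d.obs, mean = d.rct, sd = sqrt(se.obs^2 + se.rct^2))
      like.corr   <- dnorm(d.obs, mean = 0,     sd = sqrt(se.obs^2 + sd.corr^2))
      prior * like.causal / (prior * like.causal + (1 - prior) * like.corr)
    }
    post.causal(0.5, 0.15, 1.1, 0.2)   # discrepant xyzcillin-style pair: low
    post.causal(0.9, 0.10, 0.7, 0.1)   # consistent abcmycin-style pair: high

The full model differs in that the prior itself is updated across pairs rather than held at 0.5.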

If you have specific observational data you want to look at, email me if you want to chat more.

If you want to look at specific study-pairs, they're all listed & properly cited in the papers I've collated & provided fulltext links for. I suspect that the more advanced methods will require individual-level patient data, which sadly only a very few studies will release, but perhaps you can still find enough of those to make it worth your while to analyze - if Robins et al can get a publishable paper out of just 1 RCT.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-12-23T19:33:35.306Z · LW(p) · GW(p)

If I understood you correctly, there are two separate issues here.

The first is what people call "transportability" (how to sensibly combine results of multiple studies if units in those studies aren't the same). People try all sorts of things (Gelman does random effects models, I think?). Pearl's student Elias Bareinboim (now at Purdue) thinks about that stuff using graphs.

I wish I could help, but I don't know as much about this subject as I want. Maybe I should think about it more.

The second issue is that in addition to units in two studies "not being the same" one study is observational (has weird treatment assignment) and one is randomized properly. That part I know a lot about, that's classical causal inference -- how to massage observational data to make it look like an RCT.


I would advise thinking about these problems separately; that is, start by trying to solve combining two RCTs.

edit: I know you are trying to describe things to me on the level of individual points to help me understand. But I think a more helpful way to go is to ignore sampling variability entirely, and just start with two joint distributions P1 and P2 that represent variables in your two studies (in other words you assume infinite sample size, so you get the distributions exactly). How do we combine them into a single conclusion (let's say the "average causal effect": difference in outcome means under treatment vs placebo)? Even this is not so easy to work out.

Replies from: gwern
comment by gwern · 2015-12-24T22:51:17.043Z · LW(p) · GW(p)

I would advise thinking about these problems separately, that is start trying to solve combining two RCTs.

I think when you break it into two separate problems like that, you miss the point. Combining two RCTs is reasonably well-solved by multilevel random effects models. I'm also not trying to solve the problem of inferring from a correlational dataset to specific causal models, which seems well in hand by Pearlean approaches. I'm trying to bridge between the two: assume a specific generative model for correlation vs causation and then infer the distribution.

How do we combine them into a single conclusion (let's say the "average causal effect": difference in outcome means under treatment vs placebo)?

But this is exactly the problem! Apparently, there is no meaningful 'average causal effect' between correlational and causational studies. In one study, it was much larger; in the next, it was a little smaller; in the next, it was much smaller; in the one after that, the sign reversed... If you create a regular multilevel meta-analysis of a bunch of randomized and correlational studies, say, and you toss in a fixed-effect covariate and regress 'Y ~ Randomized', you get an estimate of ~0. The actual effect in each case may be quite large, but the average over all the studies is a wash.
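
To see this concretely, a meta-regression along those lines with the metafor package (data invented for illustration):

    library(metafor)
    # five imaginary study-pairs, stacked, with a randomization indicator
    dat <- data.frame(
      yi         = c(0.5, -0.22, -0.8, 0.3, 0.5,    # correlational estimates
                     0.1, -0.22,  0.2, 0.3, -0.1),  # randomized estimates
      vi         = 0.01,
      randomized = rep(0:1, each = 5))
    rma(yi, vi, mods = ~ randomized, data = dat)
    # the 'randomized' coefficient comes out ~0 even though 3 of 5 pairs disagree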

This is different from other methodological problems. With placebos, there is a predictable systematic bias which gives you a large positive bias. Likewise, publication bias skews effects up. Likewise, non-blinding of raters. And so on and so forth. You can easily estimate with an additive fixed-effect / linear model and correct for particular biases. But with random vs correlation, it seems that there's no particular direction the effects head in, you just know that whatever they are, they'll be different from your correlational results. So you need to do something more imaginative in modeling.

But I think a more helpful way to go is to ignore sampling variability entirely, and just start with two joint distributions P1 and P2 that represent variables in your two studies (in other words you assume infinite sample size, so you get the distributions exactly).

OK, let's imagine all our studies are infinitely sized. I collect 5 study-pairs, correlational vs randomized, d effect size:

  1. 0.5 vs 0.1 (difference: 0.4)
  2. -0.22 vs -0.22 (difference: 0)
  3. -0.8 vs 0.2 (difference: -1.0)
  4. 0.3 vs 0.3 (difference: 0)
  5. 0.5 vs -0.1 (difference: 0.6)

I apply my mixture model strategy.

We see that in studies #2 and #4, the correlational and causal effects are identical, with 100% confidence, and thus both correlational results were drawn from the causal distribution. With two datapoints, -0.22 and 0.3, we begin to infer that the distribution of causal effects is probably fairly narrow around 0, and we update our normal distribution appropriately to be skeptical about any claims of large causal effects.

We see in studies #1, #3, and #5 that the correlational and causal effects differ, with 100% confidence, and thus we know that the correlational effect for that particular treatment was drawn from the general correlational distribution. The correlational effects are 0.5, -0.8, and 0.5 - all quite large - and so we infer that correlational effects tend to be quite large and their distribution has a large standard deviation (or whatever).

We then note that in 2/5 of the pairs, the correlational effect was the causal effect, and so we estimate that the probability of a correlational effect having been drawn from the causal distribution rather than the correlation distribution is P=2/5. Or in other words, correlation=causality 40% of the time. However, if we had tried to calculate an additive variable like in a meta-regression, we would find that the Randomized covariate was estimated at exactly 0 (mean(c(0.4, 0, -1.0, 0, 0.6)) ~> [1] 0) and certainly is not statistically-significant.
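
In R, with the five pairs above (the infinite-power case, where equality can be read off directly):

    d.corr <- c(0.5, -0.22, -0.8, 0.3, 0.5)
    d.rct  <- c(0.1, -0.22,  0.2, 0.3, -0.1)
    mean(d.corr == d.rct)   # 0.4: estimated P(correlation = causation)
    mean(d.corr - d.rct)    # ~0: no additive 'Randomized' correction exists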

Now when someone comes to us with an infinite-sized correlational trial that purified Egyptian mummy reduces allergy symptoms by d=0.5, we feed it into our mixture model and we get a useful posterior distribution which exhibits a bimodal pattern where it is heavily peaked at 0 (reflecting the more-likely-than-not scenario that mummy is mummery) but also peaked at d=0.4 or so, reflecting shrinkage of the scenario that mummy is munificent, which will predict better than if we naively tried to just shift the d=0.5 posterior distribution up or down some units.


The problem with real studies is that they are not infinitely sized, so when the point-estimates disagree and we get 0.45 vs 0.5, obviously we cannot strongly conclude which distribution in the mixture it was drawn from, and so we need to propagate that uncertainty through the whole model and all its parameters.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-12-29T20:00:22.950Z · LW(p) · GW(p)

I think when you break it into two separate problems like that, you miss the point.

I am pretty sure I am not, but let's see. What you are basically saying is "analysis => synthesis doesn't work."

Combining two RCTs is reasonably well-solved by multilevel random effects models.

Hierarchical models are a particular parametric modeling approach for data drawn from multiple sources. People use this type of stuff to good effect, but saying it "solves the problem" here is sort of like saying linear regression "solves" RCTs. What if the modeling assumptions are wrong? What if you are not sure what the model should be?

I'm also not trying to solve the problem of inferring from a correlational dataset to specific causal models, which seems well in hand by Pearlean approaches.

Let's call them "interventionist approaches." Pearl is just the guy people here read. People have been doing causal analysis from observational data since at least the 70s, probably earlier in certain special cases.

I'm trying to bridge between the two: assume a specific generative model for correlation vs causation and then infer the distribution.

Ok.

But this is exactly the problem! Apparently, there is no meaningful 'average causal effect' between correlational and causational studies.

This is what we should talk about.

If there is one RCT, we have a treatment A (with two levels a and a') and outcome Y. Of interest is the outcome under hypothetical treatment assignment to a value, which we write Y(a) or Y(a'). The "average causal effect" is E[Y(a)] - E[Y(a')]. So far so good.

If there is one observational study, say A is assigned based on C, and C affects Y, what is of interest is still Y(a) or Y(a'). Interventionist methods would give you a formula for E[Y(a)] - E[Y(a')] in terms of p(A,C,Y). You can then construct an estimator for that formula, and life is good. So far so good.
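
As a toy numerical version of that (model and numbers invented for illustration): C confounds A and Y, the naive contrast is biased, and the standard adjustment formula E[Y(a)] = sum_c E[Y | A=a, C=c] p(c) recovers the effect an RCT would find:

    set.seed(1)
    n <- 1e6
    C <- rbinom(n, 1, 0.5)
    A <- rbinom(n, 1, ifelse(C == 1, 0.8, 0.2))  # treatment assigned based on C
    Y <- rnorm(n, mean = A + 2 * C)              # true causal effect of A is 1
    mean(Y[A == 1]) - mean(Y[A == 0])            # naive contrast: ~2.2, confounded
    EYa <- function(a) sum(sapply(0:1, function(c)
      mean(Y[A == a & C == c]) * mean(C == c)))
    EYa(1) - EYa(0)                              # adjusted: ~1, the RCT answer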

Note that so far I made no modeling assumptions on the relationship of A and Y at all. It's all completely unrestricted by choice of statistical model. I can do crazy non-parametric random forests to model the relationship of A and Y if I wanted. I can do linear regression. I can do whatever. This is important -- people often smuggle in modeling assumptions "too soon." When we are talking about prediction problems like in machine learning, that's OK: we don't care about modeling too much, we just want good predictive performance. When we care about effects, the model is important. This is because if the effect is not strong and your model is garbage, it can mislead you.


If there are two RCTs, we have two sets of outcomes: Y1(a), Y1(a') and Y2(a), Y2(a'). Even here, there is no one causal effect so far. We need to make some sort of assumption on how to combine these. For example, we may try to generalize regression models, and say that a lot of the way A affects Y is the same regression across the two studies, but some of the regression terms are allowed to differ to model population heterogeneity. This is what hierarchical models do.

In general we have E[f(Y1(a), Y2(a))] - E[f(Y1(a'),Y2(a'))], for some f(.,.) that we should justify. At this level, things are completely non-parametric. We can model the relationship of A and Y1,Y2 however we want. We can model f however we want.


If we have one RCT and one observational study, we still have Y1(a), Y1(a') for the RCT, and Y2(a), Y2(a') for the observational study. To determine the latter we use "interventionist approaches" to express them in terms of observational data. We then combine things using f(.,.) as before. As before we should justify all the modeling we are doing.


I am pretty sure Bareinboim thought about this stuff (but he doesn't do statistical inference, just the general setup).

Replies from: gwern, Richard_Kennaway
comment by gwern · 2015-12-30T15:51:48.973Z · LW(p) · GW(p)

What you are basically saying is "analysis => synthesis doesn't work."

I am pretty sure it is not going to let you take an effect size and a standard error from a correlational study and get out an accurate posterior distribution of the causal effect without doing something similar to what I'm proposing.

If there are two RCTs, we have two sets of outcomes: Y1(a), Y1(a') and Y2(a), Y2(a'). Even here, there is no one causal effect so far. We need to make some sort of assumption on how to combine these. For example, we may try to generalize regression models, and say that a lot of the way A affects Y is the same regression across the two studies, but some of the regression terms are allowed to differ to model population heterogeneity. This is what hierarchical models do. In general we have E[f(Y1(a), Y2(a))] - E[f(Y1(a'),Y2(a'))], for some f(.,.) that we should justify. At this level, things are completely non-parametric. We can model the relationship of A and Y1,Y2 however we want. We can model f however we want.

Ok, and how do we model them? I am proposing a multilevel mixture model to compare them.

If we have one RCT and one observational study, we still have Y1(a), Y1(a') for the RCT, and Y2(a), Y2(a') for the observational study. To determine the latter we use "interventionist approaches" to express them in terms of observational data. We then combine things using f(.,.) as before. As before we should justify all the modeling we are doing.

Which is not going to work, since in most, if not all, of these studies the original patient-level data is not going to be available - you're not even going to get a correlation matrix out of the published paper - and I haven't seen any intervention-style algorithms which work with just the effect sizes, which is what is on offer.

To work with the sparse data that is available, you are going to have to do something in between a meta-analysis and an interventionist analysis.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-12-30T19:38:32.338Z · LW(p) · GW(p)

I am proposing a multilevel mixture model to compare them.

Ok. You can use whatever statistical model you want, as long as we are clear what the underlying object is you are dealing with. The difficulty here isn't the statistical modeling, but being clear about what it is that is being estimated (in other words the interpretation of the parameters of the model). This is why I don't talk about statistical modeling at first.

haven't seen any intervention-style algorithms which work with just the effect sizes which is what is on offer.

If all you have is reported effect sizes you won't get anything good out. You need the data they used.

comment by Richard_Kennaway · 2015-12-30T09:28:53.650Z · LW(p) · GW(p)

Pearl is just the guy people here read.

Is there anyone you would recommend studying in addition?

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-12-31T20:47:52.200Z · LW(p) · GW(p)

Depends on what you want. It doesn't matter "who has priority" when it comes to learning the subject. Pearl's book is good, but one big disadvantage of reading just Pearl is Pearl does not deal with the statistical inference end of causal inference very much (by choice). Actually, I heard Pearl has a new book in the works, more suitable for teaching.

But ultimately we must draw causal conclusions from actual data, so statistical inference is important. Some big names that combine causal and statistical inference: Jamie Robins, Miguel Hernan, Eric Tchetgen Tchetgen, Tyler VanderWeele (Harvard causal group), Mark van der Laan (Berkeley), Donald Rubin et al (Harvard), Frangakis, Rosenblum, Scharfstein, etc. (Johns Hopkins causal group), Andrea Rotnitzky (Harvard), Susan Murphy (Michigan), Thomas Richardson (UW), Philip Dawid (Cambridge, but retired; incidentally the inventor of conditional independence notation). Lots of others.

I believe Stephen Cole posts here, and he does this stuff also (http://sph.unc.edu/adv_profile/stephen-r-cole-phd/).


Miguel Hernan and Jamie Robins are working on a new causal inference book that is more statistical, might be worth a look. Drafts available online:

http://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/

comment by [deleted] · 2015-12-24T12:37:22.731Z · LW(p) · GW(p)

what I think about all day long

You specialise in identifying the determinants of biases in causal inference? Just curious :) Interesting

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-12-24T17:27:45.523Z · LW(p) · GW(p)

And how to make those biases go away, yes.

comment by [deleted] · 2015-12-22T23:34:09.284Z · LW(p) · GW(p)

You're using correlation in what I would consider a weird way. Randomization is intended to control for selection effects to reduce confounds, but when somebody says correlational study I get in my head that they mean an observational study in which no attempt was made to determine predictive causation. When an effect shows up in a nonrandomized study, it's not that you can't determine whether the effect was causative; it's that it's more difficult to determine whether the causation was due to the independent variable or an extraneous variable unrelated to the independent variable. It's not a question of whether the effect is due to correlation or causation, but whether the relationship between the independent and dependent variable even exists at all.

Replies from: Anders_H, None
comment by Anders_H · 2015-12-23T00:41:05.887Z · LW(p) · GW(p)

(1) Observational studies are almost always attempts to determine causation. Sometimes the investigators try to pretend that they aren't, but they aren't fooling anyone, least of all the general public. I know they are attempting to determine causation because nobody would be interested in the results of the study unless they were interested in causation. Moreover, I know they are attempting to determine causation because they do things like "control for confounding". This procedure is undefined unless the goal is to estimate a causal effect.

(2) What do you mean by the sentence "the study was causative"? Of course nobody is suggesting that the study itself had an effect on the dependent variable?

(3) Assuming that the statistics were done correctly and that the investigators have accounted for sampling variability, the relationship between the independent and dependent variable definitely exists. The correlation is real, even if it is due to confounding. It just doesn't represent a causal effect.

Replies from: Lumifer, None
comment by Lumifer · 2015-12-23T16:40:18.920Z · LW(p) · GW(p)

You are assuming a couple of things which are almost always true in your (medical) field, but are not necessarily true in general. For example,

Observational studies are almost always attempts to determine causation

Nope. Another very common reason is to create a predictive model without caring about actual causation. If you can't do interventions but would like to forecast the future, that's all you need.

Assuming that the statistics were done correctly and that the investigators have accounted for sampling variability, the relationship between the independent and dependent variable definitely exists.

That further assumes your underlying process is stable and is not subject to drift, regime changes, etc. Sometimes you can make that assumption, sometimes you cannot.

Replies from: Vaniver
comment by Vaniver · 2015-12-23T20:45:34.508Z · LW(p) · GW(p)

Another very common reason is to create a predictive model without caring about actual causation. If you can't do interventions but would like to forecast the future, that's all you need.

You'd also like a guarantee that others can't do interventions, or else your measure could be gamed. (But if there's an actual causal relationship, then 'gaming' isn't really possible.)

comment by [deleted] · 2015-12-23T01:03:11.142Z · LW(p) · GW(p)

(1) I just think calling a nonrandomized study a correlational study is weird.

(2) I meant to say effect; not study; fixed

(3) If something is caused by a confounding variable, then the independent variable may have no relationship with the dependent variable. You seem to be using correlation to mean the result of an analysis, but I'm thinking of it as the actual real relationship which is distinct from causation. So y=x does not mean y causes x or that x causes y.

Replies from: Anders_H
comment by Anders_H · 2015-12-23T01:18:54.256Z · LW(p) · GW(p)

I don't understand what you mean by "real relationship". I suggest tabooing the terms "real relationship" and "no relationship".

I am using the word "correlation" to discuss whether the observed variable X predicts the observed variable Y in the (hypothetical?) superpopulation from which the sample was drawn. Such a correlation can exist even if neither variable causes the other.

If X predicts Y in the superpopulation (regardless of causality), the correlation will indeed be real. The only possible definition I can think of for a "false" correlation is one that does not exist in the superpopulation, but which appears in your sample due to sampling variability. Statistical methodology is in general more than adequate to discuss whether the appearance of correlation in your sample is due to real correlation in the superpopulation. You do not need causal inference to reason about this question. Moreover, confounding is not relevant.

Confounding and causal inference are only relevant if you want to know whether the correlation in the superpopulation is due to the causal effect of X on Y. You can certainly define the causal effect as the "actual real relationship", but then I don't understand how it is distinct from causation.

Replies from: None
comment by [deleted] · 2015-12-23T01:26:21.021Z · LW(p) · GW(p)

The only possible definition I can think of for a "false" correlation is one that does not exist in the superpopulation, but which appears in your sample due to sampling variability.

Right. Which is the problem randomization attempts to correct for, which I think of as a separate problem from causation.

Replies from: Anders_H
comment by Anders_H · 2015-12-23T01:38:10.857Z · LW(p) · GW(p)

No. Randomization abolishes confounding, not sampling variability.

If your problem is sampling variability, the answer is to increase the power.

If your problem is confounding, the ideal answer is randomization and the second-best answer is modern causality theory.

Statisticians study the first problem; causal inference people study the second.

Replies from: None
comment by [deleted] · 2015-12-23T03:02:56.004Z · LW(p) · GW(p)

Intersample variability is a type of confound. Increasing sample size is another method for reducing confounding due to intersample variability. Maybe you meant intrasample variability, but that doesn't make much sense to me in context. Maybe you think of intersample variability as sampling error? Or maybe you have a weird definition of confounding?

Either way, confounding is a separate problem from causation. You can isolate the confounding variables from the independent variable to determine the correlation between x and y without determining a causal relationship. You can also determine the presence of a causal relationship without isolating the independent variable from possible confounding variables.

The nonrandomized studies are determining causality; they're just doing a worse job at isolating the independent variable, which is what gwern appears to be talking about here.

Replies from: Anders_H
comment by Anders_H · 2015-12-23T03:40:29.232Z · LW(p) · GW(p)

Intersample variability is a type of confound.

No it isn't

Or maybe you have a weird definition of confounding?

I use the standard definition of confounding, based on whether E(Y | X=x) = E(Y | do(X=x)), and think about it in terms of whether there exists a backdoor path between X and Y.

Either way, confounding is a separate problem from causation.

The concept of confounding is defined relative to the causal query of interest. If you don't believe me, try to come up with a coherent definition of confounding that does not depend on the causal question.

You can isolate the confounding variables from the independent variable to determine the correlation between x and y without determining a causal relationship.

With standard statistical techniques you will be able to determine the correlation between X and Y. You will also be able to determine the correlation between X and Y conditional on Z. These are both valid questions and they are both true correlations. Whether either of those correlations is interesting depends on your causal question and on whether Z is a confounder for that particular query.

You can also determine the presence of a causal relationship without isolating the independent variable from possible confounding variables.

No you can't. (Unless you have an instrumental variable, in which case you have to make the assumption that the instrument is unconfounded instead of the treatment of interest)

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-12-23T20:51:20.206Z · LW(p) · GW(p)

Anders_H, you are much more patient than I am!

(re: last sentence, also have to assume no direct effect of instrument, but I am sure you knew that, just emphasizing the confounding assumption since discussion is about confounding).


Grandparent's attitude is precisely what is wrong with LW culture's complete and utter lack of epistemic/social humility (which I think it inherited from Yudkowsky and his planet-sized ego). Him telling you, of all people, that you are using a weird definition of confounding is incredibly amusing.

comment by [deleted] · 2015-12-23T04:01:22.977Z · LW(p) · GW(p)

I just realized the randomized-nonrandomized study was just an example and not what you were talking about.

comment by James_Miller · 2015-12-22T15:31:13.204Z · LW(p) · GW(p)

I just published an article in the conservative FrontPageMag on college safe spaces. It uses a bit of LW-like reasoning.

Replies from: ChristianKl, Viliam
comment by ChristianKl · 2015-12-23T12:02:40.093Z · LW(p) · GW(p)

Congrats on the courage to pick this fight.

Replies from: James_Miller, gjm
comment by James_Miller · 2015-12-23T14:40:26.482Z · LW(p) · GW(p)

Thanks.

comment by gjm · 2015-12-23T13:48:38.979Z · LW(p) · GW(p)

Publishing that article in Front Page Mag doesn't take much courage, so far as I can see. It's not as if many FPM readers are going to disagree with it. What would take courage would be a vigorous defence of "safe spaces" in FPM, or an article like James's in a lefty magazine.

[EDITED to add:] For the avoidance of doubt, I'm not objecting to James's article or saying that publishing it was an act of cowardice! Only that it doesn't seem to require any unusual bravery.

Replies from: Lumifer, ChristianKl
comment by Lumifer · 2015-12-23T16:55:39.324Z · LW(p) · GW(p)

Publishing that article in Front Page Mag doesn't take much courage, so far as I can see. It's not as if many FPM readers are going to disagree with it.

That's not the point. There is a very active witchhunt going on at many US colleges. Mobs with torches and pitchforks are diligently searching for white males guilty of, and I quote

white supremacy, colonialism, anti-black racism, anti-Latinx racism, anti-Native American racism, anti-Native/ indigenous racism, anti-Asian racism, anti-Middle Eastern racism, heterosexism, cis-sexism, xenophobia, anti-Semitism, ableism, mental health stigma, and classism

Any published text or blog post or a tweet, etc. can be construed as evidence of wrongthink. It's basically the Chinese Cultural Revolution, this time as a farce. Most people in academia keep their heads down -- see e.g. this.

Replies from: MrMind, gjm
comment by MrMind · 2015-12-24T11:00:59.249Z · LW(p) · GW(p)

That's not the point. There is a very active witchhunt going on at many US colleges. Mobs with torches and pitchforks are diligently searching for white males guilty of, and I quote

white supremacy, colonialism, anti-black racism, anti-Latinx racism, anti-Native American racism, anti-Native/ indigenous racism, anti-Asian racism, anti-Middle Eastern racism, heterosexism, cis-sexism, xenophobia, anti-Semitism, ableism, mental health stigma, and classism

Well... from my point of view, there's a lot of that in the US. And I come from Italy, for Omega's sake - we have news about the Pope every lunch and dinner. I cannot even imagine how the view from Denmark must be.
With this, I don't mean that witchhunting is the answer, just that it can possibly be understood as a transient overshoot, to which the proper response is a transient undershoot.

Replies from: Lumifer
comment by Lumifer · 2015-12-28T16:05:05.980Z · LW(p) · GW(p)

from my point of view, there's a lot of that in the US

I would venture a guess that your estimate has a lot to do with what kind of media you read and a fair amount to do with what your baseline is :-/

comment by gjm · 2015-12-23T20:29:28.691Z · LW(p) · GW(p)

Note that the person on your last link, despite professing to be terrified of his students, seems to have been happy enough to publish that article with his real name on it. Note also that he links to a number of other pieces by other academics expressing similar opinions, all also apparently not so terrified as to avoid publishing such opinions with their names attached.

So far as I know, no academic has in fact got into any sort of trouble for expressing opinions like those, or like the (milder) ones expressed in James's article.

People died in the Cultural Revolution. This is not in any useful sense "basically the Cultural Revolution". Nor does it bear anything like the same resemblance to the Cultural Revolution as Louis Napoleon's assumption of power did to his uncle's. Calling someone courageous for daring to say openly that the idea of "safe spaces" may have gone too far is like calling someone courageous for daring to say "Merry Christmas". Can we get a bit of perspective here?

Replies from: ChristianKl, Lumifer
comment by ChristianKl · 2015-12-23T21:29:51.019Z · LW(p) · GW(p)

Note that the person on your last link, despite professing to be terrified of his students, seems to have been happy enough to publish that article with his real name on it.

Vox: Edward Schlosser is a college professor, writing under a pseudonym.

Replies from: gjm
comment by gjm · 2015-12-24T17:34:11.974Z · LW(p) · GW(p)

Ha. I actually checked for that, but obviously not carefully enough. My apologies.

[EDITED to add:] OK, so I went back and searched the page, and it doesn't say that anywhere. (Though buried in the middle of the article is a statement along the lines of "all controversial things I write, like this article, are anonymous or pseudonymous", so I still should have known.) Perhaps it's because I'm reading on a mobile device?

Replies from: ChristianKl
comment by ChristianKl · 2015-12-25T00:00:36.857Z · LW(p) · GW(p)

You get that line if you click on the author's name.

The article starts by saying: I'm a professor at a midsize state school. If you read between the lines that's a decision against revealing the name of the school and thus a decision to protect anonymity.

In general the media likes to use pseudonyms when it can't use the real name, so the fact that you have a name on the top is no good evidence that the article isn't written anonymously or under a pseudonym.

Replies from: gjm
comment by gjm · 2015-12-25T16:05:07.901Z · LW(p) · GW(p)

That's why I looked for a statement at the start or end that the name was pseudo. I think not finding such a thing genuinely was evidence of non-pseudonymity, though clearly not enough evidence, as it turned out. I didn't think of clicking on the name because I'm an idiot.

comment by Lumifer · 2015-12-23T20:48:39.184Z · LW(p) · GW(p)

So far as I know, no academic has in fact got into any sort of trouble for expressing opinions like those

Let me enhance your knowledge.

People died in the Cultural Revolution.

"History repeats itself, first as tragedy, second as farce." -- Karl Marx

Replies from: gjm
comment by gjm · 2015-12-23T21:26:55.607Z · LW(p) · GW(p)

Let me enhance your knowledge.

You link to three stories, but so far as I can see only the first of them is actually anything like an example of what we were talking about. Still, that's one more than I knew of, so thank you.

The way many students at Yale responded to Christakis is shocking, for sure. But, again, this is a long long long way from the Cultural Revolution. She didn't lose her life or even her job. And this is an unusually extreme case.

Karl Marx

I knew where the quotation comes from, and what it refers to, and what it means, as you could have worked out:

Nor does it bear anything like the same resemblance to the Cultural Revolution as Louis Napoleon's assumption of power did to his uncle's.

comment by ChristianKl · 2015-12-23T14:11:38.578Z · LW(p) · GW(p)

It's not as if many FPM readers are going to disagree with it.

These days everybody can share a link to an article, and students in James Miller's college can pass around an article written by their prof.

comment by Viliam · 2015-12-25T22:47:00.132Z · LW(p) · GW(p)

I’m almost certain that many Amherst students think the list of demands is dangerous and/or silly.

You could test this hypothesis by providing them a way to give you anonymous feedback. The provided method would have to avoid two things:

  • Despite anonymity, you have to prevent spamming, where one student would give you dozen answers, indistinguishable from dozen answers given by dozen students. This rules out methods like "send me an e-mail from a throwaway account".

  • Not only the content of the feedback, but even the fact whether a student gave you feedback or not, must be kept secret. Otherwise it is easy for the majority to decide not to give you feedback, exposing every student giving you feedback as a likely traitor. This rules out methods like "here is a questionnaire, check the appropriate boxes and throw it into this basket".

Of course, an overly complicated feedback method would be a trivial inconvenience, so fewer people would respond. Also, a complicated method would make them question whether it is really anonymous.

Replies from: James_Miller
comment by James_Miller · 2015-12-25T23:49:24.310Z · LW(p) · GW(p)

Great idea. If I'm ever asked to speak at Amherst, I could give out forms to be immediately filled out. To protect people against being discovered, I could first say: "Everyone think of a number between 1 and 10. OK, if you thought of the number 3, give false answers on this form."

comment by passive_fist · 2015-12-21T20:01:45.707Z · LW(p) · GW(p)

Last week there was a gathering of physicists in Oxford to discuss string theory and the philosophy of science.

From the article:

Nowadays, as several philosophers at the workshop said, Popperian falsificationism has been supplanted by Bayesian confirmation theory, or Bayesianism...

Gross concurred, saying that, upon learning about Bayesian confirmation theory from Dawid’s book, he felt “somewhat like the Molière character who said, ‘Oh my God, I’ve been talking prose all my life!’”

That the Bayesian view is news to so many physicists is itself news to me, and it's very unsettling news. You could say that modern theoretical physics has failed to stay in touch with other areas of science, but you could also make the argument that the rationalist community has failed to properly reach out and communicate with scientists.

Replies from: Mitchell_Porter, IlyaShpitser, Luke_A_Somers, None
comment by Mitchell_Porter · 2015-12-22T12:11:51.474Z · LW(p) · GW(p)

The character from Molière learns a fancy name ("speaking in prose") for the way he already communicates. David Gross isn't saying that he is unfamiliar with the Bayesian view, he's saying that "Bayesian confirmation theory" is a fancy name for his existing epistemic practice.

comment by IlyaShpitser · 2015-12-21T20:20:18.082Z · LW(p) · GW(p)

The rationalist community needs to learn a little humility. Do you realize the disparity in intellectual firepower between "you guys" and theoretical physicists?

Replies from: jacob_cannell, passive_fist, MrMind, Bryan-san
comment by jacob_cannell · 2015-12-22T07:14:01.628Z · LW(p) · GW(p)

This is the overgeneralized IQ fantasy. A really smart physicist may be highly competent at, say, string theory, but know very little about French pastries or CUDA programming or - more to the point - Solomonoff induction.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-12-22T14:32:59.921Z · LW(p) · GW(p)

As I said, here I am. Tell me how Solomonoff induction is going to change how I do my business. I am listening.

Replies from: None, MrMind, Lumifer
comment by [deleted] · 2015-12-23T09:43:20.870Z · LW(p) · GW(p)

As I said, here I am. Tell me how Solomonoff induction is going to change how I do my business. I am listening.

You are already a lesswronger - would you say that lesswrong has changed the way you think at all? Why do you keep coming back?

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-12-23T17:09:46.014Z · LW(p) · GW(p)

I post here, but I don't identify as a rationalist. The two most valuable ideas (to me) that circulate here are tabooing and steelmanning (but they were not invented here).

I think I try to cultivate what you would call the "rationalist mindset" in order to do math. But I view it as a tool for certain problems only, not a part of my identity.

Do you want me to leave?

Replies from: philh, None
comment by philh · 2015-12-23T21:13:51.687Z · LW(p) · GW(p)

I like you being here.

comment by [deleted] · 2015-12-24T05:41:10.792Z · LW(p) · GW(p)

Do you want me to leave?

That wasn't my point. My point was, you are the best one to answer your own question.

comment by MrMind · 2015-12-23T09:04:14.897Z · LW(p) · GW(p)

Solomonoff induction is uncomputable, so it's not going to help you in any way.
But Jaynes (who was a physicist) said that using Bayesian methods to analyze magnetic resonance data gave unprecedented resolution. Quoting from his book:

In the 1987 Ph.D. thesis of G. L. Bretthorst, and more fully in Bretthorst (1988), we applied Bayesian analysis to estimation of frequencies of nonstationary sinusoidal signals, such as exponential decay in nuclear magnetic resonance (NMR) data, or chirp in oceanographic waves. We found – as was expected on theoretical grounds – an improved resolution over the previously used Fourier transform methods. If we had claimed a 50% improvement, we would have been believed at once, and other researchers would have adopted this method eagerly. But, in fact, we found orders of magnitude improvement in resolution.
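A minimal sketch of the idea, in case it helps (this is not Bretthorst's actual analysis; the model, noise level, and frequency grid are all illustrative assumptions): put a flat prior on a grid of candidate frequencies for a decaying sinusoid, fit the quadrature amplitudes at each candidate by least squares as a crude stand-in for marginalizing them out, and weight each candidate by the Gaussian-noise likelihood of the residuals. The posterior over frequency concentrates much more sharply than the corresponding Fourier peak.

```python
import numpy as np

# Illustrative assumptions: known noise level and decay rate, flat prior
# over a frequency grid; quadrature amplitudes fitted by least squares.
rng = np.random.default_rng(0)
t = np.arange(200) * 0.01                       # 2 s sampled at 100 Hz
f_true, decay, sigma = 7.3, 1.5, 0.5
y = np.exp(-decay * t) * np.cos(2 * np.pi * f_true * t) \
    + sigma * rng.normal(size=t.size)

freqs = np.linspace(5.0, 10.0, 2000)            # candidate frequencies
log_post = np.empty_like(freqs)
for i, f in enumerate(freqs):
    # Design matrix: decaying cosine and sine quadratures at frequency f.
    X = np.exp(-decay * t)[:, None] * np.column_stack(
        [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ coef) ** 2)
    log_post[i] = -rss / (2 * sigma ** 2)       # log-likelihood, flat prior

post = np.exp(log_post - log_post.max())
post /= post.sum()
mean_f = np.sum(freqs * post)
print("posterior mean frequency:", mean_f)
print("posterior sd:", np.sqrt(np.sum((freqs - mean_f) ** 2 * post)))
```

The resolution comes from the sharpness of the likelihood rather than from the record length alone, which is the effect Jaynes is describing.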

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-12-23T17:12:56.131Z · LW(p) · GW(p)

jacob_cannell above seems to think it is very important for physicists to know about Solomonoff induction.

Solomonoff induction is one of those ideas that keeps circulating here, for reasons that escape me.


If we are talking about Bayesian methods for data analysis, almost no one on LW who is breathlessly excited about Bayesian stuff actually knows what they are talking about (with 2-3 exceptions, who are stats/ML grad students or up). And when called on it, they retreat to the "Bayesian epistemology" motte.


Bayesian methods didn't save Jaynes from being terminally confused about causality and the Bell inequalities.

Replies from: iarwain1, MrMind, jacob_cannell
comment by iarwain1 · 2015-12-24T15:00:20.880Z · LW(p) · GW(p)

I still haven't figured out what you have against Bayesian epistemology. It's not like this is some sort of LW invention - it's pretty standard in a lot of philosophical and scientific circles, and I've seen plenty of philosophers and scientists who call themselves Bayesians.

Solomonoff induction is one of those ideas that keeps circulating here, for reasons that escape me.

My understanding is that Solomonoff induction is usually appealed to as one of the more promising candidates for a formalization of Bayesian epistemology that uses objective and specifically Occamian priors. I haven't heard Solomonoff promoted as much outside LW, but other similar proposals do get thrown around by a lot of philosophers.

Bayesian methods didn't save Jaynes from being terminally confused about causality and the Bell inequalities.

Of course Bayesianism isn't a cure-all by itself, and I don't think that's controversial. It's just that it seems useful in many fundamental issues of epistemology. But in any given domain outside of epistemology (such as causation or quantum mechanics), domain-relevant expertise is almost certainly more important. The question is more whether domain expertise plus Bayesianism is at all helpful, and I'd imagine it depends on the specific field. Certainly for fundamental physics it appears that Bayesianism is often viewed as at least somewhat useful (based on the conference linked by the OP and by a lot of other things I've seen quoted from professional physicists).

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-12-24T18:02:28.269Z · LW(p) · GW(p)

I don't have any problem with Bayesian epistemology at all. You can have whatever epistemology you want.

What I do have a problem with is this "LW myopia" where people here think they have something important to tell to people like Ed Witten about how people like Ed Witten should be doing their business. This is basically insane, to me. This is strong evidence that the type of culture that gets produced here isn't particularly sanity producing.


Solomonoff induction is useless to know about for anyone who has real work to do (let's say with actual data, like physicists). What would people do with it?

Replies from: iarwain1
comment by iarwain1 · 2015-12-24T18:58:51.798Z · LW(p) · GW(p)

In many cases I'd agree it's pretty crazy, especially if you're trying to go up against top scientists.

On the other hand, I've seen plenty of scientists and philosophers claim that their peers (or they themselves) could benefit from learning more about things like cognitive biases, statistical fallacies, philosophy of science, etc. I've even seen experts claim that a lot of their peers make elementary mistakes in these areas. So it's not that crazy to think that by studying these subjects you can have some advantages over some scientists, at least in some respects.

Of course that doesn't mean you can be sure that you have the advantage. As I said, probably in most cases domain expertise is more important.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-12-24T19:04:55.373Z · LW(p) · GW(p)

Absolutely agree it is important for scientists to know about cognitive biases. Francis Bacon, the father of the empirical method, explicitly used cognitive biases (he called them "idols," and even classified them) as a justification for why the method was needed.

I always said that Francis Bacon should be LW's patron saint.

Replies from: iarwain1, polymathwannabe, MrMind
comment by iarwain1 · 2015-12-24T20:11:35.574Z · LW(p) · GW(p)

So it sounds like you're only disagreeing with the OP in degree. You agree with the OP that a lot of scientists should be learning more about cognitive biases, better statistics, epistemology, etc., just as we are trying to do on LW. You're just pointing out (I think) that the "informed laymen" of LW should have some humility because (a) in many cases (esp. for top scientists?) the scientists have indeed learned lots of rationality-relevant subject matter, perhaps more than most of us on LW, (b) domain expertise is usually more important than generic rationality, and (c) top scientists are very well educated and very smart.

Is that correct?

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-12-24T20:48:25.963Z · LW(p) · GW(p)

Yup!


edit: Although I should say LW "trying to learn better statistics" is too generous. There is a lot more "arguing on the internet" and a lot less "reading" happening.

comment by polymathwannabe · 2015-12-28T13:13:26.834Z · LW(p) · GW(p)

I nominate Carneades, the inventor of the idea of degrees of certainty.

comment by MrMind · 2015-12-28T08:02:13.401Z · LW(p) · GW(p)

I always said that Francis Bacon should be LW's patron saint.

Harry J.E. Potter did receive Bacon's diary as a gift from his DADA teacher, after all.

comment by MrMind · 2015-12-24T10:24:27.608Z · LW(p) · GW(p)

jacob_cannell above seems to think it is very important for physicists to know about Solomonoff induction.

I think a more charitable read would go like this: being smarter doesn't necessarily mean that you know everything there's to know nor that you are more rational than other people. Since being rational or knowing about Bayesian epistemology is important in every field of science, physicists should be motivated to learn this stuff. I don't think he was suggesting that French pastries are literally useful to them.

Solomonoff induction is one of those ideas that keeps circulating here, for reasons that escape me.

Well, LW was born as a forum about artificial intelligence. Solomonoff induction is like an ideal engine for generalized intelligence, which is very cool!

Bayesian methods didn't save Jaynes from being terminally confused about causality and the Bell inequalities.

That's unfortunate, but we cannot ask of anyone, even geniuses, to transcend their time. Leonardo da Vinci held some beliefs that are ridiculous by our standards, just like Ramanujan or Einstein. With this I'm not implying that Jaynes was a genius of that caliber; I would ascribe that status more to Laplace. On the 'bright' side, in our time nobody knows how to reconcile epistemic probability and quantum causality :)

Replies from: ChristianKl, ChristianKl, IlyaShpitser
comment by ChristianKl · 2015-12-24T10:41:12.239Z · LW(p) · GW(p)

Solomonoff induction is like an ideal engine for generalized intelligence

That seems to be a pretty big claim. Can you articulate why you believe it to be true?

Replies from: jacob_cannell, MrMind
comment by jacob_cannell · 2016-01-20T20:49:00.496Z · LW(p) · GW(p)

As far as I am aware, Solomonoff induction describes the singularly correct way to do statistical inference in the limits of infinite compute. (It computes generalized/full Bayesian inference)

All of AI can be reduced to universal inference, so understanding how to do that optimally with infinite compute perhaps helps one think more clearly about how practical efficient inference algorithms can exploit various structural regularities to approximate the ideal using vastly less compute.
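For what it's worth, the connection to ordinary Bayesian inference is easy to see in a toy version. Everything below is an illustrative assumption (the real thing is uncomputable, and this "language" in which a program simply outputs itself on repeat is nobody's universal machine): enumerate short programs, give each a prior weight of 2^-length, keep those consistent with the observed bits, and predict by the posterior-weighted vote.

```python
from itertools import product

def run(program, n):
    """Toy 'language': a program is a bit-tuple that outputs itself
    repeated forever; return the first n output bits."""
    return [program[i % len(program)] for i in range(n)]

def predict_next(observed, max_len=10):
    """Posterior-weighted prediction of the next bit, with prior 2**-L
    over all programs up to length max_len (a crude Solomonoff stand-in)."""
    weights = {0: 0.0, 1: 0.0}
    for L in range(1, max_len + 1):
        for program in product([0, 1], repeat=L):
            if run(program, len(observed)) == list(observed):
                # Consistent with the data so far: it gets a posterior vote.
                nxt = run(program, len(observed) + 1)[-1]
                weights[nxt] += 2.0 ** (-L)
    total = weights[0] + weights[1]
    return {bit: w / total for bit, w in weights.items()}

# After seeing 0101010, short (simple) programs dominate the posterior:
print(predict_next([0, 1, 0, 1, 0, 1, 0]))  # heavily favours 1
```

With a genuinely universal language in place of this toy one, the same weighted vote is Solomonoff's predictor, which is why it is often described as Bayes with a simplicity prior over all computable hypotheses.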

comment by MrMind · 2015-12-28T08:30:24.441Z · LW(p) · GW(p)

Because AIXI is the first complete mathematical model of a general AI and is based on Solomonoff induction.
Also, a computable approximation to the Solomonoff prior has been used to teach a small AI to play videogames unsupervised.
So, yeah.

comment by ChristianKl · 2015-12-24T10:57:22.779Z · LW(p) · GW(p)

That's unfortunate, but we cannot ask of anyone, even geniuses, to transcend their time.

If you don't consider Jaynes to be contemporary, which author do you consider to be his successor who updated where Jaynes went wrong?

Replies from: MrMind
comment by MrMind · 2015-12-28T08:28:41.517Z · LW(p) · GW(p)

While Bretthorst is his immediate and obvious successor, unfortunately nobody that I know of has taken up the task to develop the field the way Jaynes did.

comment by IlyaShpitser · 2015-12-24T18:08:01.518Z · LW(p) · GW(p)

A really smart physicist may be highly competent at say string theory, but know very little about french pasteries or cuda programming or - more to the point - solomonoff induction.

I am pretty sure jacob_cannell specifically brought up Solomonoff induction. I am still waiting for him to explain why I (let alone Ed Witten) should care about this idea.

Since being rational or knowing about Bayesian epistemology is important in every field of science

How do you know what is important in every field of science? Are you a scientist? Do you publish? Where is your confidence coming from, first principles?

Solomonoff induction is like an ideal engine for generalized intelligence, which is very cool!

Whether Solomonoff induction is cool or not is a matter of opinion (and "mathematical taste"), but more to the point, the claim seems to be that it's not only cool but vital for physicists to know about. I want to know why. It seems fully useless to me.

we cannot ask of anyone, even geniuses, to transcend their time.

Jaynes died in 1997. Bayesian networks (the correct bit of math to explain what is going on with Bell inequalities) were written up in book form in 1988, and were known about in various special case forms long before that.

???

Replies from: MrMind, None
comment by MrMind · 2015-12-28T08:26:00.161Z · LW(p) · GW(p)

Where is your confidence coming from, first principles?

Well, yes, of course: Cox's theorem. Journals are starting to reject papers based on the "p<0.05" principle. Many studies in medicine and psychology cannot be replicated. Scientists are using inferior analysis methods when better ones are available, just because they were not taught them.
I do say there's a desperate need to popularize Bayesian thinking.

Jaynes died in 1997. Bayesian networks (the correct bit of math to explain what is going on with Bell inequalities) were written up in book form in 1988, and were known about in various special case forms long before that.

I wasn't referring to that. Jaynes knew that quantum mechanics was incompatible with the epistemic view of probability, and from his writing, while never explicit, it's clear that he was thinking about a hidden-variables model.
An indisputable (loophole-free) violation of the Bell inequalities was demonstrated only this year. Pearl's Causality was published in 2000. We still don't know how to reconcile epistemic probabilities and quantum causality.
What I'm saying is that the field was still in motion when Jaynes died, and there is still a great deal we don't know about it. As I said, we cannot expect anyone not to hold crazy ideas from time to time.

comment by [deleted] · 2015-12-25T09:26:32.281Z · LW(p) · GW(p)

Datapoint: in [biological] systematics in its broadest sense, Bayesian methods are increasingly important (molecular evolution studies,...), but I've never heard about pure Bayesian epistemology being in demand. Maybe because we leave it all to our mathematicians.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-12-25T17:25:49.104Z · LW(p) · GW(p)

Part of the issue I keep harping about is people keep confusing Bayes rule, Bayesian networks, Bayesian statistical inference, and Bayesian epistemology. I don't have any issue with a thoughtful use of Bayesian statistical inference when it is appropriate -- how could I?

My issue is people being confused, or people having delusions of grandeur.

comment by jacob_cannell · 2016-01-20T20:44:19.646Z · LW(p) · GW(p)

jacob_cannell above seems to think it is very important for physicists to know about Solomonoff induction.

Nah - I was just using that as an example of things physicists (regardless of IQ) don't automatically know.

Most physicists were trained to think in terms of Popperian epistemology, which is strictly inferior to (dominated by) Bayesian epistemology (if you don't believe that, it's not worth my time to debate). In at least some problem domains, the difference in predictive capability between the two methodologies is becoming significant.

Physicists don't automatically update their epistemologies; it isn't something they are used to having to update.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2016-01-20T20:50:45.710Z · LW(p) · GW(p)

Most physicists were trained to think in terms of Popperian epistemology, which is strictly inferior to (dominated by) Bayesian epistemology (if you don't believe that, it's not worth my time to debate).

Heh, ok. Thanks for your time!

Replies from: jacob_cannell
comment by jacob_cannell · 2016-01-20T21:00:51.734Z · LW(p) · GW(p)

Ok, so I lied, I'll bite.

I equate "Bayesian epistemology" with a better approximation of universal inference. It's easy to generate example environments where Bayesian agents dominate Popperian agents, while the converse is never true. Popperian agents completely fail to generalize well from small noisy datasets. When you have very limited evidence, Popperian reliance on hard logical falsifiability just fails.
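One toy environment of the kind I mean (the setup, and especially the stylized "Popperian" decision rule, are my own illustrative assumptions, not anyone's official doctrine): estimating a coin's bias from ten noisy flips. The falsification-style agent can only discard hypotheses that are logically contradicted, which finite noisy data never does; the Bayesian agent redistributes belief continuously and extracts a usable estimate from the same small sample.

```python
import numpy as np

rng = np.random.default_rng(1)
grid = np.linspace(0.01, 0.99, 99)   # candidate biases for the coin
flips = rng.random(10) < 0.8         # 10 flips of a 0.8-bias coin

# Bayesian agent: flat prior over the grid, multiply in each flip's likelihood.
log_like = np.sum([np.log(grid) if f else np.log(1 - grid) for f in flips],
                  axis=0)
post = np.exp(log_like - log_like.max())
post /= post.sum()
print("Bayesian posterior mean bias:", np.sum(grid * post))

# Stylized 'Popperian' agent: keep every bias not strictly falsified.
# No bias strictly between 0 and 1 is ever logically contradicted by a
# finite noisy sample, so nothing gets ruled out and no estimate emerges.
survivors = [p for p in grid if 0.0 < p < 1.0]
print("hypotheses falsified:", len(grid) - len(survivors))  # 0
```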

This shouldn't even really be up for debate - do you actually believe the opposite position, or are you just trolling?

comment by Lumifer · 2015-12-22T16:31:23.604Z · LW(p) · GW(p)

French pastries (preferably from a Japanese/Korean pastry shop) are better than Solomonoff induction -- they are yummier.

Replies from: MrMind
comment by MrMind · 2015-12-24T10:28:48.294Z · LW(p) · GW(p)

Ha, but a robot programmed with Solomonoff induction-like software will learn to make French pastries long before pastries will learn how to do Solomonoff induction!

Replies from: Lumifer
comment by Lumifer · 2015-12-28T16:07:45.412Z · LW(p) · GW(p)

a robot programmed with Solomonoff induction-like software will learn to make French pastries long before pastries will learn how to do Solomonoff induction!

French pastries correspond to a pretty long bit-string so you may have to wait for a very long time (and eat a lot of very bad possibly-pastries in the meantime :-P). A physicist can learn to make pastries much quicker.

comment by passive_fist · 2015-12-21T21:08:31.208Z · LW(p) · GW(p)

It could be that the attitude/belief that theoretical physicists are far smarter than anyone else (and therefore, by implication, do not need to listen to anyone else) is part of the problem I'm outlining.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-12-21T21:15:52.331Z · LW(p) · GW(p)

It could be, but I think theoretical physicists actually are very intelligent. Do you disagree?


edit: But let's leave them aside, and talk about me, since I am actually here. I am not in the same league as Ed Witten, not even close. Do you (generic sense) have something sensible to communicate to me about how I go about my business?

Replies from: lfghjkl, passive_fist
comment by lfghjkl · 2015-12-22T11:30:24.494Z · LW(p) · GW(p)

edit: But let's leave them aside, and talk about me, since I am actually here. I am not in the same league as Ed Witten, not even close. Do you (generic sense) have something sensible to communicate to me about how I go about my business?

When did you become a theoretical physicist?

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-12-22T14:31:13.005Z · LW(p) · GW(p)

I am not. But I do theory work, and some of it is even related to analyzing data (and I am actually here to have this conversation, whereas Ed is not). So -- what do you have to teach me?

Replies from: moridinamael
comment by moridinamael · 2015-12-22T16:00:14.737Z · LW(p) · GW(p)

I dunno. I have a PhD in engineering. In my graduate research and in my brief life as a practicing scientist, I used rationalist skills like "search for more hypotheses" and "think exclusively about the problem for five minutes before doing anything else" and generally leveraged LW-style thinking, which I didn't learn in school, to be more successful and productive than I probably would have been otherwise. I could probably write a lengthy article about how I perceive LW to have helped me in my life, but I know that it would seem extremely post hoc and you could also probably say that the skills I'm using are not unique to LW. All I can say is that the core insight that formed the crux of my dissertation arose because I was using a very LW-style approach to analyzing a problem.

The thing about rationalist skills is that LW does not and cannot have a monopoly on them. In fact, the valuable function of LW (at least in the past) has been to both aggregate and sort through potentially actionable strategic directives and algorithms.

What's interesting to me is that school doesn't do that at all. I got through however-many years of schooling and earned a PhD without once taking a class about Science, about how to actually do it, about what the process of Science is. I absorbed some habits from advisers and mentors, but that's about it. The only place I even know of where people talk at length about the inner operations of the mind that correspond to the outer reality where one observes discoveries being made is Less Wrong.

And if you're an entrepreneur and don't care about science, then Less Wrong is also one of the few places where people talk at length about how to marshal your crappy human brain and coax it into working productively on tasks that you have deliberately and strategically chosen.

One problem is that I'm probably thinking of the Less Wrong of four years ago rather than the Less Wrong of today. In any case, all those old posts that I found so much value in are still there.

Replies from: Vaniver
comment by Vaniver · 2015-12-22T18:17:37.558Z · LW(p) · GW(p)

the skills I'm using are not unique to LW.

I feel like this is an important point that goes a long way to give one the intellectual / social humility IlyaShpitser is pointing at, and I agree completely that the value of LW as a site/community/etc. is primarily in sorting and aggregating. (It's the people that do the creating or transferring.)

comment by passive_fist · 2015-12-21T21:28:08.031Z · LW(p) · GW(p)

You are correct in that surveys of IQ and other intelligence scores consistently show physicists having some of the highest. But mathematics, statistics, computer science, and engineering score comparably, and most studies I've seen generally find very little, if any, significant difference in intelligence scores between these fields.

'Rationalist' isn't a field or specialization, it's defined more along the lines of refining and improving rational thinking. Based on the lesswrong survey, fields like mathematics and computer science are heavily represented here. There are actually more physicists (4.3%) than philosophers (2.4%). If this is inconsistent with your perception of the community, update your prior.

From all of this it is safe to assume that the average LW'er is 'very smart', and that LW contains a mini-community of rationalist scientists. One data point: Me. I have a PhD in engineering and I'm a practising scientist. Maybe I should have phrased my initial comment as: "It might be better if the intersection of rationalists and scientists were larger."

Replies from: ChristianKl
comment by ChristianKl · 2015-12-21T22:30:16.114Z · LW(p) · GW(p)

While 4.3% of LW members are physicists, the reverse isn't true.

comment by MrMind · 2015-12-22T08:32:52.665Z · LW(p) · GW(p)

If only smart people were automatically bias free...

comment by Bryan-san · 2015-12-21T21:01:31.750Z · LW(p) · GW(p)

Could you expand on this further? I'm not sure I understand your argument. Also, intellectual humility or social humility?

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-12-21T21:08:57.835Z · LW(p) · GW(p)

Re: your last question: yes.

(a) It is very difficult to perceive qualitative differences for people 1 sigma+ above "you" (for any value of "you"), but the difference is enormous.

(b) How much "science process" does this community actually understand? How many are practicing scientists, as in publish real stuff in journals?

The outside view worry is there might be a bit of a "twenty-something know-it-all" thing going on. You read some stuff, and liked it. That's great! If the stuff isn't universally adopted by very smart folks, there are probably very good reasons for that! Read more!


My argument boils down to: "no, really, very smart people are actually very smart."

Replies from: 9eB1
comment by 9eB1 · 2015-12-22T03:38:23.867Z · LW(p) · GW(p)

The median IQ at LessWrong is 139, and the average Nobel laureate is reputed to have an IQ of 145. Presumably that means many people at LessWrong are in a position to understand the reasoning of Nobel laureates, at least.

Replies from: IlyaShpitser, philh
comment by IlyaShpitser · 2015-12-22T03:50:03.298Z · LW(p) · GW(p)

The gap between the average Nobel laureate (in physics, say) and the average LWer is enormous. If your measure says it isn't, it's a crappy measure.

Replies from: Vaniver, MrMind
comment by Vaniver · 2015-12-22T14:40:38.740Z · LW(p) · GW(p)

I calculate about 128 for the average IQ of a survey respondent who provides one and I suspect that nonresponse means the actual average is closer to 124 or so. (Thus I agree with you that there is a significant gap between the average Nobel laureate and the average LWer.)

I think the right way to look at LW's intellectual endowment is that it's very similar to a top technical college, like Harvey Mudd. There are a handful of professor/postdoc/TA types running around, but as a whole the group skews very young (graph here, 40 is 90th percentile) and so even when people are extraordinarily clever they don't necessarily have the accomplishments or the breadth for that to be obvious. (And because of how IQ distributions work, especially truncated ones with a threshold, we should expect most people to be close to the threshold.)

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-12-22T14:58:06.377Z · LW(p) · GW(p)

I agree with this. I think looking at a typical LWer as a typical undergrad at Harvey Mudd is a good model. (This is not a slur, btw, Harvey Mudd is great).

Replies from: Kaj_Sotala
comment by MrMind · 2015-12-22T08:29:58.540Z · LW(p) · GW(p)

What makes you so confident that your model is correct, instead of the data disproving it?
No sarcasm, it's an honest question.

Replies from: Vaniver, IlyaShpitser
comment by Vaniver · 2015-12-22T15:02:05.103Z · LW(p) · GW(p)

I look at the IQ results for the survey every year. A selected handful of comments:

Karma vs. multiple IQ tests: positive correlation (.45) between self-report and Raven's for users with positive karma, negative correlation (-.11) between self-report and Raven's for users without positive karma.

SATs are very high: 96th percentile in the general population is lower quartile here. (First place I make the Harvey Mudd comparison.)

SAT self-report vs. IQ self-report: average SAT, depending on which one you look at and how you correct it, suggests that the average LWer is somewhere between 98th and 99.5th percentile. (IQ self-report average is above 99.5th percentile, and so I call the first "very high" and the second "extremely high.")


I've interacted with a handful of Nobel laureates, I'm familiar with the professors and students at two top-15 graduate physics programs, and I've interacted with a bunch of LWers. LW as a whole seems roughly comparable to an undergraduate physics department, active LWers roughly comparable to a graduate physics department, and there are top LWers at the level of the Nobel laureates (but aging means the ~60-year-old Nobel laureates are not a fair comparison to the ~30-year-old top LWers, and this is selecting just the math-genius types from the top LWers, not the most popular top LWers). Recall Marcello comparing Conway and Yudkowsky.

Replies from: IlyaShpitser, MrMind
comment by IlyaShpitser · 2015-12-22T20:37:18.886Z · LW(p) · GW(p)

That last link is kinda cringeworthy.

Replies from: Vaniver
comment by Vaniver · 2015-12-22T21:19:05.221Z · LW(p) · GW(p)

I just spent the last minute or so trying to figure out what you didn't like about my percentile comparisons. ;)

The underlying subject is often painful to discuss, so even handled well there will be things to cringe about.

comment by MrMind · 2015-12-24T10:32:53.887Z · LW(p) · GW(p)

I don't know if you knew that my question was directed at IlyaShpitser and not at you... I do not doubt your data.

comment by IlyaShpitser · 2015-12-22T14:34:15.848Z · LW(p) · GW(p)

Because I hung out with some top academic people, I know what actual genius is like.


Incidentally, when I talk about people being "very smart" I don't mean "as measured by IQ." As I mentioned lots of times before, I think IQ is a very poor measure of math smarts, and a very poor measure of generalized smarts at the top end. Intelligence is too heterogeneous, and too high-dimensional. But there is such a thing as being "very smart"; it's just a multidimensional thing.

So in this case, I just don't think there is a lot of info in the data. I much prefer looking at what people have done as a proxy for their smarts. "If you are so smart, where are all your revolutionary papers?" This also correctly adjusts for people who actually are very smart, but who bury their talents (and so their hypothetical smarts are not super interesting to talk about).

Replies from: MrMind
comment by MrMind · 2015-12-24T10:37:54.161Z · LW(p) · GW(p)

I've already had this discussion with someone else, about another topic: I pointed out that, statistically, lottery winners end up no happier than they were before winning. He said that he knew how to spend the money well enough to be effectively much happier.
In our discussion, you have some insights that from my perspective are biased, but from your point of view are not. Unfortunately, your data rely on uncommunicable evidence, so we should just disagree and call it a day.

Replies from: gjm, IlyaShpitser
comment by gjm · 2015-12-24T18:06:16.610Z · LW(p) · GW(p)

lottery winners end up not happier

Lottery winners do end up happier.

Replies from: MrMind
comment by MrMind · 2015-12-28T08:35:56.780Z · LW(p) · GW(p)

Thanks, I updated!

comment by IlyaShpitser · 2015-12-24T17:53:57.336Z · LW(p) · GW(p)

Well, you don't have to agree if you don't want.

But the situation is not as hopeless as it seems. Try to find some people at the top of their game, and hang out with them for a bit. Honestly, if you think "Mr. Average Less Wrong" and Ed Witten are playing in the same stadium, you are being a bit myopic. But this is the kind of thing more info can help with. You say you can't use my info (and don't want to take my word for it), but you can generate your own if you care.

Replies from: MrMind
comment by MrMind · 2015-12-28T08:36:33.297Z · LW(p) · GW(p)

Try to find some people at the top of their game, and hang out with them for a bit.

Will do! We'll see how it pans out :D

comment by philh · 2015-12-22T15:32:04.786Z · LW(p) · GW(p)

the average Nobel laureate is reputed to have an IQ of 145.

Is there a reliable source for this?

[1] is one source. Its method is: "Jewish IQ is distributed like American-of-European-ancestry IQ, but a standard deviation higher. If you look at the population above a certain IQ threshold, you see a higher fraction of Jews than in the normal population. If you use the threshold of 139, you see 27% Jews, which is the fraction of Nobel laureates who are Jewish. So let's assume that Nobel laureate IQ is distributed like AOEA IQ after you cut off everyone with IQ below 139. It follows that Nobel laureates have an average IQ of 144."

I hope you'll agree that this seems dubious.
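For what it's worth, the arithmetic of [1] does follow from its assumptions. A quick check, assuming the untruncated distribution is N(100, 15²) and using the standard formula for the mean of a normal truncated from below:

```python
from math import exp, pi, sqrt, erf

def truncated_normal_mean(mu, sigma, cutoff):
    """Mean of N(mu, sigma^2) conditional on exceeding cutoff:
    mu + sigma * pdf(a) / P(Z > a), where a = (cutoff - mu) / sigma."""
    a = (cutoff - mu) / sigma
    pdf = exp(-a * a / 2) / sqrt(2 * pi)
    tail = 0.5 * (1 - erf(a / sqrt(2)))   # upper-tail probability P(Z > a)
    return mu + sigma * pdf / tail

# Assumed inputs: mean 100, SD 15, cutoff 139.
print(truncated_normal_mean(100, 15, 139))  # ~143.7, i.e. the quoted 144
```

So the number is right given the assumptions; it's the assumptions doing all the work.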

[2] agrees that it's dubious, and tries to calculate it a different way (still based on fraction of Jews), and gets 136. (It's only reported by field, but it would be the same as chemistry and literature, because they're both 27% Jews.) It gets that number by doing a bunch of multiplications which I suspect are the wrong multiplications to do. (Apparently, if IQ tests had less g loading, and if self-identified ethnicity correlated less with ancestry, then the g loading of Jewishness would go up?) But even if the calculations do what they're supposed to, it feels like a long chain of strong assumptions and noisy data, and this method seems about equally dubious to me.

comment by Luke_A_Somers · 2015-12-21T21:04:33.538Z · LW(p) · GW(p)

What gets me more is the guy who was complaining that the atomic theory is left in the same framework with 1-epsilon probability.

No, this is not a problem.

comment by [deleted] · 2015-12-22T07:19:00.744Z · LW(p) · GW(p)

I tried to get a discussion going on this exact subject in my post this week, but there seemed to be little interest. A major weakness of the standard Bayesian inference method is that it assumes a problem only has two possible solutions. Many problems have more than two possible solutions, often the number of possible solutions is unknown, and in many cases the correct solution hasn't been thought of yet. In such instances, confirmation through inductive inference may not be the best way of looking at the problem.

Replies from: IlyaShpitser, passive_fist, MrMind
comment by IlyaShpitser · 2015-12-22T15:19:21.168Z · LW(p) · GW(p)

A major weakness

Where did you get this from? Maintaining beliefs over an entire space of possible solutions is a strength of the Bayesian approach. Please don't talk about Bayesian inference after reading a single thing about updating beliefs on whether a coin is fair or not. That's just a simple tutorial example.

Replies from: None
comment by [deleted] · 2015-12-22T16:03:19.963Z · LW(p) · GW(p)

If I have 3 options, A, B, and C, and I'm 40% certain the best option is A, 30% certain the best option is B, and 30% certain the best option is C, would it be correct to say that I've confirmed option A, rather than saying my best evidence suggests A? This can sort of be corrected for with the standard Bayesian confirmation model, but the problem becomes larger as the number of possibilities increases to the point where you can't get a good read on your own certainty, or to the point where the number of possibilities is unknown.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-12-22T16:32:36.205Z · LW(p) · GW(p)

I don't understand your question. Is this about maintaining beliefs over hypotheses or decision-making?

Replies from: None
comment by [deleted] · 2015-12-22T16:56:53.983Z · LW(p) · GW(p)

I'm arguing that Bayesian confirmation theory as a philosophy was originally conceived as a model using only two possibilities (A and ~A), and then this model was extrapolated into problems with more than two possibilities. If it had been originally conceived using more than two possibilities, it wouldn't have made any sense to use the word confirmation. So explanations of Bayesian confirmation theory will often entail considering theories or decisions in isolation rather than as part of a group of decisions or theories.

So if there are 20 possible explanations for a problem, and there is no strong evidence suggesting any one explanation, then I will have 5% certainty of the average explanation. Unless I am extremely good at calibration, then I can't confirm any of them, and if I consider each explanation in isolation from the other explanations, then all of them are wrong.

It doesn't matter whether we're talking about hypotheses or decision-making.

Replies from: gjm, Lumifer
comment by gjm · 2015-12-22T19:20:38.782Z · LW(p) · GW(p)

Bayesian confirmation theory as a philosophy was originally conceived as a model using only two possibilities

I'm not sure whether this is true, but it's irrelevant. Bayesian confirmation theory works just fine with any number of hypotheses.

then I can't confirm any of them

If by "confirm" you mean "assign high probability to, without further evidence", yes. That seems to me to be exactly what you'd want. What is the problem you see here?

comment by Lumifer · 2015-12-22T17:09:22.676Z · LW(p) · GW(p)

If it had been originally conceived using more than two possibilities, it wouldn't have made any sense to use the word confirmation.

You sound confused. The "confirmation" stems from

In Bayesian Confirmation Theory, it is said that evidence confirms (or would confirm) hypothesis H (to at least some degree) just in case the prior probability of H conditional on E is greater than the prior unconditional probability of H

(source)

Replies from: None
comment by [deleted] · 2015-12-22T17:27:17.946Z · LW(p) · GW(p)

So what if p(H) = 1, p(H|A) = .4, p(H|B) = .3, and p(H|C) = .3? The evidence would suggest all are wrong. But I have also determined that A, B, and C are the only possible explanations for H. Clearly there is something wrong with my measurement, but I have no method of correcting for this problem.

Replies from: Lumifer, gjm, IlyaShpitser, LawChan
comment by Lumifer · 2015-12-22T17:38:15.354Z · LW(p) · GW(p)

H is Hypothesis. You have three: HA, HB, and HC. Let's say your prior is that they are equally probable, so the unconditional P(HA) = P(HB) = P(HC) = 0.33

Let's also say you saw some evidence E and your posteriors are P(HA|E) = 0.4, P(HB|E) = 0.3, P(HC|E) = 0.3. This means that evidence E confirms HA because P(HA|E) > P(HA). This does not mean that you are required to believe that HA is true or bet your life's savings on it.
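Spelled out in code, in case it helps (the likelihoods are made-up numbers chosen to reproduce the posteriors above):

```python
priors = {"HA": 1/3, "HB": 1/3, "HC": 1/3}
likelihoods = {"HA": 0.4, "HB": 0.3, "HC": 0.3}   # P(E | H), illustrative

p_e = sum(priors[h] * likelihoods[h] for h in priors)            # P(E)
posteriors = {h: priors[h] * likelihoods[h] / p_e for h in priors}

print(posteriors)                              # HA: 0.4, HB: 0.3, HC: 0.3
print("E confirms HA:", posteriors["HA"] > priors["HA"])         # True
print("E confirms HB:", posteriors["HB"] > priors["HB"])         # False
```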

Replies from: None
comment by [deleted] · 2015-12-22T17:57:35.146Z · LW(p) · GW(p)

That's a really good explanation of part of the problem I was getting at. But that requires considering the three hypotheses as a group rather than in isolation from all other hypotheses to calculate 0.33.

Replies from: Lumifer, Vaniver
comment by Lumifer · 2015-12-22T18:06:17.037Z · LW(p) · GW(p)

But that requires considering the three hypotheses as a group rather than in isolation from all other hypotheses to calculate 0.33

No, it does not.

Let's say you have a hypothesis HZ. You have a prior for it, say P(HZ) = 0.2 which means that you think that there is a 20% probability that HZ is true and 80% probability that something else is true. Then you see evidence E and it so happens that the posterior for HZ becomes 0.25, so P(HZ|E) = 0.25. This means that evidence E confirmed hypothesis HZ and that statement requires nothing from whatever other hypotheses HA,B,C,D,E,etc. might there be.

Replies from: None
comment by [deleted] · 2015-12-22T18:14:31.910Z · LW(p) · GW(p)

How would you calculate that prior of 0.2? In my original example, my prior was 1, and then you transformed it into 0.33 by dividing by the number of possible hypotheses. You wouldn't be able to do that without taking the other two possibilities into account. As I said, the issue can be corrected for if the number of hypotheses is known, but not if the number of possibilities is unknown. However, philosophical treatments of Bayesian confirmation theory frequently don't consider this problem. From this paper by Morey, Romeijn, and Rouder:

Overconfident Bayes is problematic because it lacks the necessary humility that accompanies the understanding that inferences are based on representations. We agree that there is a certain silliness in computing a posterior odds between model A and model B, seeing that it is in favour of model A by 1 million to one, and then declaring that model A has a 99.9999% probability of being true. But this silliness arises not from model A being false. It arises from the fact that the representation of possibilities is quite likely impoverished because there are only two models. This impoverished representation makes translating the representational statistical inferences into inferences pertaining to the real world difficult or impossible.

Replies from: Lumifer, IlyaShpitser
comment by Lumifer · 2015-12-22T18:25:57.253Z · LW(p) · GW(p)

You need to read up on basic Bayesianism.

In my original example, my prior was 1

Priors are always for a specific hypothesis. If your prior is 1, this means you believe this hypothesis unconditionally and no evidence can make you stop believing it.

You are talking about the requirement that all mutually exclusive probabilities must sum to 1. That's just a property of probabilities and has nothing to do with Bayes.

the issue can be corrected for if the number of hypotheses is known, but not if the number of possibilities is unknown.

Yes, it can. To your "known" hypotheses you just add one more which is "something else".

Really, just go read. You are confused because you misunderstand the basics. Stop with the philosophy and just figure out how the math works.

Replies from: None
comment by [deleted] · 2015-12-22T18:43:58.727Z · LW(p) · GW(p)

I'm not arguing with the math; I'm arguing with how the philosophy is often applied. Consider the situation where my prior is greater than my evidence for all choices I've looked at, the number of possibilities is unknown, but I still need to make a decision about the problem. As the paper I was originally referencing mentioned, what if all options are false?

Replies from: Lumifer
comment by Lumifer · 2015-12-22T18:55:30.632Z · LW(p) · GW(p)

I'm not arguing with the math; I'm arguing with how the philosophy is often applied.

You are not arguing, you're just being incoherent. For example,

my prior is greater than my evidence for all choices I've looked at

...that sentence does not make any sense.

what if all options are false?

Then the option "something else" is true.

Replies from: None
comment by [deleted] · 2015-12-22T18:59:00.725Z · LW(p) · GW(p)

But you can't pick something else; you have to make a decision.

Replies from: Lumifer
comment by Lumifer · 2015-12-22T19:06:58.559Z · LW(p) · GW(p)

What does "have to make a decision" mean when "all options are false"?

Are you thinking about the situation when you have, say, 10 alternatives with the probabilities of 10% each except for two, one at 11% and one at 9%? None of them are "true" or "false", you don't know that. What you probably mean is that even the best option, the 11% alternative, is more likely to be false than true. Yes, but so what? If you have to pick one, you pick the RELATIVE best and if its probability doesn't cross the 50% threshold, well, them's the breaks.

Replies from: None
comment by [deleted] · 2015-12-22T19:22:16.857Z · LW(p) · GW(p)

Yes, that is exactly what I'm getting at. It doesn't seem reasonable to say you've confirmed the 11% alternative. But then there's another problem: what if you have to make this decision multiple times? Do you throw out the other alternatives and only focus on the 11%? That would lead to status quo bias. So you have to keep the other alternatives in mind, but what do you do with them? Would you then say you've confirmed those other alternatives? This is where the necessity of something like falsification comes into play. You've got to continue analyzing multiple options as new evidence comes in, but trying to analyze all the alternatives is too difficult, so you need a way to throw out certain alternatives, even though you never actually confirm any of them. These problems come up all the time in day-to-day decision making, such as deciding what's for dinner tonight.

Replies from: Lumifer, gjm
comment by Lumifer · 2015-12-22T19:31:48.393Z · LW(p) · GW(p)

It doesn't seem reasonable to say you've confirmed the 11% alternative.

In the context of the Bayesian confirmation theory, it's not you who "confirms" the hypothesis. It's evidence which confirms some hypothesis and that happens at the prior -> posterior stage. Once you're dealing with posteriors, all the confirmation has already been done.

what if you have to make this decision multiple times?

Do you get any evidence to update your posteriors? Is there any benefit to picking different alternatives? If no and no, then sure, you repeat your decision.

That would lead to status quo bias.

No, it would not. That's not what the status quo bias is.

You keep on using words without understanding their meaning. This is a really bad habit.

Replies from: None
comment by [deleted] · 2015-12-22T19:44:49.487Z · LW(p) · GW(p)

When I say "throw out", I'm talking about halting tests, not changing the decision.

Replies from: Lumifer
comment by Lumifer · 2015-12-22T19:54:33.035Z · LW(p) · GW(p)

If your problem is which tests to run, then you're in the experimental design world. Crudely speaking, you want to rank your available tests by how much information they will give you and then do those which have high expected information and discard those which have low expected information.
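A minimal sketch of that ranking (the hypotheses, priors, and test characteristics are all made-up numbers): score each binary test by its expected reduction in entropy over the hypothesis set.

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def expected_info_gain(prior, like_pos):
    """Expected entropy reduction from one binary test.

    prior    : P(H_i) over the hypotheses
    like_pos : P(test comes out positive | H_i) for each hypothesis
    """
    prior, like_pos = np.asarray(prior, float), np.asarray(like_pos, float)
    p_pos = np.sum(prior * like_pos)
    post_pos = prior * like_pos / p_pos
    post_neg = prior * (1 - like_pos) / (1 - p_pos)
    h_after = p_pos * entropy(post_pos) + (1 - p_pos) * entropy(post_neg)
    return entropy(prior) - h_after

prior = [0.5, 0.3, 0.2]
print(expected_info_gain(prior, [0.9, 0.1, 0.1]))  # discriminating test
print(expected_info_gain(prior, [0.5, 0.5, 0.5]))  # useless test: 0 bits
```

You run the tests with high expected gain first and drop the ones near zero.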

Replies from: None
comment by [deleted] · 2015-12-22T19:58:28.516Z · LW(p) · GW(p)

True.

comment by gjm · 2015-12-22T19:31:02.189Z · LW(p) · GW(p)

All you have to do is not simultaneously use "confirm" to mean both "increase the probability of" and "assign high probability to".

As for throwing out unlikely possibilities to save on computation: that (or some other shortcut) is sometimes necessary, but it's an entirely separate matter from Bayesian confirmation theory or indeed Popperian falsificationism. (Popper just says to rule things out when you've disproved them. In your example, you have a bunch of things near to 10% and Popper gives you no licence to throw any of them out.)

Replies from: None
comment by [deleted] · 2015-12-22T20:39:59.389Z · LW(p) · GW(p)

Yes, sorry. I'm considering multiple sources which I recognize the rest of you haven't read, and trying to translate them into short comments, which I'm probably not the best person to do, so the problem I'm talking about may come out a bit garbled, but I think the quote from the Morey et al. paper above describes the problem best.

Replies from: gjm
comment by gjm · 2015-12-22T22:03:41.871Z · LW(p) · GW(p)

You see how Morey et al call the position they're criticizing "Overconfident Bayesianism"? That's because they're contrasting it with another way of doing Bayesianism, about which they say "we suspect that most Bayesians adhere to a similar philosophy". They explicitly say that what they're advocating is a variety of Bayesian confirmation theory.

Replies from: None, None
comment by [deleted] · 2015-12-22T22:34:07.931Z · LW(p) · GW(p)

The part about deduction from the Morey et al. paper:

GS describe model testing as being outside the scope of Bayesian confirmation theory, and we agree. This should not be seen as a failure of Bayesian confirmation theory, but rather as an admission that Bayesian confirmation theory cannot describe all aspects of the data analysis cycle. It would be widely agreed that the initial generation of models is outside Bayesian confirmation theory; it should then be no surprise that subsequent generation of models is also outside its scope.

Replies from: gjm
comment by gjm · 2015-12-24T10:41:45.508Z · LW(p) · GW(p)

Who has been claiming that Bayesian confirmation theory is a tool for generating models?

(It can kinda-sorta be used that way if you have a separate process that generates all possible models, hence the popularity of Solomonoff induction around here. But that's computationally intractable.)

comment by [deleted] · 2015-12-22T22:15:20.557Z · LW(p) · GW(p)

As stated in my original comment, confirmation is only half the problem to be considered. The other half is inductive inference, which is what many people mean when they refer to Bayesian inference. I'm not saying one way is clearly right and the other wrong, but that this is a difficult problem to which the standard solution may not be best.

You'd have to read the Andrew Gelman paper they're responding to in order to see a criticism of confirmation.

comment by IlyaShpitser · 2015-12-22T19:03:22.545Z · LW(p) · GW(p)

As I said, the issue can be corrected for if the number of hypotheses is known, but not if the number of possibilities is unknown

You don't need to know the number, you need to know the model (which could have infinite hypotheses in it).

Your model (hypothesis set) could be specified by an infinite number of parameters, say "all possible means and variances of a Gaussian." You can have a prior on this space, which is a density. You update the density with evidence to get a new density. This is Bayesian stats 101. Why not just go read about it? Bishop's machine learning book is good.

Replies from: None
comment by [deleted] · 2015-12-22T19:07:06.863Z · LW(p) · GW(p)

True, but working from a model is not an inductive method, so it can't be classified as confirmation through inductive inference, which is what I'm criticizing.

Replies from: Lumifer, IlyaShpitser
comment by Lumifer · 2015-12-22T19:16:40.913Z · LW(p) · GW(p)

You are severely confused about the basics. Please unconfuse yourself before getting to the criticism stage.

Replies from: None
comment by [deleted] · 2015-12-22T19:35:19.378Z · LW(p) · GW(p)

??? IlyaShpitser, if I understand correctly, is talking about creating a model of a prior, collecting evidence, and then determining whether the model is true or false. That's hypothesis testing, which is deduction, not induction.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-12-22T19:42:39.724Z · LW(p) · GW(p)

You don't understand.

You have a (possibly infinite) set of hypotheses. You maintain beliefs about this set. As you get more data, your beliefs change. To maintain beliefs you need a distribution/density. To do that you need a model (a model is just a set of densities you consider). You may have a flexible model and let the data decide how flexible you want to be (non-parametric Bayes stuff, I don't know too much about it), but there's still a model.
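The simplest concrete instance of maintaining a density over an infinite hypothesis set (known noise variance, Normal prior on the unknown mean; all the numbers are arbitrary):

```python
import numpy as np

# Hypothesis set: every real number is a candidate mean -- infinitely many.
# Prior density: mu ~ N(prior_mean, prior_var); noise variance assumed known.
prior_mean, prior_var, noise_var = 0.0, 100.0, 4.0

data = np.array([2.1, 1.7, 2.5, 2.0])

# Conjugate update: the posterior over mu is again a Gaussian density.
n = data.size
post_var = 1.0 / (1.0 / prior_var + n / noise_var)
post_mean = post_var * (prior_mean / prior_var + data.sum() / noise_var)

print(post_mean, post_var)  # a full belief state over uncountably many hypotheses
```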

Suggesting for the third and final time to get off the internet argument train and go read a book about Bayesian inference.

Replies from: None
comment by [deleted] · 2015-12-22T19:50:04.044Z · LW(p) · GW(p)

Oh, sorry I misunderstood your argument. That's an interesting solution.

Replies from: gjm
comment by gjm · 2015-12-22T22:06:05.440Z · LW(p) · GW(p)

That interesting solution is exactly what people doing Bayesian inference do. Any criticism you may have that doesn't apply to what Ilya describes isn't a criticism of Bayesian inference.

comment by IlyaShpitser · 2015-12-22T19:35:00.602Z · LW(p) · GW(p)

As much as I hate to do it, I am going to have to agree with Lumifer, you sound confused. Go read Bishop.

comment by Vaniver · 2015-12-22T18:13:12.210Z · LW(p) · GW(p)

But that requires considering the three hypotheses as a group rather than in isolation from all other hypotheses to calculate 0.33.

Not really. A hypothesis's prior probability comes from the total of all of your knowledge; in order to determine that P(HA)=0.33, Lumifer needed the additional fact that there were three possibilities that were all equally likely.

It works just as well if I say that my prior is P(HA)=0.5, without any exhaustive enumeration of the other possibilities. Then evidence E confirms HA if P(HA|E)>P(HA).

(One should be suspicious that my prior probability assessment is a good one if I haven't accounted for all the probability mass, but the mechanisms still work.)

Replies from: None
comment by [deleted] · 2015-12-22T18:46:06.396Z · LW(p) · GW(p)

One should be suspicious that my prior probability assessment is a good one if I haven't accounted for all the probability mass, but the mechanisms still work.

Which is one of the other problems I was getting at

comment by gjm · 2015-12-22T19:24:09.035Z · LW(p) · GW(p)

If you start with inconsistent assumptions, you get inconsistent conclusions. If you believe P(H)=1, P(A or B or C)=1, and P(H|A) etc. are all <1, then you have already made a mistake. Why are you blaming this on Bayesian confirmation theory?

comment by IlyaShpitser · 2015-12-22T17:36:08.390Z · LW(p) · GW(p)

You are confused. If p(H) = 1, p(H, anything) = 1 or 0, so p(H | anything) = 1 or 0, if p(anything) > 0.
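Spelled out, for any event $A$ with $P(A) > 0$: since $P(\lnot H) = 0$ forces $P(\lnot H \wedge A) = 0$,

$$P(H \mid A) = \frac{P(H \wedge A)}{P(A)} = \frac{P(A) - P(\lnot H \wedge A)}{P(A)} = \frac{P(A)}{P(A)} = 1,$$

so a coherent distribution with $p(H) = 1$ leaves no room for $p(H|A) = 0.4$.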

comment by LawrenceC (LawChan) · 2015-12-22T17:31:41.763Z · LW(p) · GW(p)

Wait, how would you get P(H) = 1?

Replies from: None
comment by [deleted] · 2015-12-22T17:36:30.596Z · LW(p) · GW(p)

Fine. p(H) = 0.5, p(H|A) = 0.2, p(H|B) = 0.15, p(H|C) = 0.15. It's not really relevant to the problem.

Replies from: Vaniver, IlyaShpitser
comment by Vaniver · 2015-12-22T18:28:44.112Z · LW(p) · GW(p)

It's not really relevant to the problem.

The relevance is that it's a really weird way to set up a problem. If P(H)=1 and P(H|A)=0.4 then it is necessarily the case that P(A)=0. If that's not immediately obvious to you, you may want to come back to this topic after sleeping on it.

Replies from: None
comment by [deleted] · 2015-12-22T18:37:09.332Z · LW(p) · GW(p)

Fair enough.

comment by IlyaShpitser · 2015-12-22T17:38:06.261Z · LW(p) · GW(p)

\sum_i p(H|i) need not add up to p(H) (or indeed to 1).

Replies from: None
comment by [deleted] · 2015-12-22T18:00:03.560Z · LW(p) · GW(p)

No, it doesn't.

Edit - I'm agreeing with you. Sorry if that wasn't clear.

comment by passive_fist · 2015-12-22T09:26:04.351Z · LW(p) · GW(p)

A major weakness of the standard Bayesian inference method is that it assumes a problem only has two possible solutions.

This is not true at all.

Replies from: None
comment by [deleted] · 2015-12-22T15:30:10.266Z · LW(p) · GW(p)

A large chunk of academics would say that it is. For example, from the paper I was referencing in my post:

At some point in history, a statistician may well write down a model which he or she believes contains all the systematic influences among properly defined variables for the system of interest, with correct functional forms and distributions of noise terms. This could happen, but we have never seen it, and in social science we have never seen anything that comes close. If nothing else, our own experience suggests that however many different specifications we thought of, there are always others which did not occur to us, but cannot be immediately dismissed a priori, if only because they can be seen as alternative approximations to the ones we made. Yet the Bayesian agent is required to start with a prior distribution whose support covers all alternatives that could be considered.

Replies from: gjm
comment by gjm · 2015-12-22T15:47:46.755Z · LW(p) · GW(p)

That doesn't at all say Bayesian reasoning assumes only two possibilities. It says Bayesian reasoning assumes you know what all the possibilities are.

Replies from: None
comment by [deleted] · 2015-12-22T15:56:03.500Z · LW(p) · GW(p)

True, but how often do you see an explanation of Bayesian reasoning in philosophy that uses more than two possibilities?

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-12-22T17:26:10.483Z · LW(p) · GW(p)

. . .

comment by MrMind · 2015-12-22T08:25:08.642Z · LW(p) · GW(p)

A major weakness of the standard Bayesian inference method is that it assumes a problem only has two possible solutions.

This is a weird sentence to me. I learned about Bayesian inference through Jaynes' book, and it surely doesn't portray inference as allowing only two possible solutions.
The other book I know about, Sivia's, doesn't do this either.

Replies from: None
comment by [deleted] · 2015-12-22T15:30:57.028Z · LW(p) · GW(p)

You're referring to how it is described in statistics textbooks. I'm talking about confirmation theory as a philosophy.

comment by cleonid · 2015-12-21T13:27:05.922Z · LW(p) · GW(p)

From Omnilibrium:

Replies from: LessWrong1
comment by Gunslinger (LessWrong1) · 2015-12-21T15:22:33.220Z · LW(p) · GW(p)

Gives me the Well-Kept Gardens Die by Pacifism feel.

How isomorphic are society and online communities? Can the Well-Kept Gardens argument be applied that liberally?

comment by Zubon · 2015-12-22T22:02:07.189Z · LW(p) · GW(p)

How much do you trust economic data released by the Chinese government? I had assumed that economic indicators were manipulated, but recent discussion suggests they are just entirely fabricated, at least as bad as anything the Soviet Union reported. For example, China has reported a ~4.1% unemployment rate for over a decade. Massive global recession? 4.1% unemployment. Huge economic boom? 4.1% unemployment.

One of the largest, most important economies in the world, and I don't know that we can reliably say much about it at all.

Replies from: Lumifer
comment by Lumifer · 2015-12-22T22:28:14.183Z · LW(p) · GW(p)

How much do you trust economic data released by the Chinese government?

Not much.

If you want to explore further, I recommend this, for example this post.

Replies from: VoiceOfRa, ChristianKl
comment by VoiceOfRa · 2015-12-23T03:04:43.415Z · LW(p) · GW(p)

One interesting point, not expanded up on, is this:

One writer chalks this concern up to a bunch of “conspiracy theor(ies)”.

Balding dismisses this by citing Premier Li Keqiang, but I think this objection illustrates a deeper problem with the way the phrase "conspiracy theory" is used. It's frequently used to dismiss any suggestion that someone in authority is behaving badly regardless of whether an actual conspiracy would be required.

Let's look at what it would take for Chinese economic data to be bad. The central government gathers the data by delegating collection to individual branches, by province, industry, etc. So what happens if someone at that level decides to fudge the data for whatever reason (possibly to make his province and/or industry look better)? The aggregate data will be wrong. And that's just one person at one level. In reality, of course, there are many levels in the hierarchy and many corrupt people in all of them.

Replies from: Richard_Kennaway, Lumifer
comment by Richard_Kennaway · 2015-12-23T14:35:21.177Z · LW(p) · GW(p)

Stamp's Law.

"The government are very keen on amassing statistics. They collect them, add them, raise them to the nth power, take the cube root and prepare wonderful diagrams. But you must never forget that every one of these figures comes in the first instance from the chowky dar (village watchman in India), who just puts down what he damn pleases."

Josiah Stamp, 1st Baron Stamp

comment by Lumifer · 2015-12-23T16:47:06.990Z · LW(p) · GW(p)

this objection illustrates a deeper problem with the way the phrase "conspiracy theory" is used. It's frequently used to dismiss any suggestion that someone in authority is behaving badly regardless of whether an actual conspiracy would be required.

You misunderstand Balding -- he asserts loudly and explicitly that the Chinese authorities misbehave with respect to statistical data. The conspiracy theories he is talking about are the conspiracies of China-watchers and the point of them would be to sow FUD about the Chinese economic development, presumably after shorting China.

Replies from: username2
comment by username2 · 2015-12-24T06:10:36.526Z · LW(p) · GW(p)

I believe VoiceOfRa was talking about the person Balding quoted.

comment by ChristianKl · 2015-12-25T21:25:02.025Z · LW(p) · GW(p)

Do you have a 90% confidence interval for what you consider China's real GDP to be?

Replies from: Lumifer
comment by Lumifer · 2015-12-28T16:00:05.348Z · LW(p) · GW(p)

Why would I care?

comment by Lumifer · 2015-12-22T21:34:46.013Z · LW(p) · GW(p)

That was a bit... strange.

Huw Price, a professional philosopher who happens to be one of the founders and the Academic Director of the Centre for the Study of Existential Risk (the one in Cambridge, UK), wrote a piece which is quite optimistic about cold fusion in general and Andrea Rossi in particular.

Replies from: knb, Gunnar_Zarncke, RomeoStevens
comment by knb · 2015-12-24T04:08:06.784Z · LW(p) · GW(p)

I don't follow LENR research closely, but Rossi seems like one of the least trustworthy people in the field, which speaks poorly of Huw Price's judgement, since he especially emphasizes the plausibility of the E-Cat.

I'm very OK with using "sociological" factors to make judgments about these things. Rossi has been involved in a number of extremely suspicious operations and did a stint in prison for fraud. Here's a skeptic's look at the "independent tests" verifying Rossi's device.

comment by Gunnar_Zarncke · 2015-12-23T08:51:59.989Z · LW(p) · GW(p)

LENR is under-populated. Independent of whether it is valid or not, the social effects dominate the scientific ones.

Also interesting: the Fleischmann-Pons effect may be unreliable in general, but the heat/helium ratio is claimed to be stable.

Added: I don't think the paper by the Swedish physicists is smelly either (except insofar as it mentions the E-Cat): Nuclear Spallation and Neutron Capture Induced by Ponderomotive Wave Forcing. Note that the specific resonance frequency of the effect could explain the unreliability of the experiments.

It may appear strange that one of the authors, Rickard Lundin, is an astrophysicist, but he is well established there (look at the citations) and does have significant experience with interactions of ions in strong fields.

Replies from: Lumifer
comment by Lumifer · 2015-12-23T16:48:38.013Z · LW(p) · GW(p)

LENR is under-populated.

Saying this implies that you know what the proper population level is. How do you know?

Social effects dominate the attempts to build a perpetuum mobile as well.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2015-12-23T21:27:12.958Z · LW(p) · GW(p)

Saying this implies that you know what the proper population level is. How do you know?

In this I rely on the evaluation of Huw Price who surely has a much better grasp of the field(s) than I do.

comment by RomeoStevens · 2015-12-23T04:02:52.408Z · LW(p) · GW(p)

Indeed strange. Following up on the linked citations finds things that smell pretty dubious.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2015-12-23T09:11:15.065Z · LW(p) · GW(p)

Could you link to the citations you find smelly?

Replies from: RomeoStevens
comment by RomeoStevens · 2015-12-23T22:02:05.257Z · LW(p) · GW(p)

Top comment here on the Alexander Parkhomov replication: http://www.e-catworld.com/2014/12/30/alexander-parkhomov-on-calibration-in-his-test/

The claim that this: http://animpossibleinvention.com/2015/10/15/swedish-scientists-claim-lenr-explanation-break-through/ bolsters the case smells like a typical aggrandizing claim, since it is not a replication but simply a speculative paper on the causal mechanism, if such an effect exists. As has been repeated many times, no one is questioning that the energy is there; it's the mechanism by which it actually provides excess power at low temperatures that is in question. See the comments thread in the Next Big Future piece here: http://nextbigfuture.com/2015/06/chinas-lenr-is-getting-excess-600-watts.html#soa_062bbe85

A review of the more credible replication does cause an update in the positive direction, but only a small one: http://www.infinite-energy.com/iemagazine/issue118/analysis.html

comment by Lyyce · 2015-12-21T14:34:11.125Z · LW(p) · GW(p)

I am confused about free will. I tried to read about it (notably in the Sequences) but am still not convinced.

I make choices, all the time, sure, but why do I choose one solution in particular?

My answer would be: the sum of my knowledge and past experiences (nurture) and my genome (nature), with quantum randomness playing a role as well. But I can't see where free will intervenes.

It feels like there is something basic I don't understand, but I can't grasp it.

Replies from: Armarren, mwengler, Dagon, Zubon
comment by Armarren · 2015-12-21T15:39:59.190Z · LW(p) · GW(p)

Let's try a car analogy for a compatibilist position, as I understand it: there is a car, and why does it move? Because it has an engine and wheels and other parts, all arranged in a specific pattern. There is no separate "carness" that makes it move ("automobileness", if you will); it is the totality of its parts that makes it a car.

Will is the same: it is the totality of your identity which creates a process by which choices are made. This doesn't mean there is no such thing, any more than the fact that a car is composed of identifiable parts means that no car exists; it is just not a basic, indivisible thing.

comment by mwengler · 2015-12-21T18:56:44.138Z · LW(p) · GW(p)

Your insight is pretty consistent with a lot of philosophers, including my own personal favorite, Daniel Dennett. Even if there is a pseudorandom number generator (or a quantum random number generator, which might not be pseudo), that our "choices" would be random in this way does not really feel like what people want free will to mean. My reading of Dennett is that our "choices" arise from the law-like operation of our minds, which may be perfectly predictable (if there is no randomness, only the pseudorandomness of classical thermal noise) or might be as predictable as any other physical phenomenon within the limits of quantum unpredictability (if you accept that explanation for what is seen in experiments such as the two-slit experiment and so on).

The thing that amazes me about "free will" is that the "inputs" to what our brain does include the previous "outputs" of what our brain does. So I have decided that, obviously, if I believe that willpower exists, the choices my brain makes in the future will be more likely to be consistent with what I consciously want my brain to do. So in some sense free will does exist (well, almost): if I get myself believing I have choices, my brain will fall more often in the direction of choosing what I consciously want.

There is no substitute for reading Dennett in my opinion, and it is not an easy thing to do.

comment by Dagon · 2015-12-22T03:37:48.777Z · LW(p) · GW(p)

That sum you speak of is encoded in a massive biological calculator called a brain. Free will is the introspective module of that computer as it examines its own calculations, and that data affects the state of the network, becoming part of future calculations.

Replies from: RaelwayScot
comment by RaelwayScot · 2015-12-22T15:41:57.996Z · LW(p) · GW(p)

Is that actually the 'strange loop' that Hofstadter writes about?

Replies from: Dagon
comment by Dagon · 2015-12-22T16:42:29.293Z · LW(p) · GW(p)

Hofstadter (as I remember - it's been a long time) took it a step further, granting consciousness to our models of others, and to the models of us that we model in others, etc....

comment by Zubon · 2015-12-22T21:47:16.853Z · LW(p) · GW(p)

You've stated compatibilism, and from that perspective free will tends to look trivial ("you can choose things") or like magical thinking.

Many people have wanted there to be something special about the act of choosing or making decisions. This is necessary for several moral theories, as they demand a particular sense in which you are responsible for your actions that does not obtain if all your actions have prior causes. This is often related to theories that call for a soul, some sort of you apart from your body, brain, genetics, environment, and randomness. You have a sense of self and many people want that to be very important, as you think of yourself as important (to you, if no one else).

You may have read Douglas Adams and recall him describing the fundamental question of philosophy as what life is all about when you really get down to it, really, I mean really. A fair amount of philosophy can be understood as people tacking "really" onto things and considering that a better question. "Sure you choose, but do you choose what you choose to choose? Is our will really free? I mean really, fundamentally free, when you take away everything else, really?"

comment by [deleted] · 2015-12-21T10:51:16.545Z · LW(p) · GW(p)

Thoughts this week:

Career strategy

Thiel isn't decisive on the topic. Is the definite-optimist view the dominant approach to candidacy in the grand marketplace of talent today?

Kumon

Kumon franchises are cheap. The branding and rep are good. Tutoring is a very attractive market in general, and Kumon makes it easier for the teachers. But is it ethical, I wonder? To me it's ethical if it delivers value to the students. A caveat: the mind-numbing maths drills that my classmates who attended Kumon did as kids seemed cruel.

A study with a very small sample size says 'that there may be a significant relationship between participation in the Kumon programme and development in computation skills (p = 0.053), but not with development in mathematical reasoning skills (p = 0.867)'.

Going by that alone, it would seem short-term experimental studies don't explain the small-sample (2 out of 2) correlation between childhood Kumon attendance and extreme adult success that I see in my friends today. A Stack Exchange-esque Q&A suggests no further effects turn up in the rest of the experimental Kumon literature.

Political psychology

If human rights were reframed as entitlements, and laws that protect those were contrasted with laws that protect discretions (the two being in obvious tension), I wonder what impact that would have on human rights law reform. Branding is influential.

Effective Altruism politics

According to Sam Deere, Effective Altruism affiliate and former ALP staffer, the tractability of political ideas for conversion into legislation comes down to: novelty, impact, cost-effectiveness, budgetary considerations, interaction with other policies, and popular, ideological, and strategic/tactical considerations.

A heuristic for lobbying is to focus on Ministers, not departments, other politicians, yada yada...

Nara

I have heard of an exotic, thorn-crowned Namibian desert plant called Nara that bears a fruit which, when dried, tastes like chocolate but is highly nutritious. I'm skeptical; it sounds too good to be true. I wonder why it isn't a common treat around the world by now. If the rumours are true, I would love some to grow in the Australian outback, biosecurity permitting.

Is data science a profitable industry?

Anyway, it's super easy to transmit machine-learning how-to online; it's the most popular class at Stanford. Can I get a 'market efficiency'? It won't be long till we automate the basic tasks, since machine learning is an empirical field too, so it can be subject to its own learning in polynomial time. I reckon people neglect this (and thus the market efficiency is quickly gained) because there is little public education for self-taught programmers about this kind of thing, outside of boring lectures.
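
As one concrete sketch of what automating a basic task might look like: automated model selection by cross-validation, using scikit-learn. The dataset and candidate models are arbitrary illustrations.

```python
# Crude "machine learning applied to machine learning": pick the best
# model by cross-validation, with zero human judgment involved.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)  # stand-in for a client's dataset
candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=100),
    "SVM": SVC(),
}
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```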

If you're a data science training provider - yes.

If you're employing data scientists in a data rich operating environment in 2016 - yes.

If you're anyone else...I doubt it.

I'm extremely skeptical about the data science boom. Just because a field is valuable doesn't mean the companies around it are strategically placed to capture that value for their owners.

Data science currently operates as:

(1) Big data products - on marketplaces like Amazon Web Services where machine learning algorithms are available via the cloud.

(2) Product - offline data analysis automation tools

(3) Service - manually doing (1) or (2)

Amazon currently dominates (1). Microsoft is a close second. Their delivery is on point. They've basically created a platform for people to trade their knowledge about data science as algorithms. Since there are a finite number of algorithms that can be combinatorially generated and tagged for particular applications, the only real challenge is creating a system for triaging a user's needs into a particular algorithmic application.
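
A toy sketch of that triage problem; the categories and rules are illustrative, not any real product's logic.

```python
# Toy triage: map a crude description of the user's problem to an
# algorithm family. Categories, thresholds, and suggestions are made up.
def triage(goal, labeled, n_samples):
    if not labeled:
        return "unsupervised: clustering (k-means) or dimensionality reduction (PCA)"
    if goal == "predict a category":
        if n_samples < 10000:
            return "classification: logistic regression or random forest"
        return "classification: gradient-boosted trees or a neural net"
    if goal == "predict a quantity":
        return "regression: linear model or gradient boosting"
    return "no match: escalate to a human analyst"

print(triage("predict a category", labeled=True, n_samples=5000))
```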

If there is big money to be made in this space by new players, it will be in solving that problem.

The issue of algorithm generation for statistical analysis should not be confused with the sophisticated tasks of software and application development. The former requires little creativity, while the latter can utilise immense creativity. I say that as someone whose forte is data analysis rather than software.

As for (2), there is likely to be high efficiency in the market between cloud-based algorithms and algorithms implemented offline, due to extremely low barriers to entry. Basically, the first people in with a good method of translating those algorithms offline, surmounting the potential legal hazards, and scaling up (no trivial tasks) will make a quick buck. Though, these are problems that I can define, and if I can define them, the big names probably already have and are working on solutions for them. You're out of luck, garage entrepreneurs.

Now (3), the one lay people think of when they think data science, and the one aspiring data scientists entering Kaggle competitions and hopping onto DataCamp think of. This is a highly commodifiable area, subject to total automation and outsourcing. There may be some money to be made here as the ubiquity of data rises and you find loyal, computer-illiterate clients. However, you'll be picking up scraps the same way web design worked, and continues to work, as a profitable avenue for freelancing oddballs: by essentially ripping off people who don't know better, when absurdly user-friendly DIY web design options are a reality, or the guy down the road is a million times better than you and your client just doesn't know it. Sure, it may be profitable, but ethically it's questionable.

First principles

Elon Musk cites first-principles thinking in physics as a key to identifying neglected market opportunities. Can someone give me an example of how it may work in that application?

Social skills

Simply reframing "approach anxiety" as the crude, macho "bitch butterflies" has done wonders to dampen the phenomenon. I wonder whether that formula could dampen other anxieties...

Concept learning

I wonder what it is like to have genius-level verbal abstract reasoning: 2SD+, for instance, as reported by the usual neuropsychological tests. The Wechsler Adult Intelligence Scale (WAIS) 'Similarities' subtest measures verbal abstract reasoning, or 'concept learning' (see concept learning on Wikipedia and the Edutechwiki). Subjects are asked to say how two seemingly dissimilar items might in fact be similar.

When an average person talks to someone 30 IQ points below average (IQ 70, the cut-off point for intellectual disability), that experience may be comparable to a genius (130) talking to an average person. When it comes to particular subscales that relate to 'understanding', such as concept learning, performance may not in fact match up with overall IQ. It's conceivable there may be savants with incredibly high concept formation and incredibly low IQs. This presents an additional layer of complexity to a hypothetical interaction between a low-IQ concept savant and a high-IQ concept layperson that I can't even simulate mentally. So I'm opening it to the floor for a fun thought experiment.
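
For calibration, a quick computation of how rare 2SD+ is on an IQ-style scale (mean 100, SD 15, assuming normality):

```python
# Fraction of a normally distributed population scoring above a cutoff.
import math

def frac_above(iq, mean=100, sd=15):
    z = (iq - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))  # upper-tail probability

print(round(frac_above(130), 3))  # ~0.023: roughly 1 person in 44
print(round(frac_above(70), 3))   # ~0.977: IQ 70 marks the bottom ~2.3%
```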

Replies from: Daniel_Burfoot, ChristianKl, Vaniver, ChristianKl, ChristianKl, None
comment by Daniel_Burfoot · 2015-12-21T14:25:54.143Z · LW(p) · GW(p)

If you're employing data scientists in a data rich operating environment in 2016 - yes.

The big reason for the rise of "data science" is that all operating environments are now, or will soon become, data rich.

An example: I have a friend who is a chemical engineer by training and works for E-Ink. His mandate is to improve the efficiency of the chemical manufacturing plants that produce the material. This work involves a small amount of actual chemistry, and a large amount of statistical analysis of the vast trove of sensor readings and measurements produced by the plant's operation.

comment by ChristianKl · 2015-12-21T11:35:14.728Z · LW(p) · GW(p)

Elon Musk cites first-principles thinking in physics as a key to identifying neglected market opportunities. Can someone give me an example of how it may work in that application?

Recently moridinamael wrote about dishwashers: As a pampered modern person, the worst part of my life is washing dishes. (Or, rinsing dishes and loading the dish washer.) How long before I can buy a robot to automate this for me?

If you reason from first principles, then there's nothing stopping a device from existing that takes in a pile of dishes and afterwards sorts them into the cupboard, especially with the recent advances in machine vision and Google open-sourcing TensorFlow.
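
As a gesture at the perception step, a minimal sketch using a pretrained image classifier via the modern tf.keras API; the model choice is arbitrary and "dish.jpg" is a placeholder filename.

```python
# Minimal sketch of the perception step for a dish-sorting device:
# guess what kind of item is in front of the camera.
import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, decode_predictions, preprocess_input)
from tensorflow.keras.preprocessing.image import img_to_array, load_img

model = MobileNetV2(weights="imagenet")             # generic pretrained classifier
img = load_img("dish.jpg", target_size=(224, 224))  # placeholder image
x = preprocess_input(np.expand_dims(img_to_array(img), axis=0))
for _, label, score in decode_predictions(model.predict(x), top=3)[0]:
    print(label, round(float(score), 3))  # e.g. coffee_mug, soup_bowl, plate
```

The hard part is probably not recognizing a mug; it's grasping it without breaking it, which is a robotics problem rather than a vision problem.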

Another non-automated kitchen task is cutting vegetables. There's no good reason why a robot shouldn't cut vegetables as well as humans do.

Replies from: entirelyuseless, passive_fist, NancyLebovitz, VoiceOfRa
comment by passive_fist · 2015-12-21T23:43:20.702Z · LW(p) · GW(p)

You could just have a two-dishwasher system where the dishwasher takes the place of the cupboard.

It seems like a robot that automated the task of moving clean dishes into a cupboard would be an idea where the potential benefits, if any, are too small to currently justify the major development effort that would be required. Maybe in the future when AI becomes far more widespread and 'easy' to develop.

Replies from: ChristianKl
comment by ChristianKl · 2015-12-22T12:13:56.484Z · LW(p) · GW(p)

I think that there are people who don't like dealing with washing dishes even though they have a dishwasher. I don't think the task is trivial in the sense that people wouldn't be willing to invest money into a device that fixes the issue.

Apart from that, a redesigned device that builds on smart sensors and nanotech filters could also operate with a lot less water.

GE's design of a kitchen of the future with a smart sink that can automatically wash dishes is also interesting.

If I look into my kitchen, the most recent invention is the microwave.

A few health-conscious people I know have nanotech water filters for the water in their sink, but apart from that the kitchen has mostly not changed.

I think that it would be possible to build something better by investing the kind of money that went into Tesla and SpaceX.

I would expect that in a decade we'll see a lot more sensors in the average kitchen than today.

The Orbital Systems shower is a good example of how nanotech plus sensors can produce a shower that performs better than the old one.

comment by NancyLebovitz · 2015-12-24T15:32:42.116Z · LW(p) · GW(p)

I think my dream system would be something I can pile all the dirty dishes into, which melts them down, separates out the food into something that goes into the trash (this doesn't have to be automated), and then reconstitutes the dishes.

Replies from: None
comment by [deleted] · 2015-12-24T16:12:58.936Z · LW(p) · GW(p)

Something like a Star Trek transporter for tableware?

comment by VoiceOfRa · 2015-12-23T02:26:32.465Z · LW(p) · GW(p)

Recently moridinamael wrote about dishwashers: As a pampered modern person, the worst part of my life is washing dishes. (Or, rinsing dishes and loading the dish washer.) How long before I can buy a robot to automate this for me?

Imagine what it was like before the dishwasher.

Replies from: Lumifer
comment by Lumifer · 2015-12-23T16:41:17.386Z · LW(p) · GW(p)

Imagine what it was like before the dishwasher.

/goes off to watch Downton Abbey :-P

comment by Vaniver · 2015-12-21T18:23:00.129Z · LW(p) · GW(p)

the only real challenge is creating a system for triaging a user's needs into a particular algorithmic application.

Yes, but the full solution to this is basically AI-complete.

There may be some money to be made here as the ubiquity of data rises and you find loyal, computer-illiterate clients. However, you'll be picking up scraps the same way web design worked, and continues to work, as a profitable avenue for freelancing oddballs: by essentially ripping off people who don't know better, when absurdly user-friendly DIY web design options are a reality, or the guy down the road is a million times better than you and your client just doesn't know it. Sure, it may be profitable, but ethically it's questionable.

Do you feel the same way about all within-firm IT services? (Including stuff like internal web design in 'IT.')

comment by ChristianKl · 2015-12-21T11:28:11.886Z · LW(p) · GW(p)

This presents an additional layer of complexity to a hypothetical interaction between a low-IQ concept savant and a high-IQ concept layperson that I can't even simulate mentally

I think that various debates about postmodernism could be of that nature. Postmodernists often operate with a large number of concepts.

comment by ChristianKl · 2015-12-22T12:43:55.138Z · LW(p) · GW(p)

If you hire a good web designer to do your website, the designer has experience in creating websites and ordering information. A good designer can do a better job.

In the same sense, a person who doesn't understand what concepts like sensitivity and specificity mean won't be able to use data analysis tools well.
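
For concreteness, here are the two quantities computed from a made-up 2x2 confusion matrix:

```python
# Sensitivity and specificity from invented screening-test counts.
tp, fn = 80, 20     # truly positive cases: correctly flagged / missed
tn, fp = 900, 100   # truly negative cases: correctly cleared / false alarms

sensitivity = tp / (tp + fn)  # P(test positive | actually positive) = 0.80
specificity = tn / (tn + fp)  # P(test negative | actually negative) = 0.90
print(sensitivity, specificity)
```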

comment by [deleted] · 2015-12-22T04:16:28.759Z · LW(p) · GW(p)

As for (2), there is likely to be high efficiency in the market between cloud-based algorithms and algorithms implemented offline, due to extremely low barriers to entry. Basically, the first people in with a good method of translating those algorithms offline, surmounting the potential legal hazards, and scaling up (no trivial tasks) will make a quick buck. Though, these are problems that I can define, and if I can define them, the big names probably already have and are working on solutions for them. You're out of luck, garage entrepreneurs.

People are snapping up data scientists in preparation for the move from data science as a product to data science as a commodity. Historically, big companies have been awful at making this kind of transition, but once they realize their mistake they are eager to make up for lost time. An entrepreneur who looks to be bought up by one of these market laggards could do really well.

Elon Musk cites first-principles thinking in physics as a key to identifying neglected market opportunities. Can someone give me an example of how it may work in that application?

The classic Musk example is taking the cost of the raw materials needed to make a spaceship: he saw that they were a small fraction of the actual price of a spaceship, so he figured there were probably efficiency problems that people simply hadn't solved.
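
A toy version of that calculation; the numbers are invented placeholders, not SpaceX's actual figures.

```python
# First-principles sanity check: raw-material cost vs. market price.
# All dollar figures are invented for illustration.
materials_musd = {"aluminum": 1.0, "titanium": 0.5,
                  "carbon_fiber": 0.8, "copper": 0.2}  # $M of raw inputs
material_cost = sum(materials_musd.values())  # $2.5M
market_price = 60.0                           # $60M typical launch price

print(material_cost / market_price)  # ~0.04: materials are ~4% of the price
# If physics only demands a few percent of the price, the rest is
# process inefficiency -- i.e., a market opportunity.
```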

comment by [deleted] · 2015-12-22T10:18:11.621Z · LW(p) · GW(p)

Could somebody who has the English translation of The Spanish Ballad by Feuchtwanger post the piece about Lancelot being in disgrace over his hesitation to sit in the cart in the rationality quotes thread? Thank you.

Replies from: username2
comment by username2 · 2015-12-25T14:27:47.379Z · LW(p) · GW(p)

Could you post your own translation?

Replies from: None
comment by [deleted] · 2015-12-25T17:01:12.672Z · LW(p) · GW(p)

I only have the Russian translation from the German. :(

comment by Daniel_Burfoot · 2015-12-21T14:33:55.774Z · LW(p) · GW(p)

The Fed recently announced a small interest rate hike, but rates remain astonishingly low in the US and in most other countries. In several countries the interest rate is negative - you have to pay the bank to hold your money - a bizarre situation which many economists previously dismissed as a theoretical impossibility.

How should individuals respond to this weird macroeconomic situation? My naive analysis is that demand for investment opportunities far outstrips supply, so we should be trying to find new ways to invest money. Perhaps we should all be doing part-time real estate investing? Are there other simple investment strategies that individuals are in a better position to pursue than big investment firms?

Replies from: Lumifer, Tem42
comment by Lumifer · 2015-12-21T16:02:33.464Z · LW(p) · GW(p)

How should individuals respond to this weird macroeconomic situation?

Do not buy bonds and be wary of bubbles (a lot of underutilized money sloshing around tends to lead to asset inflation).

My naive analysis is that demand for investment opportunities far outstrips supply, so we should be trying to find new ways to invest money.

I would probably say that the supply of investment funds far outstrips demand :-) but you would be concerned about "new ways to invest money" only if you had significant money to invest, which is not a common situation on LW.

Are there other simple investment strategies that individuals are in a better position to pursue than big investment firms?

Yes, there is a class of investment strategies which go by the name of "liquidity constrained". If there is a small... market inefficiency out of which you can extract, say, $100,000/year but no more, none of the big investment firms would bother -- it's not worth their time. But for an individual it often is.

Replies from: mwengler
comment by mwengler · 2015-12-21T18:58:47.455Z · LW(p) · GW(p)

Yes, there is a class of investment strategies which go by the name of "liquidity constrained". If there is a small... market inefficiency out of which you can extract, say, $100,000/year but no more, none of the big investment firms would bother -- it's not worth their time. But for an individual it often is.

Can you please say more about these and how to find them?

Replies from: Lumifer
comment by Lumifer · 2015-12-21T19:36:15.131Z · LW(p) · GW(p)

Liquidity is a characteristic of a financial asset which, without going into technicalities, is an indicator of how quickly and how cheaply one can buy or sell large amounts of this particular asset in the open market.

Some assets -- like the common stock of Apple or US Treasury bills -- are very liquid. There is a continuous market, very large volumes are changing hands daily, and orders to buy and sell are filled rapidly, with low transaction costs and without pushing the market.

Some assets -- like specific bonds or, say, tracts of land in Maine -- are not liquid. Buying or selling them will take time and will be expensive in terms of transaction costs. If you want to buy (or sell) a lot of these, you will likely push the market (if you're buying you'll push the price up, if you're selling you'll push the price down), sometimes considerably so.

The problem with investment strategies which rely on buying and selling illiquid assets is that they do not scale. You might be able to achieve high returns on small amounts of capital, but you cannot put more capital into this trade because the trade will then break. Hedge funds and such are not interested in investment strategies which do not scale because it's too few dollars for too much hassle.
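
A toy model of why such trades break at scale, assuming the common square-root price-impact rule of thumb; every parameter below is invented.

```python
# Fixed edge eaten by price impact that grows with trade size.
import math

EDGE = 0.02           # 2% mispricing captured per round trip (made up)
DAILY_VOLUME = 50000  # dollars the niche asset trades per day (made up)

def net_return(capital):
    impact = 0.01 * math.sqrt(capital / DAILY_VOLUME)  # sqrt-impact rule of thumb
    return EDGE - impact

for capital in (10000, 100000, 1000000):
    print(capital, "{:+.2%}".format(net_return(capital)))
# 10000 +1.55%, 100000 +0.59%, 1000000 -2.47%: great small, dead at scale
```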

What this means is that trade opportunities in small, obscure, illiquid niches of financial markets are not exploited by the big fish and so could remain "open" for a long time. Remember that the self-adjusting feature of the market is not magic; it only works if somebody does commit capital to "fixing" the market inefficiency. If no one does, the inefficiency does not go away on its own.

This implies that if you search the (preferably obscure) little nooks and crannies of markets, your chances of finding a free lunch are much higher than in popular, liquid markets that everyone likes to play in.

Two warnings, though. First, what appears to be free cheese might turn out to be located in a mousetrap. Examine the circumstances carefully. Second, niche markets often have local players which understand this particular market better than you do, so see the first point.

Replies from: ChristianKl, ChristianKl
comment by ChristianKl · 2015-12-21T20:22:55.775Z · LW(p) · GW(p)

What this means is that trade opportunities in small, obscure, illiquid niches of financial markets are not exploited by the big fish and so could remain "open" for a long time.

Could you give examples of what you mean that existed a few years ago and that are now exploited, so that no further money can be made and you don't lose anything by openly sharing the information?

Replies from: passive_fist, Tem42
comment by passive_fist · 2015-12-21T23:37:15.500Z · LW(p) · GW(p)

During the 80's and 90's a number of firms sprouted up around buying and selling penny stocks via strategies like cold calling.

Replies from: ChristianKl
comment by ChristianKl · 2015-12-22T15:08:45.677Z · LW(p) · GW(p)

I'm not sure that hiring a bunch of people to do annoying phone calls is what Lumifer has in mind when he talks about trading opportunities in illiquid niches of the financial markets.

comment by Tem42 · 2015-12-21T21:39:01.449Z · LW(p) · GW(p)

Here's an example that did not scale well: The New York Times Magazine: Paper Boys

Replies from: ChristianKl
comment by ChristianKl · 2015-12-21T21:59:58.637Z · LW(p) · GW(p)

I don't think that the article says anything about problems that come with scale. It rather suggests that the first attempts were lucky.

There are also ethical issues with the business model of buying up debt and then hiring ex-convicts to collect that debt.

Replies from: Tem42
comment by Tem42 · 2015-12-22T03:15:42.948Z · LW(p) · GW(p)

Another, much smaller, example.

Edit: typo.

Replies from: ChristianKl
comment by ChristianKl · 2015-12-22T11:15:20.851Z · LW(p) · GW(p)

I don't think that's an example of someone investing money into an asset. It's a bet on default rates not changing.

comment by ChristianKl · 2015-12-21T20:17:28.440Z · LW(p) · GW(p)

This implies that if you search the (preferably obscure) little nooks and crannies of markets, your chances of finding a free lunch are much higher than in popular, liquid markets that everyone likes to play in.

Do you believe that people without special expertise are capable of finding and evaluating those opportunities?

Replies from: Lumifer
comment by Lumifer · 2015-12-21T21:36:37.824Z · LW(p) · GW(p)

What's "special expertise"? Like most everything in life, this requires the capability and the willingness to learn and figure things out.

Replies from: ChristianKl
comment by ChristianKl · 2015-12-21T21:49:21.957Z · LW(p) · GW(p)

How many hours do you think the average person on LW would need to invest to pick up the relevant skills?

Replies from: Lumifer
comment by Lumifer · 2015-12-21T21:58:31.009Z · LW(p) · GW(p)

The question is ill-defined; it's like asking in how many hours you can learn to program in, say, R. The answers can plausibly range from "a few hours" to "a lifetime".

Besides, in this case some skills will be general (e.g. risk management), equivalent to the ability to code, and some will be very very specific (e.g. knowledge of the regulatory regime in a particular narrow field), equivalent to understanding some library very well.

comment by Tem42 · 2015-12-21T22:36:31.664Z · LW(p) · GW(p)

I don't think that anyone ever thought that paying the bank to hold your money was a theoretical impossibility -- paid checking accounts are not a new thing. What is supposed to be 'impossible' is for bank loans to have a negative interest rate -- if the bank pays you to borrow money. Of course, even that was/is only 'impossible' with certain exceptions (specifically, under deflation a negative nominal rate can still be a positive real rate; lenders try to predict inflation and avoid lending at a negative real rate).
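
The nominal-versus-real point in one line of arithmetic, via the Fisher relation (numbers illustrative):

```python
# A -0.5% nominal loan during 2% deflation still earns a positive real rate.
nominal = -0.005
inflation = -0.02  # i.e. 2% deflation
real = (1 + nominal) / (1 + inflation) - 1
print("{:.3%}".format(real))  # ~+1.531%
```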

comment by username2 · 2015-12-26T11:02:30.859Z · LW(p) · GW(p)

If reports are correct, this is sort of a real-world example of the transplant version of the trolley problem: http://timesofindia.indiatimes.com/world/middle-east/Islamic-State-sanctioned-organ-harvesting-in-document-taken-in-US-raid/articleshow/50326036.cms

Replies from: Jiro
comment by Jiro · 2015-12-26T17:46:44.617Z · LW(p) · GW(p)

Not really. It's not a trolley problem to divert the trolley into a group of ants, and ISIS thinks of non-Muslims as ants.

Replies from: ChristianKl
comment by ChristianKl · 2015-12-26T20:49:18.612Z · LW(p) · GW(p)

Additionally, it wants to scare its enemies by being brutal.

comment by username2 · 2015-12-24T18:57:45.931Z · LW(p) · GW(p)

Where can I find The Browser's Golden Giraffes competition nominees? They have deleted the list and I don't have an offline copy.

Replies from: Manfred
comment by Manfred · 2015-12-24T23:51:23.373Z · LW(p) · GW(p)

Can you find an archived copy on archive.org?

Replies from: username2
comment by username2 · 2015-12-25T14:17:44.344Z · LW(p) · GW(p)

No, they don't have it.

comment by [deleted] · 2015-12-24T12:39:33.766Z · LW(p) · GW(p)

Thoughts this week, part 2

Sweat equity marketplaces

Anyone know why online sweat equity marketplaces never took off? Their website is basically non-functional. I can see the potential for a sweat-equity marketplace focusing on a surprising number of fields: say cash-strapped writers looking for an editor, for instance.

Nuremberg principles

I was just following norms

-Normies, the Normenberg trials for norm crimes

Love and subjective well-being

Love has too complex a relationship with happiness for me to want to try to make rational decisions in relation to it.

Health prioritization

Suicide remains the leading cause of death for men aged 14 to 44, and 80% of all suicides in Australia are men.

Wow. Did not expect that.

I read it in an article about the stigma of mental health issues among entrepreneurs. To try to find the original source, I googled the statistic and found this beautifully designed page, an advertisement.

This led me to this lovely project, which led me to this initiative, which I foresee financial analysts using to make stock market predictions based on people's sentiment!

Replies from: ChristianKl, Vaniver
comment by ChristianKl · 2015-12-25T12:36:46.553Z · LW(p) · GW(p)

The article says:

One potential stumbling point for WebEquity, which TechNation writer Kim Heras picks up on, is the apparent lack of support in working out team agreements. For now, Middleton intends to leave the specifics of equity distribution deals and IP rights to the collaborators on each project to figure out themselves.

The lack of existing term sheets seems to be a huge barrier to getting the project adopted.

say cash-strapped writers looking for an editor, for instance.

The traditional model is that there are publishing houses with expertise in judging which books are likely to be successful. A cash-strapped writer can go to one of these, and the publishing house pays for all additional expenses.

On the other hand, the average editor doesn't want to invest the time to see whether a given book is viable before investing in it.

comment by Vaniver · 2015-12-24T14:23:53.824Z · LW(p) · GW(p)

Why would an editor want to do work whose compensation is very risky (maybe you're editing the next Harry Potter, but >80% of the time it's going to lose money) instead of work whose compensation is certain?
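
A toy expected-value comparison makes the point concrete; all probabilities and payoffs are invented.

```python
# Flat fee vs. sweat equity for editing one book (invented numbers).
flat_fee = 3000  # guaranteed payment in dollars

equity_outcomes = [(0.85, 0),        # book flops, equity worth nothing
                   (0.14, 10000),    # modest success
                   (0.01, 200000)]   # breakout hit
equity_ev = sum(p * v for p, v in equity_outcomes)  # $3,400

print(flat_fee, equity_ev)  # similar expected value, wildly different risk
```

A risk-averse editor needs the equity deal's expected value to be well above the flat fee before it is worth taking, and adverse selection pushes the other way.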

This is compounded by adverse selection: the better my thing is, the less willing I will be to sell equity, and so the equity markets will be mostly flooded with garbage.