Brief Query: An Idea
post by Carinthium · 2011-04-14T19:55:04.308Z · 17 comments
I had an idea- has it been done before, and if not, shouldn't somebody try to do it? I live in Melbourne, where LWers aren't organised, but perhaps the New York branch or some other group could try this? (IF it hasn't been done, that is- I haven't seen it mentioned, which is why I'm checking.)
Idea:
-Recruit a non-rationalist scientist (or better, a group of scientists), either by persuading friends, getting some money together to pay somebody, or finding a helpful volunteer.
-Have THEM come up with a series of tests comparing rationalists to a control group.
If 'successful' (in the sense of a significant difference between rationalists and non-rationalists, the result we on LW would presumably predict), it would provide enough evidence to justify a formal test (there would likely be a few weaknesses, such as recruiting friends of rationalists), which could then (again, assuming such a result) persuade scientists to become rationalists (the benefits should be obvious on a bit of thought) and generate publicity. If a 'failure' or indecisive, it would justify a serious re-evaluation of site methodology.
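A minimal sketch of what the proposed comparison might look like in practice, assuming each participant gets a numeric score on a battery of bias tests; the scores, group sizes, and significance threshold are illustrative assumptions, not part of the proposal:

```python
# Hypothetical sketch of the proposed rationalist-vs-control comparison.
# All numbers below are made up for illustration only.
from scipy import stats

lw_scores = [72, 65, 80, 77, 69, 74, 81, 70]        # LW readers' bias-test scores
control_scores = [61, 66, 58, 70, 63, 67, 60, 64]   # control group's scores

# Welch's t-test: does the mean score differ between the two groups?
# (equal_var=False avoids assuming the groups have equal variance)
t_stat, p_value = stats.ttest_ind(lw_scores, control_scores, equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Significant difference -> evidence justifying a formal follow-up test")
else:
    print("No significant difference -> reason to re-evaluate site methodology")
```

Welch's variant is used here rather than the standard t-test only because the two groups cannot safely be assumed to have equal variance; nothing in the proposal dictates a particular test.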
17 comments
Comments sorted by top scores.
comment by Giles · 2011-04-15T15:13:28.477Z
Sorry for the dumb question, but: test rationalists on what relative to a control group?
↑ comment by XFrequentist · 2011-04-15T21:47:03.291Z
Not at all a dumb question.
I assume the idea is to test "Rationality".
Basically, how good are LW readers at being right about tricky stuff relative to similarly intelligent folk who aren't LW readers?
I would think this would encompass (non-exhaustively) decision-making under risk and uncertainty, inference from incomplete information, making accurate predictions, and being well-calibrated.
Precisely what you would test (which may have been your point) is a very good question and is not at all obvious.
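As one concrete example, calibration (the last item above) is commonly scored with the Brier score: the mean squared difference between stated confidence and actual outcomes, lower being better. A minimal sketch, with the function and all numbers purely illustrative:

```python
# Hypothetical calibration scoring; the data below are made up for illustration.

def brier_score(probabilities, outcomes):
    """Mean squared difference between forecast probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probabilities, outcomes)) / len(outcomes)

# A participant's stated confidence on ten true/false questions...
stated_confidence = [0.9, 0.6, 0.8, 0.7, 0.95, 0.5, 0.85, 0.6, 0.7, 0.9]
# ...and whether each answer was actually correct (1) or not (0)
correct = [1, 1, 0, 1, 1, 0, 1, 0, 1, 1]

print(f"Brier score: {brier_score(stated_confidence, correct):.3f}")
# The comparison would then be between mean Brier scores of the two groups.
```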
↑ comment by Carinthium · 2011-04-16T03:03:56.630Z
My original idea was to test the extent of cognitive biases (known to be testable, since scientific testing discovered them in the first place), but that works too; either would serve the idea's purposes.
comment by Pavitra · 2011-04-14T23:40:23.492Z
Please choose more informative titles in future.
Regular reality checks are definitely a good idea.
How rigorous would a test like this need to be in order to yield useful information about whether we should run a more serious follow-up?
↑ comment by Carinthium · 2011-04-15T10:50:57.649Z
I'd try to answer that, but my knowledge of the scientific method isn't too good.
On a very similar point, though: a test sufficiently rigorous to persuade scientists (rationally or irrationally) to conduct a professional test would be sufficient. Since such a professional test would be the optimal result in theory if it could come about some other way, I think that would be a pragmatic choice.
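One conventional way to make "sufficiently rigorous" concrete is a power analysis, which estimates how many participants the comparison would need. A sketch under standard conventions (medium effect size, 5% significance, 80% power); these targets are assumptions, not figures from the thread:

```python
# Hypothetical power analysis for the group comparison.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
# How many participants per group are needed to detect a medium effect
# (Cohen's d = 0.5) at the usual 5% significance level with 80% power?
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Roughly {n_per_group:.0f} participants per group")  # ~64
```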
comment by bentarm · 2011-04-14T23:12:57.136Z
Two comments:
1. The hard part about this seems to be finding a control group. I'm pretty sure that the average LW reader would have done better on any test you can find that's supposed to measure "rationality" even before they'd read any of the site. Where do you get a group of "people who haven't read LW yet, but are the sort of person who might read LW"?
2. If we did manage to find a control group, what's supposed to be the benefit of asking a non-LWer to decide on the tests? This is supposed to be an experiment to actually find out stuff about the universe: we have just as much interest, if not more, in its results being accurate as the average person does.
↑ comment by JoshuaZ · 2011-04-14T23:34:20.326Z
Regarding 2, the reason to have a non-LWer is presumably that we are more likely to be biased, and thus to introduce subtle biases into the test that favor LWers. Don't underestimate the human capacity for self-deception.
↑ comment by shokwave · 2011-04-15T03:44:28.136Z
You have to compare that to the baseline chance of someone being biased, though. It might be that the amount of bias introduced by wanting LessWrong to show actual gains is smaller than the gap between a LWer and the average person.
↑ comment by Carinthium · 2011-04-15T10:53:08.802Z
You also have to consider that a typical scientist is less biased at work (as shown by the fact that their scientific tests tend to be more accurate than, say, their life choices or political opinions) and is used to rigorous standards in such things.
↑ comment by orthonormal · 2011-04-19T14:05:59.340Z
It may be, but would you trust any such test run by another non-mainstream group, if they used one of their own to adjudicate the result?
↑ comment by prase · 2011-04-15T12:05:42.316Z
As suggested in the OP, they have to create the tests, not only evaluate their results. Even if average LWers want to find out whether LW memes are actually helpful, they are likely to be biased in choosing the criteria of rationality. For example, a test made by a LWer would more likely include a Newcombesque question where one-boxing is classified as the rational answer, and since one-boxers are certainly more prevalent among LWers than in nearly any other group, the results would show that LW memes improve rationality. But the OP is not interested in testing whether LW memes improve LW-style extended rationality (it would be quite weird if they didn't), but in practical, real-life-relevant rationality. We are not impartial judges when it comes to determining the boundary between these two.
Or more generally: you can never be too careful about possible biases. Not seeing a reason for a self-serving bias is pretty weak evidence for its non-existence.
↑ comment by torekp · 2011-04-17T21:14:31.387Z
"where do you get a group of 'people who haven't read LW yet, but are the sort of person who might read LW'?"
They're called newbies. People who just recently started reading LW. Measure the improvement in rationality for the control group and for the experimental, newbie group.
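A minimal sketch of the pre/post design this suggests: compare the gain in test scores of newbies against a control group measured over the same interval. All numbers and the choice of statistical test are illustrative assumptions, not from the comment:

```python
# Hypothetical gain-score comparison for torekp's newbie design.
from scipy import stats

newbie_pre   = [60, 55, 62, 58, 64]
newbie_post  = [68, 61, 70, 63, 71]  # after some months of reading LW
control_pre  = [59, 57, 63, 60, 61]
control_post = [61, 58, 64, 60, 63]  # same interval, no LW exposure

newbie_gains  = [b - a for a, b in zip(newbie_pre, newbie_post)]
control_gains = [b - a for a, b in zip(control_pre, control_post)]

# Did newbies improve more than the control group did?
t_stat, p_value = stats.ttest_ind(newbie_gains, control_gains, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```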
↑ comment by ewang · 2011-04-15T04:54:14.176Z
Actually, the hard part may be finding a scientist willing to risk eir career and past work to admit that ey isn't a rationalist.
↑ comment by David_Gerard · 2011-04-15T12:12:47.820Z
Yes, the implicit identification of "LessWrong" and "rationalist" is a local trope only.
↑ comment by JoshuaZ · 2011-04-15T05:06:15.442Z
"Actually, the hard part may be finding a scientist willing to risk eir career and past work to admit that ey isn't a rationalist."
This seems off to me. First, LW rationality is a specific brand of rationality, one that focuses on proactively dealing with cognitive biases. Second, the interest that Eliezer and others have in the Singularity and related issues creates a serious status hit in the general population. Third, one doesn't need someone who actively identifies as not a rationalist, just someone with no prior connection to LW.