Opinions survey (with rationalism score at the end)

post by tailcalled · 2024-02-17T00:41:20.188Z · LW · GW · 13 comments

This is a link post for https://docs.google.com/forms/d/e/1FAIpQLSdSKvHo-6HyZqHprCDoBD-VjKxF-Rhp2qNhHV8d0SY40JJdjA/viewform

Based on the results from the recent LW census, I quickly threw together a test that measures how much of a rationalist you are.

I'm mainly posting it here because I'm curious how well my factor model extrapolates. I want to have this data available when I do a more in-depth analysis of the results from the census.

I scored 14/24.

13 comments


comment by gjm · 2024-02-17T02:54:01.996Z · LW(p) · GW(p)

There are definitely answers that your model wants rationalists to give but that I think are incompatible with LW-style rationalism. For instance:

  • "People's anecdotes about seeing ghosts aren't real evidence for ghosts" (your model wants "agree strongly"): of course people's anecdotes about seeing ghosts are evidence for ghosts; they are more probable if ghosts are real than if they aren't. They're just really weak evidence for ghosts and there are plenty of other reasons to think there aren't ghosts.
  • "We need more evidence that we would benefit before we charge ahead with futuristic technology that might irreversibly backfire" (your model wants "disagree" or "disagree strongly"): there's this thing called the AI alignment problem that a few rationalists are slightly concerned about, you might have heard of it.

And several others where I wouldn't go so far as to say "incompatible" but where I confidently expect most LWers' positions not to match your model's predictions. For instance:

  • "It is morally important to avoid making people suffer emotionally": your model wants not-agreement, but I think most LWers would agree with this.
  • "Workplaces should be dull to reflect the oppressiveness of work": your model wants not-disagreement, but I think most LWers would disagree (though probably most would think "hmm, interesting idea" first).
  • "Religious people are very stupid"; your model wants agreement, but I think most LWers are aware that there are plenty of not-very-stupid religious people (indeed, plenty of very-not-stupid religious people) and I suspect "disagree strongly" might be the most common response from LWers.

I don't claim that the above lists are complete. I got 11/24 and I am pretty sure I am nearer the median rationalist than that might suggest.

comment by tailcalled · 2024-02-17T08:09:03.822Z · LW(p) · GW(p)

I agree with these points but as I mentioned in the test:

Warning: this is not necessarily an accurate or useful test; it's a test that arose through irresponsible statistics rather than careful thought.

The reason I made this survey is to get more direct data on how well the model extrapolates (and maybe also to improve the model so it extrapolates better).

comment by Richard_Kennaway · 2024-02-17T09:41:34.171Z · LW(p) · GW(p)

"Workplaces should be dull to reflect the oppressiveness of work"? Where did that come from? (The "correct" answer is to not disagree.)

"Women don't work in construction because it is unglamorous." I remember when this could be said unironically with a straight face. That was about fifty years ago. Being the only woman in an all-male working-class environment might be more salient these days.

"Religious people are very stupid." Is this a test for straw Vulcan rationality? Actually, you do say it measures "how much of a stereotypical rationalist you are", but on the other hand, you say these are "LessWrong-related questions". What are you really trying to measure?

comment by tailcalled · 2024-02-17T10:17:39.048Z · LW(p) · GW(p)

I originally asked people qualitatively what they think the role of different jobs in society is. Based on that, I made a survey with about 100 questions and found about 5 major factors. I then qualitatively asked people about these factors, which led me to find additional items that I incorporated into additional surveys. Eventually I had a pool of around 1000 items covering beliefs in various domains, albeit with the same 5-factor structure as originally.

I suggested that 20 of the items, drawn from different factors, be included in the LW census, which allowed me to estimate where LW sits in terms of those factors. The 24 new items were then selected from the pool as the most extreme correlates of the delta (LW's deviation from the general population) indicated by the original 20.

Obviously, since this procedure is quite distinct from actual rationalism (though also related, since it does incorporate LW's answers to the original 20), it's quite likely that this is a baseless extrapolation that doesn't actually generalize well. In fact, that is specifically one of the things I want to test for, since it seems wise not to overgeneralize LW ideology from a sample of only 20 beliefs to a sample of more than 1000 beliefs. By taking the 24 most extreme correlates of LW's mean out of the 1000 items, I am stress-testing the model and seeing just how extremely wrong it can get.
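
For concreteness, here is a minimal sketch of what that selection step might look like (the data, the delta vector, and the choice of sklearn's FactorAnalysis are all invented for illustration; the actual pipeline may differ):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_people, n_items, n_factors = 500, 1000, 5
responses = rng.normal(size=(n_people, n_items))  # stand-in for real survey answers

# Fit the 5-factor model to the full item pool.
fa = FactorAnalysis(n_components=n_factors).fit(responses)

# Hypothetical: where LW sits in factor space relative to the general
# population, as estimated from the 20 census items (made-up delta here).
lw_delta = np.array([1.2, -0.3, 0.8, 0.1, -0.9])

# Predicted LW-vs-population shift on every item is loadings @ delta;
# the test keeps the 24 items with the most extreme predicted shifts.
predicted_shift = fa.components_.T @ lw_delta  # components_: (n_factors, n_items)
test_items = np.argsort(np.abs(predicted_shift))[-24:]
print(test_items)
```

The point of the sketch is the last two lines: items are ranked by how strongly the factor loadings predict an LW-vs-population shift, and the 24 most extreme ones become the test.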

comment by [deleted] · 2024-02-17T02:54:18.527Z · LW(p) · GW(p)

21/24. Surprising because I have been downvoted and punished for having a divergent opinion.

The ones I "missed" 2 of them I think because rationalists are being insufficiently rational (the "correct" answer is incorrect in terms of what accepted factual evidence by the most credible sources says).

comment by DanielFilan · 2024-02-17T01:33:35.600Z · LW(p) · GW(p)

Mine was 12/24.

comment by niplav · 2024-02-17T02:43:35.661Z · LW(p) · GW(p)

Also 12—what's going on?

comment by gjm · 2024-02-17T02:54:49.512Z · LW(p) · GW(p)

What's going on is that tailcalled's factor model doesn't in fact do a good job of identifying rationalists by their sociopolitical opinions. Or something like that.

[EDITED to add:] Here's one particular variety of "something like that" that I think may be going on: an opinion may be highly characteristic of a group even if it is very uncommon within the group. For instance, suppose you're classifying folks in the US on a left/right axis. If someone agrees with "We should abolish the police and close all the prisons" then you know with great confidence which team they're on, but I'm pretty sure the great majority of leftish people in the US disagree with it. If someone agrees with "We should bring back slavery because black people aren't fit to run their own lives" then you know with great confidence which team they're on, but I'm pretty sure the great majority of rightish people in the US disagree with it.
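
A toy Bayes calculation (numbers invented) shows how an opinion held by only 5% of a group can still identify members almost perfectly:

```python
# Toy numbers (invented) for "highly characteristic yet uncommon":
# an opinion can be near-certain evidence of group membership even if
# almost nobody in the group holds it.
p_left = 0.5                   # base rate of being leftish
p_agree_given_left = 0.05      # only 5% of leftish people agree...
p_agree_given_right = 0.0005   # ...but almost no rightish person does

p_agree = p_left * p_agree_given_left + (1 - p_left) * p_agree_given_right
p_left_given_agree = p_left * p_agree_given_left / p_agree
print(f"P(leftish | agrees) = {p_left_given_agree:.3f}")  # ~0.990, from only 5% agreement
```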

Tailcalled's model isn't exactly doing this sort of thing to rationalists -- if someone says "stories about ghosts are zero evidence of ghosts" then they have just proved they aren't a rationalist, not done something extreme but highly characteristic of (LW-style) rationalists -- but it's arguably doing something of the sort to a broader fuzzier class of people that are maybe as near as the model can get to "rationalists". Roughly the people some would characterize as "Silicon Valley techbros".

comment by tailcalled · 2024-02-17T08:10:31.075Z · LW(p) · GW(p)

My model takes the prevalence of the opinion into account; that's why sometimes you have to e.g. agree strongly and other times you merely have to not-disagree. There are unpopular opinions that the factor model does place correctly: I can't remember whether I have a question about abolishing the police, but supporting human extinction clearly loaded on the leftism factor even though leftists also disagreed (they were just less likely to disagree, and disagreed less strongly in a quantitative sense).

I think the broader/fuzzier class point applies more directly, though: from a causal perspective you'd expect rationalists to have some ideology that exists in the general population (e.g. techbros) plus our own idiosyncratically developed ideology. But a factor model only captures low-rank information, so it's not going to accurately model idiosyncratic factors that only exist for a small portion of the population.
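
A synthetic sketch of that low-rank point (made-up data; truncated SVD standing in for the factor model): a direction of variation confined to ~2% of people contributes little total variance, so a rank-5 fit mostly relegates it to the residual.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_items = 1000, 200

# Five population-wide factors, shared by everyone.
common = rng.normal(size=(n_people, 5)) @ rng.normal(size=(5, n_items))

# One idiosyncratic direction held by only ~2% of people.
idio_axis = rng.normal(size=n_items)
members = rng.random(n_people) < 0.02
data = common + np.outer(members * rng.normal(size=n_people), idio_axis)

# Rank-5 reconstruction via truncated SVD.
centered = data - data.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
low_rank = u[:, :5] @ np.diag(s[:5]) @ vt[:5]

# The small subgroup's idiosyncratic variation mostly ends up in the residual.
resid = centered - low_rank
print("residual variance:", resid[members].var(), "vs", resid[~members].var())
```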

comment by tailcalled · 2024-02-17T08:20:38.280Z · LW(p) · GW(p)

In theory, according to the model, rationalists should score slightly above 12 on average, and because we expect a wide spread of opinions, the model also says we should expect a lot of rationalists to score exactly 12. So there's nothing funky about scoring 12.
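
A toy version of that prediction (the 0.51 per-item hit rate is invented): with 24 independent items the mean lands slightly above 12, and 12 itself is the single most likely score.

```python
# Binomial toy model: each of the 24 items is "hit" independently with
# probability just over one half (0.51 is an invented number).
from math import comb

n, p = 24, 0.51
pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
print("mean score =", n * p)                                          # 12.24
print("most likely score =", max(range(n + 1), key=pmf.__getitem__))  # 12
print(f"P(score == 12) = {pmf[12]:.3f}")                              # ~0.16
```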

comment by DanielFilan · 2024-02-17T18:25:46.824Z · LW(p) · GW(p)

What does the model predict non-rationalists would score?

comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-02-17T01:10:52.713Z · LW(p) · GW(p)

I got a 14 as well. An odd theme in there.

comment by stavros · 2024-02-17T01:56:21.674Z · LW(p) · GW(p)

+1 for the 14/24 club.