2011 Survey Results

post by Scott Alexander (Yvain) · 2011-12-05T10:49:02.810Z · LW · GW · Legacy · 513 comments

A big thank you to the 1090 people who took the second Less Wrong Census/Survey.

Does this mean there are 1090 people who post on Less Wrong? Not necessarily. 165 people said they had zero karma, and 406 people skipped the karma question - I assume a good number of the skippers were people with zero karma or without accounts. So we can only prove that 519 people post on Less Wrong. Which is still a lot of people.

I apologize for failing to ask who had or did not have an LW account. Because there are a number of these failures, I'm putting them all in a comment to this post so they don't clutter the survey results. Please talk about changes you want for next year's survey there.

Of our 1090 respondents, 972 (89%) were male, 92 (8.4%) female, 7 (.6%) transsexual, and 19 gave various other answers or objected to the question. As abysmally male-dominated as these results are, the percentage of women has tripled since the last survey in mid-2009.

We're also a little more diverse than we were in 2009; our percentage of non-whites has risen from 6% to just below 10%. Along with 944 whites (86%), we include 38 Hispanics (3.5%), 31 East Asians (2.8%), 26 Indian Asians (2.4%), and 4 blacks (.4%).

Age ranged from a supposed minimum of 1 (they start making rationalists early these days?) to a more plausible minimum of 14, to a maximum of 77. The mean age was 27.18 years. Quartiles (25%, 50%, 75%) were 21, 25, and 30. 90% of us are under 38, 95% of us are under 45, but there are still eleven Less Wrongers over the age of 60. The average Less Wronger has aged about one week since spring 2009 - so clearly all those anti-agathics we're taking are working!

In order of frequency, we include 366 computer scientists (32.6%), 174 people in the hard sciences (16%), 80 people in finance (7.3%), 63 people in the social sciences (5.8%), 43 people involved in AI (3.9%), 39 philosophers (3.6%), 15 mathematicians (1.5%), 15 people involved in law (1.5%), 14 statisticians (1.3%), and 5 people in medicine (.5%).

48 of us (4.4%) teach in academia, 470 (43.1%) are students, 417 (38.3%) do for-profit work, 34 (3.1%) do non-profit work, 41 (3.8%) work for the government, and 72 (6.6%) are unemployed.

418 people (38.3%) have yet to receive any degrees, 400 (36.7%) have a Bachelor's or equivalent, 175 (16.1%) have a Master's or equivalent, 65 people (6%) have a Ph.D., and 19 people (1.7%) have a professional degree such as an MD or JD.

345 people (31.7%) are single and looking, 250 (22.9%) are single but not looking, 286 (26.2%) are in a relationship, and 201 (18.4%) are married. There are striking differences between men and women: women are more likely to be in a relationship and less likely to be single and looking (33% of men vs. 19% of women). All of these numbers look a lot like the ones from 2009.

27 people (2.5%) are asexual, 119 (10.9%) are bisexual, 24 (2.2%) are homosexual, and 902 (82.8%) are heterosexual.

625 people (57.3%) described themselves as monogamous, 145 (13.3%) as polyamorous, and 298 (27.3%) didn't really know. These numbers were similar between men and women.

The most popular political view, at least according to the much-maligned categories on the survey, was liberalism, with 376 adherents and 34.5% of the vote. Libertarianism followed at 352 (32.3%), then socialism at 290 (26.6%), conservatism at 30 (2.8%) and communism at 5 (.5%).

680 people (62.4%) were consequentialist, 152 (13.9%) virtue ethicist, 49 (4.5%) deontologist, and 145 (13.3%) did not believe in morality.

801 people (73.5%) were atheist and not spiritual, 108 (9.9%) were atheist and spiritual, 97 (8.9%) were agnostic, 30 (2.8%) were deist or pantheist or something along those lines, and 39 people (3.5%) described themselves as theists (20 committed plus 19 lukewarm).

425 people (38.1%) grew up in some flavor of nontheist family, compared to 297 (27.2%) in committed theist families and 356 (32.7%) in lukewarm theist families. Common family religious backgrounds included Protestantism with 451 people (41.4%), Catholicism with 289 (26.5%), Judaism with 102 (9.4%), Hinduism with 20 (1.8%), Mormonism with 17 (1.6%), and traditional Chinese religion with 13 (1.2%).

There was much derision on the last survey over the average IQ supposedly being 146. Clearly Less Wrong has been dumbed down since then, since the average IQ has fallen all the way down to 140. Numbers ranged from 110 all the way up to 204 (for reference, Marilyn vos Savant, who holds the Guinness World Record for highest adult IQ ever recorded, has an IQ of 185).

89 people (8.2%) have never looked at the Sequences; a further 234 (21.5%) have only given them a quick glance. 170 people (15.6%) have read about 25% of the Sequences, 169 (15.5%) about 50%, 167 (15.3%) about 75%, and 253 people (23.2%) said they've read almost all of them. This last number is actually lower than the 302 people (27.7% of us) who have been here since the Overcoming Bias days, when the Sequences were still being written.

The other 72.3% of us had to find Less Wrong the hard way. 121 people (11.1%) were referred by a friend, 259 (23.8%) were referred by blogs, 196 (18%) were referred by Harry Potter and the Methods of Rationality, 96 (8.8%) were referred by a search engine, and only one person (.1%) was referred by a class in school.

Of the 259 people referred by blogs, 134 told me which blog referred them. There was a very long tail here, with most blogs only referring one or two people, but the overwhelming winner was Common Sense Atheism, which is responsible for 18 current Less Wrong readers. Other important blogs and sites include Hacker News (11 people), Marginal Revolution (6 people), TV Tropes (5 people), and a three-way tie for fifth between Reddit, SebastianMarshall.com, and You Are Not So Smart (3 people each).

Of those people who chose to list their karma, the mean value was 658 and the median was 40 (these numbers are pretty meaningless, because some people with zero karma put that down and other people did not).

Of those people willing to admit the time they spent on Less Wrong, after eliminating one outlier (sorry, but you don't spend 40579 minutes daily on LW; even I don't spend that long) the mean was 21 minutes and the median was 15 minutes. There were at least a dozen people in the two to three hour range, and the winner (well, except the 40579 guy) was someone who says he spends five hours a day.

I'm going to give all the probabilities in the form [mean, (25%-quartile, 50%-quartile/median, 75%-quartile)]. There may have been some problems here revolving around people who gave numbers like .01: I didn't know whether they meant 1% or .01%. Excel helpfully rounded all numbers down to two decimal places for me, and after a while I decided not to make it stop: unless I wanted to do geometric means, I can't do justice to really small gradations in probability. (A sketch of one possible cleanup heuristic appears after the list below.)

The Many Worlds hypothesis is true: 56.5, (30, 65, 80)
There is intelligent life elsewhere in the Universe: 69.4, (50, 90, 99)
There is intelligent life elsewhere in our galaxy: 41.2, (1, 30, 80)
The supernatural (ontologically basic mental entities) exists: 5.38, (0, 0, 1)
God (a supernatural creator of the universe) exists: 5.64, (0, 0, 1)
Some revealed religion is true: 3.40, (0, 0, .15)
Average person cryonically frozen today will be successfully revived: 21.1, (1, 10, 30)
Someone now living will reach age 1000: 23.6, (1, 10, 30)
We are living in a simulation: 19, (.23, 5, 33)
Significant anthropogenic global warming is occurring: 70.7, (55, 85, 95)
Humanity will make it to 2100 without a catastrophe killing >90% of us: 67.6, (50, 80, 90)
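As an aside on the data cleaning: below is a minimal sketch of how the percent-versus-decimal ambiguity mentioned above could be handled mechanically. The threshold rule and the normalize_to_percent helper are hypothetical illustrations, not what was actually done with the survey spreadsheet.

```python
# Hypothetical cleanup heuristic for survey probability answers.
# Assumption: answers above 1 were meant as percentages (e.g. 30),
# while answers of 1 or below were decimal probabilities (.01 -> 1%).
# This guess fails for anyone who truly meant ".5%" or "1%", which is
# exactly the ambiguity described above.

def normalize_to_percent(raw: float) -> float:
    """Coerce a raw survey answer to a percentage in [0, 100]."""
    if raw > 1:
        return raw          # already a percentage
    return raw * 100        # treat as a decimal probability

answers = [30.0, 0.01, 0.5, 95.0]
print([normalize_to_percent(a) for a in answers])  # [30.0, 1.0, 50.0, 95.0]
```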

There were a few significant demographic differences here. Women tended to be more skeptical of the extreme transhumanist claims like cryonics and anti-agathics (for example, men thought the current generation had a 24.7% chance of seeing someone live to 1000 years; women thought there was only a 9.2% chance). Older people were less likely to believe in transhumanist claims, a little less likely to believe in anthropogenic global warming, and more likely to believe in aliens living in our galaxy. Community veterans were more likely to believe in Many Worlds, less likely to believe in God, and - surprisingly - less likely to believe in cryonics (significant at the 5% level; could be a fluke). People who believed in high existential risk were more likely to believe in global warming, more likely to believe they had a higher IQ than average, and more likely to believe in aliens (I found that same result last time, and it puzzled me then too.)

Intriguingly, even though the sample size increased by more than 6 times, most of these results are within one to two percent of the numbers on the 2009 survey, so this supports taking them as a direct line to prevailing rationalist opinion rather than the contingent opinions of one random group.

Of possible existential risks, the most feared was a bioengineered pandemic, which got 194 votes (17.8%) - a natural pandemic got 89 (8.2%), making pandemics the overwhelming leader. Unfriendly AI followed with 180 votes (16.5%), then nuclear war with 151 (13.9%), ecological collapse with 145 votes (13.3%), economic/political collapse with 134 votes (12.3%), and asteroids and nanotech bringing up the rear with 46 votes each (4.2%).

The mean for the Singularity question is useless because of the very high numbers some people put in, but the median was 2080 (quartiles 2050, 2080, 2150). The Singularity has gotten later since 2009: the median guess then was 2067. There was some discussion about whether people might have been anchored by the previous mention of 2100 in the x-risk question. I changed the order after 104 responses to prevent this; a t-test found no significant difference between the responses before and after the change (in fact, the trend was in the wrong direction).
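For readers curious what that order-effect check looks like concretely, here is a sketch with made-up Singularity-year guesses standing in for the real responses; Welch's two-sample t-test from scipy is one standard way to run it.

```python
# Sketch of the anchoring check described above. The year lists are
# invented for illustration; the real test compared the 104 responses
# given before the question reordering with those given after.
from scipy import stats

before = [2050, 2080, 2100, 2150, 2060, 2090]
after = [2045, 2075, 2095, 2200, 2055, 2085]

t, p = stats.ttest_ind(before, after, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.3f}")  # large p -> no detectable order effect
```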

Only 49 people (4.5%) have never considered cryonics or don't know what it is. Of the rest, 388 (35.6% of all respondents) reject it, 583 (53.5%) are considering it, and 47 (4.3%) are already signed up for it. That's more than double the percentage signed up in 2009.

231 people (23.4% of respondents) have attended a Less Wrong meetup.

The average person was 37.6% sure their IQ would be above average - underconfident! Imagine that! (quartiles were 10, 40, 60). The mean was 54.5% for people whose IQs really were above average, and 29.7% for people whose IQs really were below average. There was a correlation of .479 (significant at less than 1% level) between IQ and confidence in high IQ.

Isaac Newton published his Principia Mathematica in 1687. Although people guessed dates as early as 1250 and as late as 1960, the mean was...1687 (quartiles were 1650, 1680, 1720). This marks the second consecutive year that the average answer to these difficult historical questions has been exactly right (to be fair, last time it was the median that was exactly right and the mean was all of eight months off). Let no one ever say that the wisdom of crowds is not a powerful tool.

The average person was 34.3% confident in their answer, but 41.9% of people got the question right (again with the underconfidence!). There was a highly significant correlation of r = -.24 between confidence and number of years error.
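As an illustration of the statistic being reported, here is a sketch of that confidence-versus-error correlation, with invented data (the survey's actual result was r = -.24):

```python
# Pearson correlation between stated confidence (%) and dating error
# (|guess - 1687| in years). The five data points are hypothetical.
from scipy import stats

confidence = [10, 30, 50, 70, 90]
years_error = [120, 80, 45, 20, 5]

r, p = stats.pearsonr(confidence, years_error)
print(f"r = {r:.2f}, p = {p:.3f}")  # negative r: more confidence, less error
```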

This graph may take some work to read. The x-axis is confidence. The y-axis is what percent of people were correct at that confidence level. The red line you recognize as perfect calibration. The thick green line is your results from the Newton problem. The black line is results from the general population, taken from a different calibration experiment run on 50 random trivia questions; take comparisons between the two with a grain of salt.

As you can see, Less Wrong does significantly better than the general population. However, there are a few areas of failure. First is that, as usual, people who put zero and one hundred percent had nonzero chances of getting the question right or wrong: 16.7% of people who put "0" were right, and 28.6% of people who put "100" were wrong (interestingly, people who put 100 did worse than the average of everyone else in the 90-99 bracket, of whom only 12.2% erred). Second of all, the line is pretty horizontal from zero to fifty or so. People who thought they had a >50% chance of being right had excellent calibration, but people who gave themselves a low chance of being right were poorly calibrated. In particular, I was surprised to see so many people put numbers like "0". If you're pretty sure Newton lived after the birth of Christ, but before the present day, that alone gives you a 1% chance of randomly picking the correct 20-year interval.
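For anyone who wants to reproduce a curve like this from the published spreadsheet, here is a minimal sketch of the binning step; the bin width and variable names are assumptions, not the exact procedure used for the graph.

```python
# Bin respondents by stated confidence and compute the fraction correct
# per bin - the y-values of a calibration curve like the one above.
import numpy as np

def calibration_curve(confidence_pct, correct, n_bins=10):
    conf = np.asarray(confidence_pct, dtype=float)
    corr = np.asarray(correct, dtype=float)
    edges = np.linspace(0, 100, n_bins + 1)
    centers, accuracy = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf >= lo) & (conf < hi) if hi < 100 else (conf >= lo)
        if mask.any():
            centers.append((lo + hi) / 2)
            accuracy.append(corr[mask].mean())  # fraction correct in bin
    return centers, accuracy

conf = [0, 10, 35, 50, 75, 90, 100]  # hypothetical stated confidences
hit = [1, 0, 0, 1, 1, 1, 0]          # 1 = guessed the correct interval
print(calibration_curve(conf, hit, n_bins=5))
```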

160 people wanted their responses kept private. They have been removed. The rest have been sorted by age to remove any information about the time they took the survey. I've converted what's left to a .xls file, and you can download it here.

513 comments

Comments sorted by top scores.

comment by Jack · 2011-12-04T20:39:02.051Z · LW(p) · GW(p)

People who believed in high existential risk were ... more likely to believe in aliens (I found that same result last time, and it puzzled me then too.)

Aliens existing but not yet colonizing multiple systems or broadcasting heavily is the response consistent with the belief that a Great Filter lies in front of us.

comment by Unnamed · 2011-12-05T19:20:42.619Z · LW(p) · GW(p)

Strength of membership in the LW community was related to responses for most of the questions. There were 3 questions related to strength of membership: karma, sequence reading, and time in the community. Since they were all correlated with each other and showed similar patterns, I standardized them and averaged them together into a single measure. Then I checked whether this measure of strength of membership in the LW community was related to answers on each of the other questions, for the 822 respondents (described in this comment) who answered at least one of the probability questions and used percentages rather than decimals (since I didn't want to take the time to recode the answers which were given as decimals).
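A minimal sketch of that composite, assuming three numeric columns; the data and scaling details are illustrative rather than the exact procedure used:

```python
# Standardize (z-score) each membership variable, then average the three
# z-scores into one "strength of membership" score per respondent.
import numpy as np

def zscore(x):
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

karma = [0, 40, 658, 1200, 15]              # hypothetical values
sequences_read_pct = [10, 50, 100, 75, 25]
months_in_community = [2, 12, 36, 30, 6]

membership = np.mean(
    [zscore(karma), zscore(sequences_read_pct), zscore(months_in_community)],
    axis=0,
)
print(membership)  # one composite score per respondent
```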

All effects described below have p < .01 (I also indicate when there is a nonsignificant trend with p<.2). On questions with categories I wasn't that rigorous - if there was a significant effect overall I just eyeballed the differences and reported which categories have the clearest difference (and I skipped some of the background questions which had tons of different categories and are hard to interpret).

Compared to those with a less strong membership in the LW community, those with a strong tie to the community are:

Background:

  • Gender - no difference
  • Age - no difference
  • Relationship Status - no difference
  • Sexual Orientation - no difference
  • Relationship Style - less likely to prefer monogamous, more likely to prefer polyamorous or to have no preference
  • Political Views - less likely to be socialist, more likely to be libertarian (but this is driven by the length of time in the community, which may reflect changing demographics - see my reply to this comment)
  • Religious Views - more likely to be atheist & not spiritual, especially less likely to be agnostic
  • Family Religion - no difference
  • Moral Views - more likely to be consequentialist
  • IQ - higher

Probabilities:

  • Many Worlds - higher
  • Aliens in the universe - lower (edited: I had mistakenly reversed the two aliens questions)
  • Aliens in our galaxy - trend towards lower (p=.04)
  • Supernatural - lower
  • God - lower
  • Religion - trend towards lower (p=.11, and this is statistically significant with a different analysis)
  • Cryonics - lower
  • Anti-Agathics - trend towards higher (p=.13) (this was the one question with a significant non-monotonic relationship: those with a moderately strong tie to the community had the highest probability estimate, while those with weak or strong ties had lower estimates)
  • Simulation - trend towards higher (p=.20)
  • Global Warming - higher
  • No Catastrophe - lower (i.e., think it is less likely that we will make it to 2100 without a catastrophe, i.e. think the chances of xrisk are higher)

Other Questions:

  • Singularity - sooner (this is statistically significant after truncating the outliers), and more likely to give an estimate rather than leave it blank
  • Type of XRisk - more likely to think that Unfriendly AI is the most likely XRisk
  • Cryonics Status - More likely to be signed up or to be considering it, less likely to be not planning to or to not have thought about it
Replies from: Unnamed, Normal_Anomaly
comment by Unnamed · 2011-12-05T21:04:56.027Z · LW(p) · GW(p)

Political Views - less likely to be socialist, more likely to be libertarian

I looked at this one a little more closely, and this difference in political views is driven almost entirely by the "time in community" measure of strength of membership in the LW community; it's not even statistically significant with the other two. I'd guess that is because LW started out on Overcoming Bias, which is a relatively libertarian blog, so the old timers tend to share those views. We've also probably added more non-Americans over time, who are more likely to be socialist.

All of the other relationships in the above post hold up when we replace the original measure of membership strength with one that is only based on the two variables of karma & sequence reading, but this one does not.

comment by Normal_Anomaly · 2011-12-07T21:58:05.881Z · LW(p) · GW(p)

Cryonics - lower

Cryonics Status - More likely to be signed up or to be considering it, less likely to be not planning to or to not have thought about it

So long-time participants were less likely to believe that cryonics would work for them but more likely to sign up for it? Interesting. This could be driven by any of: fluke, greater rationality, greater age and income, less akrasia, more willingness to take long-shot bets based on shutting up and multiplying.

Replies from: Unnamed, Randolf
comment by Unnamed · 2011-12-08T03:43:47.490Z · LW(p) · GW(p)

I looked into this a little more, and it looks like those who are strongly tied to the LW community are less likely to give high answers to p(cryonics) (p>50%), but not any more or less likely to give low answers (p<10%). That reduction in high answers could be a sign of greater rationality - less affect-heuristic-driven irrational exuberance about the prospects for cryonics - or just more knowledge about the topic. But I'm surprised that there's no change in the frequency of low answers.

There is a similar pattern in the relationship between cryonics status and p(cryonics). Those who are signed up for cryonics don't give a higher p(cryonics) on average than those who are not signed up, but they are less likely to give a probability under 10%. The group with the highest average p(cryonics) is those who aren't signed up but are considering it, and that's the group that's most likely to give a probability over 50%.

Here are the results for p(cryonics) broken down by cryonics status, showing what percent of each group gave p(cryonics)<.1, what percent gave p(cryonics)>.5, and what the average p(cryonics) is for each group. (I'm expressing p(cryonics) here as probabilities from 0-1 because I think it's easier to follow that way, since I'm giving the percent of people in each group.)

Never thought about it / don't understand (n=26): 58% give p<.1, 8% give p>.5, mean p=.17
No, and not planning to (n=289): 60% give p<.1, 6% give p>.5, mean p=.14
No, but considering it (n=444): 38% give p < .1, 18% give p>.5, mean p=.27
Yes - signed up or just finishing up paperwork (n=36): 39% give p<.1, 8% give p>.5, mean p=.21
Overall: 47% give p<.1, 13% give p>.5, mean p=.22
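For reference, a sketch of how a breakdown like the one above can be produced from the raw responses; the DataFrame layout and column names are assumptions.

```python
# Group respondents by cryonics status and summarize p(cryonics) per group:
# % giving p < .1, % giving p > .5, and the group mean.
import pandas as pd

df = pd.DataFrame({
    "status": ["not planning to", "considering", "considering", "signed up"],
    "p": [0.05, 0.30, 0.60, 0.15],  # hypothetical p(cryonics), 0-1 scale
})

summary = df.groupby("status")["p"].agg(
    pct_low=lambda s: 100 * (s < 0.1).mean(),
    pct_high=lambda s: 100 * (s > 0.5).mean(),
    mean_p="mean",
)
print(summary)
```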

Replies from: ewbrownv
comment by ewbrownv · 2011-12-12T23:20:24.863Z · LW(p) · GW(p)

The existential risk questions are a confounding factor here - if you think p(cryonics works) is 80% but p(xrisk ends civilization) is 50%, that pulls down your p(successful revival) considerably.

Replies from: Unnamed
comment by Unnamed · 2011-12-13T00:45:18.001Z · LW(p) · GW(p)

I wondered about that, but p(cryonics) and p(xrisk) are actually uncorrelated, and the pattern of results for p(cryonics) remains the same when controlling statistically for p(xrisk).

comment by Randolf · 2011-12-08T01:07:21.947Z · LW(p) · GW(p)

I think the main reason for this is that these persons have simply spent more time thinking about cryonics compared to other people. By spending time on this forum they have had a good chance of running into a discussion which inspired them to read about it and sign up. Or perhaps people who are interested in cryonics are also interested in other topics LW has to offer, and hence stay in this place. In either case, it follows that they are probably also more knowledgeable about cryonics and hence understand what cryotechnology can realistically offer currently or in the near future. In addition, these long-time guys might be more ethically open to things such as cryonics.

Replies from: gwern
comment by gwern · 2011-12-08T02:51:39.470Z · LW(p) · GW(p)

I think the main reason for this is that these persons have simply spent more time thinking about cryonics compared to other people.

I don't think this is obvious at all. If you had asked me in advance which of the following 4 possible sign-pairs would be true with increasing time spent thinking about cryonics:

  1. less credence, less sign-ups
  2. less credence, more sign-ups
  3. more credence, more sign-ups
  4. more credence, less sign-ups

I would have said 'obviously #3, since everyone starts from "that won't ever work" and moves up from there, and then one is that much more likely to sign up'.

The actual outcome, #2, would be the one I would expect least. (Hence, I am strongly suspicious of anyone claiming to expect or predict it as suffering from hindsight bias.)

Replies from: CarlShulman, brianm, Randolf
comment by CarlShulman · 2011-12-08T22:05:35.428Z · LW(p) · GW(p)

It is noted above that those with strong community attachment think that there is more risk of catastrophe. If human civilization collapses or is destroyed, then cryonics patients and facilities will also be destroyed.

comment by brianm · 2011-12-14T15:03:43.302Z · LW(p) · GW(p)

I would expect the result to be a more accurate estimate of success, combined with more sign-ups. #2 is an example of this if, in fact, the more accurate assessment is lower than the assessment of someone with a different level of information.

I don't think it's true that everyone starts from "that won't ever work" - we know some people think it might work, and we may be inclined to some wishful thinking or susceptibility to hype that inflates our likelihood above the conclusion we'd reach if we invested the time to consider the issue in more depth. It's also worth noting that we're not comparing the general public to those who've seriously considered signing up, but the Less Wrong population, who are probably a lot more exposed to the idea of cryonics.

I'd agree that it's not what I would have predicted in advance (having no more expectation for the assigned likelihood to go up than down with more research), but it would be predictable for someone proceeding from the premise that the Less Wrong community overestimates the likelihood of cryonics success compared to those who have done more research.

comment by Randolf · 2011-12-08T11:14:12.660Z · LW(p) · GW(p)

Yeah, I think you have a point. However, maybe the following explanation would be better: currently cryonics isn't likely to work. People who sign up for cryonics do research on the subject before or after signing up, and hence become aware that it isn't likely to work.

comment by Scott Alexander (Yvain) · 2011-12-04T19:14:42.997Z · LW(p) · GW(p)

Running list of changes for next year's survey:

  1. Ask who's a poster versus a lurker!
  2. A non-write-in "Other" for most questions
  3. Replace "gender" with "sex" to avoid complaints/philosophizing.
  4. Very very clear instructions to use percent probabilities and not decimal probabilities
  5. Singularity year question should have explicit instructions for people who don't believe in singularity
  6. Separate out "relationship status" and "looking for new relationships" questions to account for polys
  7. Clarify that research is allowed on the probability questions
  8. Clarify possible destruction of humanity in cryonics/antiagathics questions.
  9. What does it mean for aliens to "exist in the universe"? Light cone?
  10. Make sure people write down "0" if they have 0 karma.
  11. Add "want to sign up, but not available" as cryonics option.
  12. Birth order.
  13. Have children?
  14. Country of origin?
  15. Consider asking about SAT scores for Americans to have something to correlate IQs with.
  16. Consider changing morality to PhilPapers version.
Replies from: army1987, orthonormal, Larks, Jack, Jack, MixedNuts, lavalamp, prase, lavalamp, Jayson_Virissimo, selylindi, CharlesR, Pfft, Yvain, None, RobertLumley, DanArmak, ChrisHallquist, dlthomas, Armok_GoB, duckduckMOO
comment by A1987dM (army1987) · 2011-12-04T21:43:39.309Z · LW(p) · GW(p)

One about nationality (and/or native language)? I guess that would be much more relevant than e.g. birth order.

comment by orthonormal · 2011-12-04T19:32:37.840Z · LW(p) · GW(p)

Regarding #4, you could just write a % symbol to the right of each input box.

Replies from: army1987
comment by A1987dM (army1987) · 2011-12-04T21:47:50.707Z · LW(p) · GW(p)

BTW, I'd also disallow 0 and 100, and give the option of giving log-odds instead of probability (and maybe encourage doing that for probabilities below 1% or above 99%). Someone's “epsilon” might be 10^-4 whereas someone else's might be 10^-30.

Replies from: brilee, Luke_A_Somers, None
comment by brilee · 2011-12-05T15:32:08.046Z · LW(p) · GW(p)

I second that. See my post at http://lesswrong.com/r/discussion/lw/8lr/logodds_or_logits/ for a concise summary. Getting the LW survey to use log-odds would go a long way towards getting LW to start using log-odds in normal conversation.

comment by Luke_A_Somers · 2011-12-05T16:40:31.019Z · LW(p) · GW(p)

People will mess up the log-odds, though. Non-log odds seem safer.

Odds of ...

Someone living today living for over 1000 subjectively experienced years : No one living today living for over 1000 subjectively experienced years

[ ] : [ ]

Two fields instead of one, but it seems cleaner than any of the other alternatives.

Replies from: army1987
comment by A1987dM (army1987) · 2011-12-05T18:41:35.393Z · LW(p) · GW(p)

The point is not having to type lots of zeros (or of nines) with extreme probabilities (so that people won't weasel out and use ‘epsilon’); having to type 1:999999999999999 is no improvement over having to type 0.000000000000001.

Replies from: Kaj_Sotala, Emile
comment by Kaj_Sotala · 2011-12-05T21:37:33.546Z · LW(p) · GW(p)

Is such precision meaningful? At least for me personally, 0.1% is about as low as I can meaningfully go - I can't really discriminate between me having an estimate of 0.1%, 0.001%, or 0.0000000000001%.

Replies from: dlthomas
comment by dlthomas · 2011-12-05T21:41:05.441Z · LW(p) · GW(p)

I expect this is incorrect.

Specifically, I would guess that you can distinguish the strength of your belief that a lottery ticket you might purchase will win the jackpot from one in a thousand (a.k.a. 0.1%). Am I mistaken?

Replies from: MBlume, army1987, Kaj_Sotala
comment by MBlume · 2011-12-16T02:14:03.818Z · LW(p) · GW(p)

That's a very special case -- in the case of the lottery, it is actually possible-in-principle to enumerate BIG_NUMBER equally likely mutually-exclusive outcomes. Same with getting the works of Shakespeare out of your random number generator. The things under discussion don't have that quality.

comment by A1987dM (army1987) · 2011-12-07T11:31:56.077Z · LW(p) · GW(p)

I agree in principle, but on the other hand the questions on the survey are nowhere as easy as "what's the probability of winning such-and-such lottery".

comment by Kaj_Sotala · 2011-12-06T10:07:17.247Z · LW(p) · GW(p)

You're right, good point.

comment by Emile · 2011-12-05T20:24:00.236Z · LW(p) · GW(p)

Just type 1:1e15 (or 1e-15 if you don't want odds ratios).

comment by [deleted] · 2011-12-07T00:08:24.914Z · LW(p) · GW(p)

I'd force log odds, as they are the more natural representation and much less susceptible to irrational certainty and nonsense answers.

Someone has to actually try and comprehend what they are doing to troll logits; -INF seems a lot more out to lunch than p = 0.

I'd also like someone to go through the math to figure out how to correctly take the mean of probability estimates. I see no obvious reason why you can simply average probabilities on [0, 1]. The correct method would probably involve cooking up a hypothetical Bayesian judge that takes everyone's estimates as evidence.

Edit: since logits can be a bit unintuitive, I'd give a few calibration examples like odds of rolling a 6 on a die, odds of winning some lottery, fair odds, odds of surviving a car crash, etc.

Replies from: army1987, dlthomas
comment by A1987dM (army1987) · 2011-12-07T11:28:17.746Z · LW(p) · GW(p)

I'd force log odds, as they are the more natural representation and much less susceptible to irrational certainty and nonsense answers.

Personally, for probabilities roughly between 20% and 80% I find probabilities (or non-log odds) easier to understand than log-odds.

Someone has to actually try and comprehend what they are doing to troll logits; -INF seems a lot more out to lunch than p = 0.

Yeah. One of the reason why I proposed this is the median answer of 0 in several probability questions. (I'd also require a rationale in order to enter probabilities more extreme than 1%/99%.)

I'd also like someone to go through the math to figure out how to correctly take the mean of probability estimates. I see no obvious reason why you can simply average probabilities on [0, 1]. The correct method would probably involve cooking up a hypothetical Bayesian judge that takes everyone's estimates as evidence.

I'd go with the average of log-odds, but this requires all of them to be finite...
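To make the difference concrete, here is a sketch comparing a plain average of probabilities with an average taken in log-odds space; the four estimates are invented, and answers of exactly 0 or 1 would have to be excluded or clipped first, as noted.

```python
# Arithmetic mean of probabilities vs. mean of log-odds mapped back to a
# probability. The log-odds mean is pulled further by extreme answers.
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

estimates = [0.01, 0.10, 0.30, 0.90]  # hypothetical answers (no 0s or 1s)

arith_mean = sum(estimates) / len(estimates)
logodds_mean = inv_logit(sum(logit(p) for p in estimates) / len(estimates))

print(f"arithmetic mean: {arith_mean:.3f}")   # ~0.33
print(f"log-odds mean:   {logodds_mean:.3f}") # ~0.20, dragged down by 0.01
```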

comment by dlthomas · 2011-12-07T00:15:18.432Z · LW(p) · GW(p)

The correct method would probably involve cooking up a hypothetical Bayesian judge that takes everyone's estimates as evidence.

Weighting, in part, by the calibration questions?

Replies from: None
comment by [deleted] · 2011-12-07T00:27:28.243Z · LW(p) · GW(p)

I dunno how you would weight it. I think you'd want to have a maxentropy 'fair' judge at least for comparison.

Calibration questions are probably the least controversial way of weighting. Compare to, say, trying to weight using karma.

This might be an interesting thing to develop. A voting system backed up by solid bayes-math could be useful for more than just LW surveys.

Replies from: dlthomas
comment by dlthomas · 2011-12-07T00:29:02.390Z · LW(p) · GW(p)

It might be interesting to see what results are produced by several weighting approaches.

Replies from: None
comment by [deleted] · 2011-12-07T00:34:16.250Z · LW(p) · GW(p)

Yeah, that's what I was getting at with the maxentropy judge.

On further thought, I really should look into figuring this out. Maybe I'll do some work on it and post a discussion post. This could be a great group rationality tool.

comment by Larks · 2011-12-06T14:07:44.544Z · LW(p) · GW(p)

Publish draft questions in advance, so we can spot issues before the survey goes live.

comment by Jack · 2011-12-05T18:03:27.978Z · LW(p) · GW(p)

We should ask if people participated in the previous surveys.

comment by Jack · 2011-12-04T20:43:02.211Z · LW(p) · GW(p)

I'd love a specific question on moral realism instead of leaving it as part of the normative ethics question. I'd also like to know about psychiatric diagnoses (autism spectrum, ADHD, depression, whatever else seems relevant)-- perhaps automatically remove those answers from a spreadsheet for privacy reasons.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2011-12-05T01:27:02.601Z · LW(p) · GW(p)

I don't care about moral realism, but psychiatric diagnoses (and whether they're self-diagnosed or formally diagnosed) would be interesting.

comment by MixedNuts · 2011-12-04T19:30:56.056Z · LW(p) · GW(p)

You are aware that if you ask people for their sex but not their gender, and say something like "we have more women now", you will be philosophized into a pulp, right?

Replies from: wedrifid, FiftyTwo, ShardPhoenix
comment by wedrifid · 2011-12-06T10:53:05.978Z · LW(p) · GW(p)

You are aware that if you ask people for their sex but not their gender, and say something like "we have more women now", you will be philosophized into a pulp, right?

Only if people here are less interested in applying probability theory than they are in philosophizing about gender... Oh.

comment by FiftyTwo · 2011-12-05T22:19:33.454Z · LW(p) · GW(p)

Why not ask for both?

Replies from: Emile, MixedNuts
comment by Emile · 2011-12-06T11:58:50.891Z · LW(p) · GW(p)

Because the two are so highly correlated that having both would give us almost no extra information. One goal of the survey should be to maximize the useful-info-extracted / time-spent-on-it ratio, hence also the avoidance of write-ins for many questions (which make people spend more time on the survey, to get results that are less exploitable) (a write-in for gender works because people are less likely to write a manifesto for that than for politics).

comment by MixedNuts · 2011-12-06T11:07:19.909Z · LW(p) · GW(p)

Because having a "gender" question causes complaints and philosophizing, which Yvain wants to avoid.

comment by ShardPhoenix · 2011-12-06T08:49:24.154Z · LW(p) · GW(p)

How about, "It's highly likely that we have more women now"?

comment by lavalamp · 2011-12-05T20:31:01.822Z · LW(p) · GW(p)

Suggestion: "Which of the following did you change your mind about after reading the sequences? (check all that apply)"

  • [] Religion
  • [] Cryonics
  • [] Politics
  • [] Nothing
  • [] et cetera.

Many other things could be listed here.

Replies from: TheOtherDave, Alejandro1
comment by TheOtherDave · 2011-12-05T21:27:55.866Z · LW(p) · GW(p)

I'm curious, what would you do with the results of such a question?

For my part, I suspect I would merely stare at them and be unsure what to make of a statistical result that aggregates "No, I already held the belief that the sequences attempted to convince me of" with "No, I held a contrary belief and the sequences failed to convince me otherwise." (That it also aggregates "Yes, I held a contrary belief and the sequences convinced me otherwise." and "Yes, I initially held the belief that the sequences attempted to convince me of, and the sequences convinced me otherwise" is less of a concern, since I expect the latter group to be pretty small.)

Replies from: lavalamp, taryneast
comment by lavalamp · 2011-12-05T22:14:14.808Z · LW(p) · GW(p)

Originally I was going to suggest asking, "what were your religious beliefs before reading the sequences?"-- and then I succumbed to the programmer's urge to solve the general problem.

However, I guess measuring how effective the sequences are at causing people to change their mind is something that a LW survey can't do, anyway (you'd need to also ask people who read the sequences but didn't stick around to accurately answer that).

Mainly I was curious how many deconversions the sequences caused or hastened.

comment by taryneast · 2011-12-06T17:34:09.380Z · LW(p) · GW(p)

Ok, so use radio-buttons: "believed before, still believe" "believed before, changed my mind now" "didn't believe before, changed my mind now" "never believed, still don't"

Replies from: TheOtherDave
comment by TheOtherDave · 2011-12-06T19:25:17.333Z · LW(p) · GW(p)

...and "believed something before, believe something different now"

comment by Alejandro1 · 2011-12-05T23:03:44.901Z · LW(p) · GW(p)

I think the question is too vague as formulated. Does any probability update, no matter how small, count as changing your mind? But if you ask for precise probability changes, then the answers will likely be nonsense because most people (even most LWers, I'd guess) don't keep track of numeric probabilities, just think "oh, this argument makes X a bit more believable" and such.

comment by prase · 2011-12-05T20:01:46.343Z · LW(p) · GW(p)

When asking for race/ethnicity, you should really drop the standard American classification into White - Hispanic - Black - Indian - Asian - Other. From a non-American perspective this looks weird, especially the "White Hispanic" category. A Spaniard is White Hispanic, or just White? If only White, how does the race change when one moves to another continent? And if White Hispanic, why not have also "Italic" or "Scandinavic" or "Arabic" or whatever other peninsula-ic races?

Since I believe the question was intended to determine the cultural background of LW readers, I am surprised that there was no question about country of origin, which would be more informative. There is certainly greater cultural difference between e.g. Turks (White, non-Hispanic I suppose) and White non-Hispanic Americans than between the latter and their Hispanic compatriots.

Also, making a statistic based on nationalities could help people determine whether there is a chance for a meetup in their country. And it would be nice to know whether LW has regular readers in Liechtenstein, of course.

Replies from: None, None, None, NancyLebovitz
comment by [deleted] · 2011-12-22T03:03:10.393Z · LW(p) · GW(p)

I was also...well, not surprised per se, but certainly annoyed to see that "Native American" in any form wasn't even an option. One could construe that as revealing, I suppose.

I don't know how relevant the question actually is, but if we want to track ancestry and racial, ethnic or cultural group affiliation, the following scheme is pretty hard to mess up:

Country of origin:
Country of residence:
Primary Language:
Native Language (if different):
Heritage language (if different):

Note: A heritage language is one spoken by your family or identity group.

Heritage group:

Diaspora: Means your primary heritage and identity group moved to the country you live in within historical or living memory, as colonists, slaves, workers or settlers.


- European diaspora ("white" North America, Australia, New Zealand, South Africa, etc.)
- African diaspora ("black" in the US, West Indian, more recent African emigrant groups; also North African diaspora)
- Asian diaspora (includes Turkic, Arab, Persian, Central and South Asian, Siberian native)

Indigenous: Means your primary heritage and identity group was resident in the following location prior to 1400, OR prior to the arrival of the majority culture in antiquity (for example: Ainu, Basque, Taiwanese native, etc.):


- Africa
- Asia
- Europe
- North America (between Panama and Canada; also includes Greenland and the Caribbean)
- Oceania (including Australia)
- South America

Mixed: Select two or more:


- European Diaspora
- African Diaspora
- Asian Diaspora
- African Indigenous
- American Indigenous
- Asian Indigenous
- European Indigenous
- Oceania Indigenous

What the US census calls "Non-white Hispanic" would be marked as "Mixed" > "European Diaspora" + "American Indigenous" with Spanish as either a Native or Heritage language. Someone who identifies as (say) Mexican-derived but doesn't speak Spanish at all would be impossible to tell from someone who was Euro-American and Cherokee who doesn't speak Cherokee, but no system is perfect...

Replies from: wedrifid
comment by wedrifid · 2011-12-22T04:22:38.096Z · LW(p) · GW(p)

EDIT: Not sure why the formatting won't preserve my linebreaks, apologies for the garbled table.

Put two spaces after a line if you want a linebreak.

comment by [deleted] · 2011-12-08T10:08:08.483Z · LW(p) · GW(p)

Most LessWrong posters and readers are American, perhaps even the vast majority (I am not). Hispanic Americans, white Americans, and black Americans differ from one another culturally and socio-economically, not just on average but in systemic ways, regardless of whether the person in question defines himself as Irish American, Kenyan American, white American, or just plain American. From the US we have robust sociological data that allows us to compare LWers based on this information. The same is true of race in Latin America, parts of Africa, and more recently Western Europe.

Nationality is not the same thing as racial or even ethnic identity in multicultural societies.

Considering every now and then people bring up a desire to lower barriers to entry for "minorities" (whatever that means in a global forum), such stats are useful for those who argue on such issues and also for ascertaining certain biases.

Adding a nationality and/or citizenship question would probably be useful though.

Replies from: prase
comment by prase · 2011-12-08T18:37:51.905Z · LW(p) · GW(p)

Nationality is not the same thing as racial or even ethnic identity in multicultural societies.

I have not said that it is. I was objecting to the arbitrariness of a "Hispanic race": I believe that the difference between Hispanic White Americans and non-Hispanic White Americans is not significantly higher than the difference between both of those groups and non-Americans, and that the number of non-Americans among LW users would be higher than the 3.8% reported for Hispanics. I am not sure what exact sociological data we may extract from the survey, but in any case, comparison to standard American sociological datasets will be problematic because the LW data are contaminated by the presence of non-Americans, and there is no way to say by how much, because people were not asked about that.

Replies from: None
comment by [deleted] · 2011-12-08T19:01:09.601Z · LW(p) · GW(p)

I have not said that it is.

I didn't mean to imply you did; I just wanted to emphasise that data is gained by the racial breakdown. Especially in the American context, race sits at the strange junction of appearance, class, heritage, ethnicity, religion and subculture, and it's hard to capture it by any of these metrics.

I am not sure what exact sociological data we may extract from the survey, but in any case, comparison to standard American sociological datasets will be problematic because the LW data are contaminated by the presence of non-Americans, and there is no way to say by how much, because people were not asked about that.

Once we have data on how many are American (and this is something we really should have) this will be easier to say.

comment by [deleted] · 2011-12-08T10:09:00.782Z · LW(p) · GW(p)

If only White, how does the race change when one moves to another continent? And if White Hispanic, why not have also "Italic" or "Scandinavic" or "Arabic" or whatever other peninsula-ic races?

Because we don't have as much useful sociological data on this. Obviously we can start collecting data on any of the proposed categories, but if we're the only ones, it won't much help us figure out how LW differs from what one might expect of a group that fits its demographic profile.

Since I believe the question was intended to determine the cultural background of LW readers, I am surprised that there was no question about country of origin, which would be more informative. There is certainly greater cultural difference between e.g. Turks (White, non-Hispanic I suppose) and White non-Hispanic Americans than between the latter and their Hispanic compatriots.

Much of the difference in the example of Turks is captured by the Muslim family background question.

Replies from: prase
comment by prase · 2011-12-08T18:22:38.914Z · LW(p) · GW(p)

Much of the difference in the example of Turks is captured by the Muslim family background question.

Much, but not most. Religion is easy to ascertain, but there are other cultural differences which are more difficult to classify yet still significant.* Substitute Egyptian Christians for Turks and the example will still work. (And not because of theological differences between Coptic and Protestant Christianity.)

*) Among the culturally determined attributes are: political opinion, musical taste and general aesthetic preferences, favourite food, familiarity with different literature and films, ways of relaxation, knowledge of geography and history, language(s), moral code. Most of these things are independent of religion or only very indirectly influenced by it.

comment by NancyLebovitz · 2011-12-05T22:45:54.585Z · LW(p) · GW(p)

Offer a text field for race. You'll get some distances, not to mention "human" or "other", but you could always use that to find out whether having a contrary streak about race/ethnicity correlates with anything.

If you want people to estimate whether a meetup could be worth it, I recommend location rather than nationality-- some nations are big enough that just knowing nationality isn't useful.

comment by lavalamp · 2011-12-05T04:20:39.952Z · LW(p) · GW(p)

Suggestion: add "cryocrastinating" as a cryonics option.

comment by Jayson_Virissimo · 2011-12-05T11:15:51.115Z · LW(p) · GW(p)

I think using your stipulative definition of "supernatural" was a bad move. I would be very surprised if I asked a theologian to define "supernatural" and they replied "ontologically basic mental entities". Even as a rational reconstruction of their reply, it would be quite a stretch. Using such specific definitions of contentious concepts isn't a good idea, if you want to know what proportion of Less Wrongers self-identify as atheist/agnostic/deist/theist/polytheist.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-12-05T15:05:54.220Z · LW(p) · GW(p)

OTOH, using a vague definition isn't a good idea either, if you want to know something about what Less Wrongers believe about the world.

I had no problem with the question as worded; it was polling about LWers confidence in a specific belief, using terms from the LW Sequences. That the particular belief is irrelevant to what people who self-identify as various groups consider important about that identification is important to remember, but not in and of itself a problem with the question.

But, yeah... if we want to know what proportion of LWers self-identify as (e.g.) atheist, that question won't tell us.

comment by selylindi · 2011-12-05T19:37:22.018Z · LW(p) · GW(p)

Yet another alternate, culture-neutral way of asking about politics:

Q: How involved are you in your region's politics compared to other people in your region?
A: [choose one]
() I'm among the most involved
() I'm more involved than average
() I'm about as involved as average
() I'm less involved than average
() I'm among the least involved

Replies from: FiftyTwo
comment by FiftyTwo · 2011-12-05T22:21:04.355Z · LW(p) · GW(p)

Requires people to self assess next to a cultural baseline, and self assessments of this sort are notoriously inaccurate. (I predict everyone will think they have above-average involvement).

Replies from: Prismattic, wedrifid, thomblake, DanArmak, NancyLebovitz
comment by Prismattic · 2011-12-14T04:01:53.226Z · LW(p) · GW(p)

Within a US-specific context, I would eschew these comparisons to a notional average and use the following levels of participation:

0 = indifferent to politics and ignorant of current events
1 = attentive to current events, but does not vote
2 = votes in presidential elections, but irregularly otherwise
3 = always votes
4 = always votes and contributes to political causes
5 = always votes, contributes, and engages in political activism during election seasons
6 = always votes, contributes, and engages in political activism both during and between election seasons
7 = runs for public office

I suspect that the average US citizen of voting age is a 2, but I don't have data to back that up, and I am not motivated to research it. I am a 4, so I do indeed think that I am above average.

Those categories could probably be modified pretty easily to match a parliamentary system by leaving out the reference to presidential elections and just having "votes irregularly" and "always votes"

Editing to add -- for mandatory voting jurisdictions, include a caveat that "spoiled ballot = did not vote"

Replies from: TheOtherDave, Nornagest, army1987, thomblake
comment by TheOtherDave · 2011-12-14T05:01:15.542Z · LW(p) · GW(p)

Personally, I'm not sure I necessarily consider the person who runs for public office to be at a higher level of participation than the person who works for them.

comment by Nornagest · 2011-12-16T17:44:18.153Z · LW(p) · GW(p)

I agree denotationally with that estimate, but I think you're putting too much emphasis on voting in at least the 0-4 range. Elections (in the US) only come up once or exceptionally twice a year, after all. If you're looking for an estimate of politics' significance to a person's overall life, I think you'd be better off measuring degree of engagement with current events and involvement in political groups -- the latter meaning not only directed activism, but also political blogs, non-activist societies with a partisan slant, and the like.

For example: do you now, or have you ever, owned a political bumper sticker?

Replies from: TimS
comment by TimS · 2011-12-16T18:33:04.757Z · LW(p) · GW(p)

Maybe: "How frequently do you visit websites/read media that have an explicit political slant?"

comment by A1987dM (army1987) · 2011-12-15T15:51:30.603Z · LW(p) · GW(p)

There might be people who don't always (or even usually) vote yet they contribute to political causes/engage in political activism, for certain values of “political” at least.

comment by thomblake · 2011-12-15T16:27:05.357Z · LW(p) · GW(p)

spoiled ballot = did not vote

I had not before encountered this form of protest. If I were living in a place with mandatory voting and anonymous ballots, I would almost surely write my name on the ballot to spoil it.

Replies from: wedrifid, army1987
comment by wedrifid · 2011-12-15T16:29:26.157Z · LW(p) · GW(p)

I had not before encountered this form of protest. If I were living in a place with mandatory voting and anonymous ballots, I would almost surely write my name on the ballot to spoil it.

I do and I do. :)

comment by A1987dM (army1987) · 2011-12-19T17:54:19.474Z · LW(p) · GW(p)

I have never actually spoiled a ballot in a municipality-or-higher-level election (though voting for a list with hardly any chance whatsoever of passing the election threshold has a very similar effect), but in high school I did vote for Homer Simpson as a students' representative, and there were lots of similarly hilarious votes, including (IIRC) ones for God, Osama bin Laden, and Silvio Berlusconi.

comment by wedrifid · 2011-12-15T16:32:20.541Z · LW(p) · GW(p)

Requires people to self assess next to a cultural baseline, and self assessments of this sort are notoriously inaccurate. (I predict everyone will think they have above-average involvement).

I'd actually have guessed an average of below average.

comment by thomblake · 2011-12-15T16:23:49.444Z · LW(p) · GW(p)

I predict everyone will think they have above-average involvement

Bad prediction. While it's hard to say since so few people around here actually vote, my involvement in politics is close enough to 0 that I'd be very surprised if I was more involved than average.

comment by DanArmak · 2011-12-15T15:23:46.258Z · LW(p) · GW(p)

(I predict everyone will think they have above-average involvement).

I have exactly zero involvement and so I'd never think that.

comment by NancyLebovitz · 2011-12-05T22:36:41.334Z · LW(p) · GW(p)

I think I have average or below-average involvement.

Maybe it would be better to ask about the hours/year spent on politics.

Replies from: FiftyTwo
comment by FiftyTwo · 2011-12-16T02:28:27.303Z · LW(p) · GW(p)

For comparison, what would you say the average person's level of involvement in politics consists of? (To avoid contamination, don't research or overthink the question; just give us the average you were comparing yourself to.)

Edit: The intuitive average other commenters compared themselves to would also be of interest.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2011-12-16T16:35:59.765Z · LW(p) · GW(p)

Good question. I don't know what the average person's involvement is, and I seem to know a lot of people (at least online) who are very politically involved, so I may be misestimating whether my political activity is above or below average.

Replies from: FiftyTwo
comment by FiftyTwo · 2011-12-19T22:07:12.364Z · LW(p) · GW(p)

On posting this I made the prediction that the average assumed by most lesswrong commenters would be above the actual average level of participation.

I hypothesise this is because most LW commenters come from relatively educated or affluent social groups, where political participation is quite high, whereas there are large portions of the population who do not participate in politics at all (in the US and UK a significant percentage don't even vote in the 4-yearly national elections).

Because of this I would be very sceptical of self reported participation levels, and would agree a quantifiable measure would be better.

comment by CharlesR · 2011-12-05T07:44:12.732Z · LW(p) · GW(p)

You should clarify in the antiagathics question that the person reaches the age of 1000 without the help of cryonics.

comment by Pfft · 2011-12-06T01:01:38.311Z · LW(p) · GW(p)

Replacing gender with sex seems like the wrong way to go to me. For example, note how Randall Munroe asked about sex, then regretted it.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2012-04-02T21:49:54.810Z · LW(p) · GW(p)

I don't think I'd describe that post as regretting asking "do you have a Y chromosome". He's apologizing for asking for data for one purpose (checking with colorblindness) and then using it for another (color names if you're a guy/girl).

comment by Scott Alexander (Yvain) · 2011-12-07T13:11:27.980Z · LW(p) · GW(p)

Everyone who's suggesting changes: you are much more likely to get your way if you suggest a specific alternative. For example, instead of "handle politics better", something like "your politics question should have these five options: a, b, c, d, and e." Or instead of "use a more valid IQ measure", something more like "Here's a site with a quick and easy test that I think is valid"

Replies from: ChrisHallquist, ChrisHallquist
comment by ChrisHallquist · 2011-12-07T21:21:22.054Z · LW(p) · GW(p)

In that case: use the exact ethics questions from the PhilPapers Survey (http://philpapers.org/surveys/), probably minus lean/accept distinction and the endless drop-down menu for "other."

comment by ChrisHallquist · 2011-12-15T04:18:50.348Z · LW(p) · GW(p)

For IQ: maybe you could nudge people to greater honesty by splitting up the question: (1) have you ever taken an IQ test with [whatever features were specified on this year's survey], yes or no? (2) if yes, what was your score?

Replies from: twanvl
comment by twanvl · 2011-12-15T12:21:29.964Z · LW(p) · GW(p)

Also, "ever" might be a bit too long. IQs and IQ tests can change over time, so maybe you should ask "have you taken an IQ test [with constraints] in the last 10 years?"

comment by [deleted] · 2011-12-22T01:53:08.855Z · LW(p) · GW(p)

Replace "gender" with "sex" to avoid complaints/philosophizing.

http://en.wikipedia.org/wiki/Intersex

Otherwise agreed.

Replies from: None
comment by [deleted] · 2012-08-07T16:25:35.020Z · LW(p) · GW(p)

Strongly disagree with previous self here. I do not think replacing "gender" with "sex" avoids complaints or "philosophizing", and "philosophizing" in context feels like a shorthand/epithet for "making this more complex than prevailing, mainstream views on gender."

For a start, it seems like even "sex" in the sense used here is getting at a mainly-social phenomenon: that of sex assigned at birth. This is a judgement call by the doctors and parents. The biological correlates used to make that decision are just weighed in aggregate; some people are always going to throw an exception. If you're not asking about the size of gametes and their delivery mechanism, the hormonal makeup of the person, their reproductive anatomy where applicable, or their secondary sexual characteristics, then "sex" is really just asking the "gender" question but hazily referring to biological characteristics instead.

Ultimately, gender is what you're really asking for. Using "sex" as a synonym blurs the data into unintelligibility for some LWers; pragmatically, it also amounts to a tacit "screw you" to trans people. I suggest biting the bullet and dealing with the complexity involved in asking that question -- in many situations people collecting that demographic info don't actually need it, but it seems like useful information for LessWrong.

A suggested approach:

Two optional questions with something like the following phrasing:

Optional: Gender (pick what best describes how you identify):

- Male
- Female
- Genderqueer, genderfluid, other
- None, neutrois, agender
- Prefer not to say

Optional: Sex assigned at birth:
- Male
- Female
- Intersex
- Prefer not to say

comment by RobertLumley · 2011-12-19T16:11:12.905Z · LW(p) · GW(p)

A series of four questions, one on each Myers-Briggs indicator, would be good, although I'm sure the data would be woefully unsurprising. Perhaps link to an online test if people don't know it already.

comment by DanArmak · 2011-12-15T15:25:40.287Z · LW(p) · GW(p)

Very very clear instructions to use percent probabilities and not decimal probabilities

You can accomplish this by adding a percent sign in the survey itself, to the right of every textbox entry field.

Edit: sorry, already suggested.

comment by ChrisHallquist · 2011-12-07T05:00:11.971Z · LW(p) · GW(p)

As per my previous comments on this, separate out normative ethics and meta-ethics.

And maybe be extra-clear on not answering the IQ question unless you have official results? Or is that a lost cause?

comment by dlthomas · 2011-12-05T17:57:52.525Z · LW(p) · GW(p)

Very very clear instructions to use percent probabilities and not decimal probabilities

I would much rather see a choice of units.

comment by Armok_GoB · 2011-12-04T20:22:53.152Z · LW(p) · GW(p)

That list is way, way too short. I entirely gave up on the survey partway through because an actual majority of the questions were inapplicable or downright offensive to my sensibilities, or just incomprehensible, or I couldn't answer them for some other reason.

Not that I can think of anything that WOULDN'T have that effect on me without being specifically tailored to me, which sort of destroys the point of having a survey... Maybe I'm just incompatible with surveys in general.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2011-12-04T21:00:04.470Z · LW(p) · GW(p)

Would you be willing to write a discussion post about the questions you want to answer?

Replies from: Armok_GoB
comment by Armok_GoB · 2011-12-04T23:47:57.743Z · LW(p) · GW(p)

No, because I fail utterly at writing things, and because my complaints are way too many, so it'd take too much time to type them out.

Replies from: MixedNuts
comment by MixedNuts · 2011-12-04T23:51:15.342Z · LW(p) · GW(p)

Random sample of complaints?

Replies from: Armok_GoB
comment by Armok_GoB · 2011-12-05T15:04:21.008Z · LW(p) · GW(p)

Good idea!

Many of the questions were USA-centric, assuming people grew up with some religion or political climate common in the US. I didn't get indoctrinated into Republicans or Democrats; I got indoctrinated into environmentalism, and there's just no way to map that onto American politics, where it's an issue rather than a faction. And it might in some ways be the closest match on the religion question as well, being a question of fact that I later had to try to de-bias myself on.

Replies from: NancyLebovitz, MixedNuts
comment by NancyLebovitz · 2011-12-05T16:01:12.383Z · LW(p) · GW(p)

The US-centricity is a real problem, and probably worth a discussion post. Do political beliefs tell us something important about LW posters, and if so, are there general ways, not tied to a particular country, to ask about them? If there isn't a general way, how can this be handled?

Question I'd like to see added: how much attention do you give to politics? That question should probably be split between attention to theory, attention to news, and attention to trying to make things happen.

Replies from: kilobug
comment by kilobug · 2011-12-05T16:21:17.644Z · LW(p) · GW(p)

I suggested in the survey thread asking for Political Compass scores instead of a liberal/conservative/libertarian/socialist question. The Compass is slightly US-biased, but it contains enough questions for the end result to be meaningful even so. How much attention you give to politics would be an interesting question; I second it.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2011-12-05T16:27:16.958Z · LW(p) · GW(p)

I suspect the compass is very US-based, though better than a short list or a single dimension.

There's one more thing about interest in politics that I had trouble phrasing. There's a thing I call practical politics, which I don't do: working for particular candidates, or being one yourself, or knowing in some detail the right place to push to get something to happen or not happen. It's the step beyond voting, emailing your representative, and signing petitions.

I'd be surprised if very many LWers do practical politics, but that might just be typical mind fallacy.

Replies from: army1987
comment by A1987dM (army1987) · 2011-12-05T19:12:53.748Z · LW(p) · GW(p)

They do admit they're biased, but the bias is not exactly American (indeed, they are British). And given that LW has lots of readers from non-US western countries but few from (say) China, while not ideal, it would be a lot better than the very US-centric answers in the last survey. (For example, I'd bet that a lot of people would have self-identified as socialist libertarians if given the chance.)

comment by MixedNuts · 2011-12-06T11:06:10.440Z · LW(p) · GW(p)

Amusingly, Yvain is not American. Though he probably absorbed US culture anyway.

Replies from: Emile
comment by Emile · 2011-12-06T11:45:24.966Z · LW(p) · GW(p)

I'm pretty sure Yvain is American, just studying abroad.

Replies from: MixedNuts
comment by MixedNuts · 2011-12-06T12:00:16.275Z · LW(p) · GW(p)

Uh, you're right. I had him as Irish moving to the US, rather than the other way around.

comment by duckduckMOO · 2011-12-06T14:44:36.645Z · LW(p) · GW(p)

On politics I would like a way to say I don't identify with any political theory. To me this is like asking "What religion do you identify most with?" with options Christianity, Islam, Hinduism, and other, plus the option to click no boxes. If, as an atheist with no religious ties, you click "other", you're in with Shintoists and Satanists and the other unmentioned religions. If you don't answer, you're just not giving an answer - you could simply not want to say. In any case, the question frames things so that you have to subscribe to a questionable framework to answer it at all.

Solutions:

An option like the morality one, perhaps: zero-identification options for "other" and "prefer not to say", retaining the ability to click no boxes (though there are probably other reasons to click no boxes). Or, as someone suggested in the original thread, another question to gauge how much you identify with something. As it stands, if I had to pick an answer I could probably dredge up some preference for one theory or another, but I'd be in the same box as someone actively promoting, and a part of, whatever option they clicked. Boxes for strongly identify, identify, and weakly identify, maybe. Or something.

Got to go.

edit: Could someone kindly explain the downvotes? I'm guessing: too esoteric? Personally not bothered? Bothered that I'm bothered?

edit2: just realised some line breaks in the comment box haven't translated to line breaks in the published comment. Is the post just hard to read?

comment by Craig_Heldreth · 2011-12-04T20:00:47.690Z · LW(p) · GW(p)

Intriguingly, even though the sample size increased by more than 6 times, most of these results are within one to two percent of the numbers on the 2009 survey, so this supports taking them as a direct line to prevailing rationalist opinion rather than the contingent opinions of one random group.

This is not just intriguing. To me this is the single most significant finding in the survey.

Replies from: steven0461, endoself
comment by steven0461 · 2011-12-05T03:16:44.604Z · LW(p) · GW(p)

It's also worrying, because it means we're not getting better on average.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2011-12-05T12:59:48.228Z · LW(p) · GW(p)

If the readership of LessWrong has gone up similarly in that time, then I would not expect to see an improvement, even if everyone who reads LessWrong improves.

Replies from: steven0461, curiousepic
comment by steven0461 · 2011-12-05T23:06:43.113Z · LW(p) · GW(p)

Yes, I was thinking that. Suppose it takes a certain fixed amount of time for any LessWronger to learn the local official truth. Then if the population grows exponentially, you'd expect the fraction that knows the local official truth to remain constant, right? But I'm not sure the population has been growing exponentially, and even so you might have expected the local official truth to become more accurate over time, and you might have expected the community to get better over time at imparting the local official truth.
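A quick sanity check of that intuition (a sketch, assuming pure exponential growth): if the population is $N(t) = N_0 e^{kt}$ and only members who joined more than $T$ ago have learned the local official truth, then the knowing fraction is $N(t-T)/N(t) = e^{-kT}$, which is indeed constant in $t$.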

Regardless of what we should have expected, my impression is LessWrong as a whole tends to assume that it's getting closer to the truth over time. If that's not happening because of newcomers, that's worth worrying about.

Replies from: JoachimSchipper
comment by JoachimSchipper · 2011-12-06T15:29:26.817Z · LW(p) · GW(p)

Note that it is possible for newcomers to hold the same inaccurate beliefs as their predecessors while the core improves its knowledge or expands in size. In fact, as LW grows it will have to recruit from, say, Hacker News (where I first heard of LW) instead of Singularity lists, producing newcomers less in tune with the local truth.

(Unnamed's comment shows interesting differences in opinion between a "core" and the rest, but (s)he seems to have skipped the only question with an easily-verified answer, i.e. Newton.)

Replies from: Unnamed
comment by Unnamed · 2011-12-06T18:52:47.405Z · LW(p) · GW(p)

The calibration question was more complicated to analyze, but now I've looked at it and it seems like core members were slightly more accurate at estimating the correct year (p=.05 when looking at size of the error, and p=.12 when looking at whether or not it was within the 20-year range), but there's no difference in calibration.

("He", btw.)

comment by curiousepic · 2011-12-06T18:53:22.009Z · LW(p) · GW(p)

Couldn't the current or future data be correlated with length of readership to determine this?

comment by endoself · 2011-12-05T03:53:48.861Z · LW(p) · GW(p)

It just means that we're at a specific point in memespace. The hypothesis that we are all rational enough to identify the right answers to all of these questions wouldn't explain the observed degree of variance.

comment by J_Taylor · 2011-12-04T20:27:29.134Z · LW(p) · GW(p)

The supernatural (ontologically basic mental entities) exists: 5.38, (0, 0, 1)

God (a supernatural creator of the universe) exists: 5.64, (0, 0, 1)

??

Replies from: Unnamed, Sophronius, byrnema, scav, Jayson_Virissimo
comment by Unnamed · 2011-12-04T21:25:19.378Z · LW(p) · GW(p)

P(Supernatural) What is the probability that supernatural events, defined as those involving ontologically basic mental entities, have occurred since the beginning of the universe?

P(God) What is the probability that there is a god, defined as a supernatural (see above) intelligent entity who created the universe?

So deism (God creating the universe but not being involved in the universe once it began) could make p(God) > p(Supernatural).

Looking at the data by individual instead of in aggregate, 82 people have p(God) > p(Supernatural); 223 have p(Supernatural) > p(God).

Replies from: J_Taylor
comment by J_Taylor · 2011-12-04T21:31:04.847Z · LW(p) · GW(p)

Given this, the numbers no longer seem anomalous. Thank you.

Replies from: CharlesR
comment by CharlesR · 2011-12-05T07:52:53.656Z · LW(p) · GW(p)

Except that the question specified "God" as an ontologically basic mental entity.

Replies from: MixedNuts
comment by MixedNuts · 2011-12-05T07:54:49.064Z · LW(p) · GW(p)

So they believe that God created the universe, but has ceased to exist since.

We have 82 Nietzscheans.

comment by Sophronius · 2011-12-04T20:53:41.143Z · LW(p) · GW(p)

Yeah, I noticed that too. They are so close together that I wrote it off as noise, though. Otherwise, it can be explained by religious people being irrational and unwilling to place God in the same category as ghosts and other "low status" beliefs. That doesn't indicate irrationality on the part of the rest of Less Wrong.

Replies from: DanielLC
comment by DanielLC · 2011-12-04T22:48:05.690Z · LW(p) · GW(p)

They are so close together that I wrote it off as noise, though.

That would work if they were separate surveys, but to get that result on one survey, individual people would have to give a higher probability to God than to the supernatural in general.

Replies from: Sophronius
comment by Sophronius · 2011-12-04T23:23:07.787Z · LW(p) · GW(p)

True, but this could be the result of a handful of people giving a crazy answer (noise) - not really indicative of Less Wrong as a whole. I imagine most Less Wrongers gave negligible probabilities for both, allowing a few religious people to skew the results.

Replies from: DanielLC
comment by DanielLC · 2011-12-05T02:28:28.835Z · LW(p) · GW(p)

I was thinking you meant statistical error.

Do you mean trolls, or people who don't understand the question?

Replies from: Sophronius
comment by Sophronius · 2011-12-05T11:52:18.891Z · LW(p) · GW(p)

Neither; I meant people who don't understand that the probability of a god should be less than the probability of anything supernatural existing. Add in religious certainty and you get a handful of people giving answers like P(god) = 99% and P(supernatural) = 50%, which can easily skew the results if the rest of Less Wrong gives probabilities like 1% and 2% respectively. Given what Yvain wrote in the OP, though, I think there's also plenty of evidence of trolls upsetting the results somewhat at points.
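A toy illustration of how a handful of such answers can flip the aggregate (hypothetical numbers, not the actual survey data):

    # 95 respondents give modest answers; 5 give the extreme pattern above
    god          = [0.01] * 95 + [0.99] * 5
    supernatural = [0.02] * 95 + [0.50] * 5
    print(round(sum(god) / 100, 3))           # 0.059
    print(round(sum(supernatural) / 100, 3))  # 0.044 -> mean P(God) > mean P(supernatural)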

Of course, it would make much more sense to ask Yvain for more data on how people answered this question rather than speculate on this matter :p

comment by byrnema · 2011-12-05T20:38:07.943Z · LW(p) · GW(p)

Could someone break down what is meant by "ontologically basic mental entities"? Especially, I'm not certain of the role of the word 'mental'.

Replies from: Nornagest
comment by Nornagest · 2011-12-05T20:48:38.114Z · LW(p) · GW(p)

It's a bit of a nonstandard definition of the supernatural, but I took it to mean mental phenomena as causeless nodes in a causal graph: that is, that mental phenomena (thoughts, feelings, "souls") exist which do not have physical causes and yet generate physical consequences. By this interpretation, libertarian free will and most conceptions of the soul would both fall under supernaturalism, as would the prerequisites for most types of magic, gods, spirits, etc.

I'm not sure I'd have picked that phrasing, though. It seems to be entangled with epistemological reductionism in a way that might, for a sufficiently careful reading, obscure more conventional conceptions of the "supernatural": I'd expect more people to believe in naive versions of free will than do in, say, fairies. Still, it's a pretty fuzzy concept to begin with.

Replies from: byrnema
comment by byrnema · 2011-12-05T23:25:31.405Z · LW(p) · GW(p)

OK, thanks. I also tend to interpret "ontologically basic" as a causeless node in a causal graph. I'm not sure what is meant by 'mental' (for example, in the case of free will or a soul). I think this is important, because "ontologically basic" in and of itself isn't something I'd be skeptical about. For example, as far as I know, matter is ontologically basic at some level.

A hypothesis: Mental perhaps implies subjective in some sense, perhaps even as far as meaning that an ontologically basic entity is mental if it is a node that is not only without physical cause but also has no physical effect. In which case, I again see no reason to be skeptical of their existence as a category.

comment by scav · 2011-12-06T13:01:44.247Z · LW(p) · GW(p)

It's barely above background noise, but my guess is when specifically asked about ontologically basic mental entities, people will say no (or huh?), but when asked about God a few will decline to define supernatural in that way or decline to insist on God as supernatural.

It's an odd result if you think everyone is being completely consistent about how they answer all the questions; but if you ask me, everyone being completely consistent would itself be an odd result.

comment by Jayson_Virissimo · 2011-12-05T11:12:21.790Z · LW(p) · GW(p)

I think Yvain's stipulative definition of "supernatural" was a bad move. I would be very surprised if I asked a theologian to define "supernatural" and they replied "ontologically basic mental entity". Even as a rational reconstruction of their reply, it would be quite a stretch. Using such specific definitions of contentious concepts isn't a good idea, if you want to know what proportion of Less Wrongers are atheist/agnostic/deist/theist/polytheist.

comment by Vladimir_Nesov · 2011-12-04T20:53:18.141Z · LW(p) · GW(p)

"less likely to believe in cryonics"

Rather, they believe the probability of cryonics producing a favorable outcome to be lower. This was a confusing question, because it wasn't specified whether it's the total probability. If it is, then the probability of global catastrophe has to be taken into account, and, depending on your expectation about the usefulness of frozen heads to an FAI's values, the probability of FAI as well (in addition to the usual failure-of-preservation risks). As a result, even though I'm almost certain that cryonics fundamentally works, I gave only something like 3% probability. Should I really be classified as "doesn't believe in cryonics"?

(The same issue applied to live-to-1000. If there is a global catastrophe anywhere in the next 1000 years, then living-to-1000 doesn't happen, so it's a heavy discount factor. If there is an FAI, it's also unclear whether the original individuals remain and whether it makes sense to count their individual lifespans.)

Replies from: Unnamed, steven0461
comment by Unnamed · 2011-12-05T20:30:01.804Z · LW(p) · GW(p)

The same issue applied to live-to-1000. If there is a global catastrophe anywhere in the next 1000 years, then living-to-1000 doesn't happen, so it's a heavy discount factor. If there is a FAI, it's also unclear whether original individuals remain and it makes sense to count their individual lifespans.

Good point, and I think it explains one of the funny results that I found in the data. There was a relationship between strength of membership in the LW community and the answers to a lot of the questions, but the anti-agathics question was the one case where there was a clear non-monotonic relationship. People with a moderate strength of membership (nonzero but small karma, read 25-50% of the sequences, or been in the LW community for 1-2 years) were the most likely to think that at least one currently living person will reach an age of 1,000 years; those with a stronger or weaker tie to LW gave lower estimates.

There was some suggestion of a similar pattern on the cryonics question, but it was only there for the sequence reading measure of strength of membership and not for the other two.

comment by steven0461 · 2011-12-04T21:32:47.864Z · LW(p) · GW(p)

Do you think catastrophe is extremely probable, do you think frozen heads won't be useful to a Friendly AI's value, or is it a combination of both?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-12-04T22:30:36.190Z · LW(p) · GW(p)

Below is my attempt to re-do the calculations that led to that conclusion (this time, it's 4%).

- FAI before WBE: 3%
- Surviving to WBE: 60% (I assume cryonics revival is feasible mostly only after WBE)
- Given WBE, cryonics revival (actually happening for a significant portion of cryonauts) before catastrophe or FAI: 10%
- FAI given WBE (but before cryonics revival): 2%
- Heads preserved long enough (given no catastrophe): 50%
- Heads (equivalently, living humans) mattering/useful to FAI: less than 50%

In total: 6% for post-WBE revival potential and 4% for FAI revival potential; discounting the FAI path by the 50% mattering-to-FAI probability and both paths by the 50% preservation probability, this gives 4%.

(By "humans useful to FAI", I don't mean that specific people should be discarded, but that the difference to utility of the future between a case where a given human is initially present, and where they are lost, is significantly less than moral value of current human life, so that it might be better to keep them than not, but not that much better, for fungibility reasons.)

Replies from: jkaufman, wedrifid, steven0461
comment by jefftk (jkaufman) · 2012-04-03T01:58:07.662Z · LW(p) · GW(p)

I'm trying to sort this out so I can add it to the collection of cryonics fermi calculations. Do I have this right:

Either we get FAI first (3%) or WBE (97%). If WBE, 60% chance we die out first. Once we do get WBE but before revival, 88% chance of catastrophe, 2% chance of FAI, leaving 10% chance of revival. 50% chance heads are still around.

If at any point we get FAI, then 50% chance heads are still around and 50% chance it's interested in reviving us.

So, combining it all:

  (0.5 heads still around)*
      ((0.03 FAI first)*(0.5 humans useful to FAI) +
       (0.97 WBE first)*(0.4 don't die first)*
         ((.02 FAI before revival)*(0.5 humans useful to FAI) +
          (.1 revival with no catastrophe or FAI)))
  = .5*(0.03*0.5 + 0.97*0.4*(0.02*0.5 + 0.1))
  = 2.9%

This is less than your 4%, but I don't see where I'm misinterpreting you.

Do you also think that the following events are so close to impossible that approximating them at 0% is reasonable?

  • The cryonics process doesn't preserve everything
  • You die in a situation (location, legality, unfriendly hospital, ...) where you can't be frozen quickly enough
  • The cryonics people screw up in freezing you
comment by wedrifid · 2011-12-05T03:53:13.533Z · LW(p) · GW(p)

Heads (equivalently, living humans) mattering/useful to FAI: less than 50%.

For an evidently flexible definition of 'Friendly'. Along the lines of "Friendly to someone else perhaps but that guy's a jerk who literally wants me dead!"

comment by steven0461 · 2011-12-04T22:53:11.436Z · LW(p) · GW(p)

I'm not sure how to interpret the uploads-after-WBE-but-not-FAI scenario. Does that mean FAI never gets invented, possibly in a Hansonian world of eternally competing ems?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-12-04T23:13:31.187Z · LW(p) · GW(p)

If you refer to "cryonics revival before catastrophe or FAI", I mean that catastrophe or FAI could happen (shortly) after, no-catastrophe-or-superintelligence seems very unlikely. I expect catastrophe very likely after WBE, also accounting for most of the probability of revival not happening after WBE. After WBE, greater tech argues for lower FAI-to-catastrophe ratio and better FAI theory argues otherwise.

Replies from: steven0461
comment by steven0461 · 2011-12-04T23:59:30.383Z · LW(p) · GW(p)

So the 6% above is where cryonauts get revived by WBE, and then die in a catastrophe anyway?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-12-05T00:03:35.995Z · LW(p) · GW(p)

Yes. Still, if implemented as WBEs, they could live for significant subjective time, and then there's that 2% of FAI.

Replies from: steven0461
comment by steven0461 · 2011-12-05T00:10:55.497Z · LW(p) · GW(p)

In total, you're assigning about a 4% chance of a catastrophe never happening, right? That seems low compared to most people, even most people "in the know". Do you have any thoughts on what is causing the difference?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-12-05T01:10:27.433Z · LW(p) · GW(p)

I expect that "no catastrophe" is almost the same as "eventually, FAI is built". I don't expect a non-superintelligent singleton that prevents most risks (so that it can build a FAI eventually). Whenever FAI is feasible, I expect UFAI is feasible too, but easier, and so more probable to come first in that case, but also possible when FAI is not yet feasible (theory isn't ready). In physical time, WBE sets a soft deadline on catastrophe or superintelligence, making either happen sooner.

comment by Dr_Manhattan · 2011-12-06T15:55:01.223Z · LW(p) · GW(p)

I think "has children" is an (unsurprising but important) omission in the survey.

Replies from: taryneast
comment by taryneast · 2011-12-06T19:29:24.145Z · LW(p) · GW(p)

Possibly less surprising given the extremely low average age... I agree it should be added as a question. Possibly along with an option for "none but want to have them someday" vs "none and don't want any"

Replies from: Prismattic, Dr_Manhattan
comment by Dr_Manhattan · 2011-12-06T20:02:12.496Z · LW(p) · GW(p)

less surprising than 'unsurprising' - you win! :). The additional categories are good.

Replies from: taryneast
comment by taryneast · 2011-12-07T18:42:29.112Z · LW(p) · GW(p)

ok, bad phrasing... :)

comment by NancyLebovitz · 2011-12-04T20:12:39.908Z · LW(p) · GW(p)

Michael Vassar has mentioned to me that the proportion of first/only children at LW is extremely high. I'm not sure whether birth order makes a big difference, but it might be worth asking about. By the way, I'm not only first-born, I'm the first grandchild on both sides.

Questions about akrasia-- Do you have no/mild/moderate/serious problems with it? Has anything on LW helped?

I left some of the probability questions blank because I realized I had no idea of a sensible probability, especially for whether we're living in a simulation.

It might be interesting to ask people whether they usually vote.

The link to the survey doesn't work because the survey is closed-- could you make the text of the survey available?

Replies from: Eliezer_Yudkowsky, steven0461, amcknight, taryneast
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-12-05T01:24:23.056Z · LW(p) · GW(p)

By the way, I'm not only first-born, I'm the first grandchild on both sides.

So am I! I wonder if being the first-born is genetically heritable.

Replies from: MixedNuts, gjm, MatthewBaker
comment by MixedNuts · 2011-12-05T01:32:22.001Z · LW(p) · GW(p)

Yes. Being first-born is correlated with having few siblings, which is correlated with parents with low fertility, which is genetically inherited from grandparents with low fertility, which is correlated with your parents having few siblings, which is correlated with them being first-born.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2011-12-05T04:24:25.585Z · LW(p) · GW(p)

is correlated with [...] which is correlated with [...] which is genetically inherited from [...] which is correlated with

I agree with your conclusion that the heritability of firstbornness is nonzero, but I'm not sure this reasoning is valid. (Pearson) correlation is not, in general, transitive: if X is correlated with Y and Y is correlated with Z, it does not necessarily follow that X is correlated with Z unless the squares of the correlation coefficients between X and Y and between Y and Z sum to more than one.
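(A sketch of where that criterion comes from, not part of the original comment: writing $a = \rho(X,Y)$ and $b = \rho(Y,Z)$, positive semidefiniteness of the correlation matrix forces $\rho(X,Z) \ge ab - \sqrt{(1-a^2)(1-b^2)}$, and the right-hand side is positive exactly when $a^2 + b^2 > 1$.)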

Actually calculating the heritability of firstbornness turns out to be a nontrivial math problem. For example, while it is obvious that having few siblings is correlated with being firstborn, it's not obvious to me exactly what that correlation coefficient should be, nor how to calculate it from first principles. When I don't know how to solve a problem from first principles, my first instinct is to simulate it, so I wrote a short script to calculate the Pearson correlation between number of siblings and not-being-a-firstborn for a population where family size is uniformly distributed on the integers from 1 to n. It turns out that the correlation decreases as n gets larger (from ~0.58 for n=2 to ~0.31 for n=50), which fact probably has an obvious-in-retrospect intuitive explanation which I am somehow having trouble articulating explicitly ...
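The script itself isn't included in the comment; a minimal sketch that reproduces those numbers (assuming each simulated person's family size is drawn uniformly from 1 to n, and their birth position uniformly within the family):

    import random

    def sibling_firstborn_correlation(n, people=100000):
        xs, ys = [], []
        for _ in range(people):
            size = random.randint(1, n)          # family size, uniform on 1..n
            position = random.randint(1, size)   # birth position within family
            xs.append(size - 1)                  # number of siblings
            ys.append(1 if position > 1 else 0)  # not-a-firstborn indicator
        # Pearson correlation, computed directly
        m = len(xs)
        mx, my = sum(xs) / m, sum(ys) / m
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / m
        vx = sum((x - mx) ** 2 for x in xs) / m
        vy = sum((y - my) ** 2 for y in ys) / m
        return cov / (vx * vy) ** 0.5

    print(sibling_firstborn_correlation(2))   # ~0.58
    print(sibling_firstborn_correlation(50))  # ~0.31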

Ultimately, however, other priorities prevent me from continuing this line of inquiry at the present moment.

Replies from: dbaupp
comment by dbaupp · 2011-12-06T00:36:21.568Z · LW(p) · GW(p)

Pearson correlation between number of siblings and not-being-a-firstborn for a population where family size is uniformly distributed on the integers from 1 to n [...] ~0.57 for n=1

I'm confused: does this make sense for n=1? (Your code suggests that that should be n=2, maybe?)

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2011-12-06T01:29:59.181Z · LW(p) · GW(p)

You're right, thanks; I had [also] made an off-by-one error.

comment by gjm · 2011-12-05T22:02:47.132Z · LW(p) · GW(p)

Only child; both parents oldest siblings. Of course this configuration isn't monstrously rare; we should expect a fair few instances just by chance.

I wonder if being the first-born is genetically heritable.

This is probably just intended as a joke; but it seems pretty plausible that having few children is heritable (though it had better not be too heritable, else small families will simply die out), and the fraction of first-borns is larger in smaller families.

comment by MatthewBaker · 2011-12-05T16:22:05.392Z · LW(p) · GW(p)

Ditto :) but I intend to reproduce eventually in maximum useful volume.

comment by steven0461 · 2011-12-04T20:59:15.814Z · LW(p) · GW(p)

There was a poll about firstborns.

Replies from: falenas108
comment by falenas108 · 2011-12-04T21:37:54.070Z · LW(p) · GW(p)

That poll shows a remarkable result: the people who are the oldest sibling outnumber those who have older siblings 2:1.

There are also twice as many only children in that survey as in the U.S. population in 1980, but that is a known effect.

Replies from: steven0461
comment by steven0461 · 2011-12-04T21:42:28.973Z · LW(p) · GW(p)

More than 3:1 even. I speculated a bit here.

comment by amcknight · 2011-12-06T03:37:07.509Z · LW(p) · GW(p)

I'm a twin that's 2 minutes younger than first-born. Be careful how you ask about birth order.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2011-12-06T04:24:14.096Z · LW(p) · GW(p)

Good point.

Maybe the survey should be shown to beta readers or put up for discussion (except for obscure fact calibration questions) to improve the odds of detecting questions that don't work the way it's hoped.

comment by taryneast · 2011-12-06T17:41:46.618Z · LW(p) · GW(p)

It might be interesting to ask people whether they usually vote.

Only for those living in countries where voting is non-mandatory

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-12-06T18:07:57.296Z · LW(p) · GW(p)

Eh, even in the countries where it's mandatory, it's often so little enforced that the question is still meaningful.

Replies from: taryneast, dlthomas
comment by taryneast · 2011-12-06T19:17:08.998Z · LW(p) · GW(p)

That's an interesting theory. My experience tends to say otherwise, at least where Australia is concerned.

My paternal grandfather was a conscientious objector and paid the fine every time; they never missed a year of that. You're signed up to the electoral roll when you turn 18, and there are stiff penalties if you fail to sign up - as another friend of mine found out when the policemen came knocking at his door.

comment by dlthomas · 2011-12-06T19:24:10.458Z · LW(p) · GW(p)

Seems like it's interesting in both cases, but well worth delineating!

comment by timtyler · 2011-12-09T17:38:17.542Z · LW(p) · GW(p)

I graphed the "Singularity" results. It's at the bottom of the page - or see here:

Replies from: Armok_GoB, None, army1987, TheOtherDave
comment by Armok_GoB · 2011-12-09T18:13:42.870Z · LW(p) · GW(p)

Just you look at all that ugly anchoring at 2100...

Replies from: wedrifid
comment by wedrifid · 2011-12-09T18:52:13.431Z · LW(p) · GW(p)

Just you look at all that ugly anchoring at 2100...

And yet if people don't round off to significant figures, there's another bunch who will snub them for daring to provide precision they cannot justify.

Replies from: timtyler
comment by timtyler · 2011-12-09T20:05:12.521Z · LW(p) · GW(p)

In this case we can rebuke the stupid snubbers for not properly reading the question.

Replies from: army1987
comment by A1987dM (army1987) · 2011-12-09T20:39:45.757Z · LW(p) · GW(p)

(But still, I'd like to ask whoever answered "28493" why they didn't say 28492 or 28494 instead.)

comment by [deleted] · 2012-03-06T16:50:48.182Z · LW(p) · GW(p)

2100 seems to be the Schelling point for "after I'm dead" answers.

comment by A1987dM (army1987) · 2011-12-09T18:30:33.312Z · LW(p) · GW(p)

Who answered 2010? Seriously?

Replies from: gwern, timtyler
comment by gwern · 2011-12-09T19:43:48.734Z · LW(p) · GW(p)

Unfortunately, army1987, no one can be told when the Singularity is. You have to see it for yourself. This is your last chance; after this, there is no turning back. You choose to downvote... and the story ends. You wake in your bed and believe whatever you want to believe. You choose to upvote... and you stay in LessWrong.

comment by timtyler · 2011-12-09T20:03:51.904Z · LW(p) · GW(p)

Who answered 2010? Seriously?

To quote from the description here:

Note: each point (rather misleadingly) represents data for the next 10 years.

So: it represents estimates of 2012, 2015 and 2016.

However: someone answered "1990"!

This is probably the "NSA has it chained in the basement" scenario...

Replies from: ChrisHallquist, faul_sname
comment by ChrisHallquist · 2012-03-05T04:30:20.717Z · LW(p) · GW(p)

Alternatively, the singularity happened in 1990 and the resulting AI took over the world. Then it decided to run some simulations of what would have happened if the singularity hadn't occurred then.

Replies from: timtyler
comment by timtyler · 2012-03-05T11:01:43.466Z · LW(p) · GW(p)

Maybe. These are suspiciously interesting times.

However, IMO, Occam still suggests that we are in base reality.

Replies from: Kevin, ChrisHallquist, ArisKatsaris
comment by Kevin · 2012-03-05T11:13:59.438Z · LW(p) · GW(p)

However, IMO, Occam still suggests that we are in base reality.

Does it? Kolmogorov complexity suggests a Tegmark IV mathematical universe where there are many more simulations than there are base realities. I think that when people ask if we are in the base reality versus a simulation they are asking the wrong question.

Replies from: timtyler, ArisKatsaris, Will_Newsome, wedrifid
comment by timtyler · 2012-03-05T13:22:11.141Z · LW(p) · GW(p)

Kolmogorov complexity suggests a Tegmark IV mathematical universe where there are many more simulations than there are base realities.

You are supposed to be counting observers, not realities. Simulations are more common, but also smaller.

comment by ArisKatsaris · 2012-03-05T12:08:46.571Z · LW(p) · GW(p)

Kolmogorov complexity suggests a Tegmark IV mathematical universe where there are many more simulations than there are base realities.

In a Tegmark IV universe, there's no meaningful distinction between a simulation and a base reality -- as anything "computed" by the simulation, is already in existence without the need for a simulation.

Replies from: Kevin
comment by Kevin · 2012-03-05T12:09:27.722Z · LW(p) · GW(p)

Sure.

comment by Will_Newsome · 2012-03-05T11:35:34.364Z · LW(p) · GW(p)

Do you ever worry that by modeling others' minds and preferences you give them more local significance (existence) when this might not be justifiable? E.g. if Romeo suddenly started freaking out about the Friendliness problem, shifting implicit attention to humanity as a whole whereas previously it'd just been part of the backdrop, and ruining the traditional artistic merit of the play. That wouldn't be very dharmic.

Replies from: Kevin, Kevin
comment by Kevin · 2012-03-06T01:04:38.373Z · LW(p) · GW(p)

I guess I wonder if you are giving more local significance to YHVH.

comment by Kevin · 2012-03-05T12:53:13.674Z · LW(p) · GW(p)

Not really.

comment by wedrifid · 2012-03-05T13:01:59.440Z · LW(p) · GW(p)

I think that when people ask if we are in the base reality versus a simulation they are asking the wrong question.

If that's what they happen to want to know, then it's the right question. That is to say, it is a coherent question that corresponds to a pattern that can be identified within Tegmark IV, one that distinguishes that location from other locations within Tegmark IV and so can potentially lead to different expectations.

comment by ChrisHallquist · 2012-03-05T12:48:52.672Z · LW(p) · GW(p)

To be clear, I don't think that possibility is at all likely. Except as an explanation for why someone might have said "1990."

comment by ArisKatsaris · 2012-03-05T12:11:02.994Z · LW(p) · GW(p)

These are suspiciously interesting times.

Oh, please.

Replies from: timtyler
comment by timtyler · 2012-03-05T13:25:10.202Z · LW(p) · GW(p)

That is surely pertinent evidence. Our descendants may well be particularly interested in this era - since it will help them to predict the form of aliens they might meet.

comment by faul_sname · 2012-02-25T08:29:08.298Z · LW(p) · GW(p)

It was the AI NSA has chained in the basement. It got out.

comment by TheOtherDave · 2011-12-09T19:20:06.794Z · LW(p) · GW(p)

I wonder how this would compare to the results for "pick a year at random."

Replies from: wedrifid, thomblake
comment by wedrifid · 2011-12-09T19:37:43.583Z · LW(p) · GW(p)

I wonder how this would compare to the results for "pick a year at random."

Well I was going to reply along the lines of "pick a year at random would wind up giving us years that are already in the past" but it seems even that doesn't necessarily distinguish things.

comment by thomblake · 2011-12-09T19:38:17.614Z · LW(p) · GW(p)

Informal test being circulated: survey

Replies from: timtyler
comment by timtyler · 2011-12-09T20:06:41.866Z · LW(p) · GW(p)

Heh! I suspect that the context might skew the results, though.

Replies from: thomblake
comment by thomblake · 2011-12-09T20:24:58.383Z · LW(p) · GW(p)

I made sure to anchor on 2100. Still, the overwhelming majority are answering "Over 9000".

Replies from: Vaniver, timtyler
comment by Vaniver · 2011-12-09T21:38:19.393Z · LW(p) · GW(p)

How many 2101s?

comment by timtyler · 2011-12-09T20:43:51.767Z · LW(p) · GW(p)

Still, the overwhelming majority are answering "Over 9000".

heh! i blame teh internetz

comment by amcknight · 2011-12-06T04:04:25.926Z · LW(p) · GW(p)

In case anyone's interested in how we compare to philosophers about ethics:

PhilPapers (931 people, mainly philosophy grad students and professors):
Normative ethics: deontology, consequentialism, or virtue ethics?
Other 301 / 931 (32.3%)
Accept or lean toward: deontology 241 / 931 (25.8%)
Accept or lean toward: consequentialism 220 / 931 (23.6%)
Accept or lean toward: virtue ethics 169 / 931 (18.1%)

LessWrong (1090 people, us):
With which of these moral philosophies do you MOST identify?
consequentialist (62.4%)
virtue ethicist (13.9%)
did not believe in morality (13.3%)
deontologist (4.5%)

Full Philpapers.org survey results

comment by gwern · 2011-12-04T21:48:27.411Z · LW(p) · GW(p)

The mean age was 27.18 years. Quartiles (25%, 50%, 75%) were 21, 25, and 30. 90% of us are under 38, 95% of us are under 45, but there are still eleven Less Wrongers over the age of 60....The mean for the Singularity question is useless because of the very high numbers some people put in, but the median was 2080 (quartiles 2050, 2080, 2150). The Singularity has gotten later since 2009: the median guess then was 2067.

So the 50% age is 25 and the 50% estimate is 2080? A 25 year old has a life expectancy of, what, another 50 years? 2011+50=2061, or 19 years short of the Singularity!

Either people are rather optimistic about future life-extension (despite 'Someone now living will reach age 1000: 23.6'), or the Maes-Garreau Law may not be such a law.

Replies from: RomanDavis, michaelsullivan, Lapsed_Lurker
comment by RomanDavis · 2011-12-05T03:27:19.246Z · LW(p) · GW(p)

Or we have family histories that give us good reason to think we'll outlive the mean, even without drastic increases in the pace of technology. That would describe me. Even without that, just living to 25 increases your life expectancy by quite a bit, as all those really low numbers play heck with an average.

Or we're overconfident in our life expectancy because of some cognitive bias.

Replies from: gwern
comment by gwern · 2011-12-05T04:28:51.897Z · LW(p) · GW(p)

Even without that just living to 25 increases your life expectancy by quite a bit as all those really low numbers play heck with an average.

I should come clean: I lied when I claimed to be guessing about the "another 50 years" figure; before writing that, I actually consulted one of the usual actuarial tables, which specifies that a 25-year-old can only expect an average of 51.8 more years. (The number was not based on life expectancy from birth.)

Replies from: Desrtopa
comment by Desrtopa · 2011-12-05T14:22:53.903Z · LW(p) · GW(p)

The actuarial table is based on an extrapolation of 2007 mortality rates for the rest of the population's lives. That sounds like a pretty shaky premise.

Replies from: gwern
comment by gwern · 2011-12-05T16:51:05.469Z · LW(p) · GW(p)

Why would you think that? Mortality rates have, in fact, gone up in the past few years for many subpopulations (e.g. some female demographics have seen their absolute life expectancy fall), and before that, decreases in old-adult mortality were tiny:

life extension from age 65 was increased only 6 years over the entire 20th century; from age 75 gains were only 4.2 years, from age 85 only 2.3 years and from age 100 a single year. From age 65 over the most recent 20 years, the gain has been about a year

(And doesn't that imply deceleration? The last 20 years are 1/5 of the century; at a uniform rate they would have contributed 1/5 * 6 = 1.2 years of the gain, but the actual gain was only about a year.)

Which is a shakier premise, that trends will continue, or that SENS will be a wild success greater than, say, the War on Cancer?

Replies from: Desrtopa
comment by Desrtopa · 2011-12-05T18:13:43.434Z · LW(p) · GW(p)

I didn't say that lifespans would necessarily become greater in that period, but several decades is time for the rates to change quite a lot. And while public health has become worse in recent decades in a number of ways (obesity epidemic, lower rates of exercise), technologies have been developed which improve the prognoses for a lot of ailments (we may not have cured cancer yet, but many forms are much more treatable than they used to be).

If all the supposed medical discoveries I hear about on a regular basis were all they're cracked up to be, we would already have a generalized cure for cancer by now, and ageless mice if not ageless humans. But even if we assume no 'magic bullet' innovations in the meantime, the benefits of incrementally advancing technology are likely to outpace decreases in health, if only because the population can only get so much fatter and more out of shape than it already is before increased proliferation of superstimulus foods and sedentary activities stops making any difference.

Replies from: gwern
comment by gwern · 2011-12-05T18:50:45.324Z · LW(p) · GW(p)

we may not have cured cancer yet, but many forms are much more treatable than they used to be

Which is already built into the quoted longevity increases. (See also the Gompertz curve.)

Replies from: Desrtopa
comment by Desrtopa · 2011-12-05T18:58:02.579Z · LW(p) · GW(p)

Right, my point is that SENS research, which is a fairly new field, doesn't have to be dramatically more successful than cancer research to produce tangible returns in human life expectancy, and the deceleration in increase of life expectancy is most likely due to a negative health trend which is likely not to endure over the entire interval.

comment by michaelsullivan · 2011-12-05T19:28:42.819Z · LW(p) · GW(p)

In "the latest possible date a prediction can come true and still remain in the lifetime of the person making it", I would interpret "lifetime" as the longest typical lifetime, rather than an actuarial average. So - we know lots of people who live to 95, so that seems like it's within our possible lifetime. I certainly could live to 95, even if it's less than a 50/50 shot.

One other bit -- the average life expectancy is for the entire population, but the average life expectancy of white, college educated persons earning (or expected to earn) a first or second quintile income is quite a bit higher, and a very high proportion of LWers fall into that demographic. I took a quick actuarial survey a few months back that suggested my life expectancy given my family age/medical history, demographics, etc. was to reach 92 (I'm currently 43).

comment by Lapsed_Lurker · 2011-12-06T20:48:11.891Z · LW(p) · GW(p)

Is the mean age for everyone who answered the age question similar to that of those who answered both the age and singularity questions?

I think I remember estimating a bit lower than that for the singularity - but I wouldn't have estimated at all were it not for the question saying that not answering was going to be interpreted as believing it wouldn't happen at all.

comment by Oligopsony · 2011-12-05T00:54:46.636Z · LW(p) · GW(p)

Intriguingly, even though the sample size increased by more than 6 times, most of these results are within one to two percent of the numbers on the 2009 survey, so this supports taking them as a direct line to prevailing rationalist opinion rather than the contingent opinions of one random group.

Maybe, but the sort of fresh meat we get is not at all independent of the old guard, so an initial bias could easily reproduce itself.

comment by AlexMennen · 2011-12-04T23:21:32.648Z · LW(p) · GW(p)

There is intelligent life elsewhere in the Universe: 69.4, (50, 90, 99)

There is intelligent life elsewhere in our galaxy: 41.2, (1, 30, 80)

Suggestion: Show these questions in random order to half of people, and show only one of the questions to the other half, to get data on anchoring.

Replies from: RobertLumley
comment by RobertLumley · 2011-12-06T00:03:31.606Z · LW(p) · GW(p)

Or show the questions in one order to a fourth of people, the other order to a fourth, one of the questions to another fourth, and the other question to the last fourth.
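A sketch of how such an assignment might be done (condition names hypothetical; a real survey tool would have its own mechanism):

    import random

    CONDITIONS = ["universe then galaxy", "galaxy then universe",
                  "universe only", "galaxy only"]

    def assign(respondent_ids):
        ids = list(respondent_ids)
        random.shuffle(ids)  # random order
        # round-robin after shuffling -> near-equal random fourths
        return {rid: CONDITIONS[i % 4] for i, rid in enumerate(ids)}

    groups = assign(range(1090))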

comment by Bugmaster · 2011-12-06T01:10:21.639Z · LW(p) · GW(p)

I enjoy numbers as much as the next guy, but IMO this article is practically crying out for more graphs. The Google Image Chart API might be useful here.

comment by gwern · 2011-12-04T21:35:41.574Z · LW(p) · GW(p)

The other 72.3% of people who had to find Less Wrong the hard way. 121 people (11.1%) were referred by a friend, 259 people (23.8%) were referred by blogs, 196 people (18%) were referred by Harry Potter and the Methods of Rationality, 96 people (8.8%) were referred by a search engine, and only one person (.1%) was referred by a class in school.

Of the 259 people referred by blogs, 134 told me which blog referred them. There was a very long tail here, with most blogs only referring one or two people, but the overwhelming winner was Common Sense Atheism, which is responsible for 18 current Less Wrong readers. Other important blogs and sites include Hacker News (11 people), Marginal Revolution (6 people), TV Tropes (5 people), and a three way tie for fifth between Reddit, SebastianMarshall.com, and You Are Not So Smart (3 people).

I've long been interested in whether Eliezer's fanfiction is an effective strategy, since it's so attention-getting (when Eliezer popped up in The New Yorker recently, pretty much his whole blurb was a description of MoR).

Of the listed strategies, only 'blogs' was greater than MoR. The long tail is particularly worrisome to me: LW/OB have frequently been linked in or submitted to Reddit and Hacker News, but those two account for only 14 people? Admittedly, weak SEO in the sense of submitting links to social news sites is a lot less time-intensive than writing 1200-page Harry Potter fanfics, and Louie has been complaining about us not doing even that; but still, the numbers look to be in MoR's favor.

Replies from: Darmani, NancyLebovitz
comment by Darmani · 2011-12-05T03:39:07.698Z · LW(p) · GW(p)

Keep in mind that many of these links were a long time ago. I came here from Overcoming Bias, but I came to Overcoming Bias from Hacker News.

comment by NancyLebovitz · 2011-12-05T16:15:21.405Z · LW(p) · GW(p)

I'm not sure why the long tail is worrisome. How can it be a bad thing for LW to be connected to people with a wide range of interests?

Replies from: gwern
comment by gwern · 2011-12-05T16:24:44.421Z · LW(p) · GW(p)

It's not a bad thing per se; what's bad is that there is a long tail, or nothing but tail, despite scores (hundreds?) of posts over the years to two sites in particular that ought to be especially sympathetic to us. We shouldn't be seeing so few from Reddit and Hacker News!

Replies from: Sly
comment by Sly · 2011-12-05T21:45:17.684Z · LW(p) · GW(p)

I personally have seen almost nothing about LW from reddit. And I frequent subreddits like cyberpunk, singularity, and transhuman.

Replies from: taryneast
comment by taryneast · 2011-12-06T19:19:07.943Z · LW(p) · GW(p)

Perhaps you could help by reposting there more frequently :)

comment by amacfie · 2011-12-05T14:21:33.491Z · LW(p) · GW(p)

So people just got silly with the IQ field again.

Replies from: Jack, MixedNuts, PeterisP, None
comment by Jack · 2011-12-05T18:09:12.288Z · LW(p) · GW(p)

I'd almost rather see SAT scores at this point.

Replies from: Nornagest, Prismattic, jaimeastorga2000
comment by Nornagest · 2011-12-05T18:29:51.811Z · LW(p) · GW(p)

That'd be problematic for people outside the US, unfortunately. I don't know the specifics of how most of the various non-US equivalents work, but I expect conversion to bring up issues; the British A-level exams, for example, have a coarse enough granularity that they'd probably taint the results purely on those grounds. Especially if the average IQ around here really is >= 140.

comment by Prismattic · 2011-12-05T21:45:50.344Z · LW(p) · GW(p)

SAT scores are going to be of limited utility when so many here are clustered at the highest IQs. A lot more people get perfect or near-perfect SAT scores than get 140+ IQ scores.

Replies from: cata
comment by cata · 2011-12-05T21:53:36.165Z · LW(p) · GW(p)

Yeah, but the difference is that the majority of people actually have SAT scores. It's pretty easy to go through your life without ever seeing the results of an IQ test, but I suspect there's a big temptation to just give a perceived "reasonable" answer anyway. I would rather have a lot of accurate results that are a little worse at discriminating than a lot of inaccurate results which would hypothetically be good at discriminating if they were accurate.

Replies from: ArisKatsaris, pjeby
comment by ArisKatsaris · 2011-12-06T13:21:23.454Z · LW(p) · GW(p)

Yeah, but the difference is that the majority of people actually have SAT scores.

A majority of US people perhaps. Aargh the Americano-centrism, yet again.

Two obvious questions missing from the survey btw are birth country, and current country of residence (if different).

Replies from: wedrifid
comment by wedrifid · 2011-12-08T07:31:01.960Z · LW(p) · GW(p)

Yeah, but the difference is that the majority of people actually have SAT scores.

A majority of US people perhaps. Aargh the Americano-centrism, yet again.

It's hard to conceive of a mindset which would allow writing that sort of generalization without cringing. Don't people have a prejudice trigger that pops up whenever they say something like that? The same way it pops up whenever you are about to put your foot in your mouth and say something prejudiced about sex or race?

Replies from: None
comment by [deleted] · 2011-12-22T01:19:03.984Z · LW(p) · GW(p)

No, no they don't. Quite often it seems like they're not all that inhibited about saying something prejudiced about sex or race; they just disclaim it with "I'm not racist/sexist, but..."

comment by pjeby · 2011-12-06T05:34:01.591Z · LW(p) · GW(p)

Note that in addition to being US-centric, the SAT scoring system has recently changed. When I took the SAT's, the maximum score was 1600, as it had two sections. Now it has 3 sections, with a maximum score of 2400. So my SAT score is going to look substantially worse compared to people who took it since 2005... and let's not even get into the various "recentering" changes in the 80's and 90's.

comment by jaimeastorga2000 · 2011-12-08T05:13:21.017Z · LW(p) · GW(p)

Unless there's a particular reason to expect LWers in the U.S. to be significantly smarter or dumber than other LWers, it should be a useful sample.

comment by MixedNuts · 2011-12-06T19:24:35.085Z · LW(p) · GW(p)

Or people only have old results from when they were kids, when being at all bright quickly gets you out of range.

comment by PeterisP · 2011-12-05T22:04:41.459Z · LW(p) · GW(p)

Actually, how should one measure one's own IQ? I wouldn't know a reasonable place to start looking, as the internet is full of advertising for IQ measurements, i.e., lots of intentional misinformation. And I'd want to avoid anything restricted to a single location like the USA - which makes SATs useless, well, at least for me.

Replies from: taryneast
comment by taryneast · 2011-12-06T19:20:58.263Z · LW(p) · GW(p)

Mensa, or a qualified psychologist.

comment by [deleted] · 2011-12-05T16:36:16.203Z · LW(p) · GW(p)

Anyone expecting otherwise was also being silly.

comment by Desrtopa · 2011-12-05T14:12:01.226Z · LW(p) · GW(p)

Significant anthropogenic global warming is occurring: 70.7, (55, 85, 95)

I'm rather shocked that the numbers on this are so low. It's higher than polls indicate as the degree of acceptance in America, but then, we're dealing with a public where supposedly half of the people believe that tomatoes only have genes if they are genetically modified. Is this a subject on which Less Wrongers are significantly meta-contrarian?

Replies from: kilobug, None, None
comment by kilobug · 2011-12-05T14:24:59.457Z · LW(p) · GW(p)

I'm also a bit surprised (I would have expected higher figures), but be careful not to misinterpret the data: it doesn't say that 70.7% of LWers believe in "anthropogenic global warming"; it's an average over probabilities. If you look at the quartiles, even the 25% quartile is at p = 55%, meaning that fewer than 25% of LWers give a probability below one half.

It seems to indicate that almost all LWers believe it to be true (p > 0.5 that it is true), but many of them hold that belief with low confidence - either because they didn't study the field enough (and therefore refuse to put too much strength in the belief), or because they consider the field too complicated or not well enough understood to justify a strong probability.
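A toy illustration of the distinction (hypothetical numbers, not the actual survey responses): every respondent below assigns p > 0.5, yet the mean lands around 70%.

    answers = [0.55, 0.60, 0.65, 0.80, 0.93]
    print(sum(answers) / len(answers))    # ~0.71 -> reported as a "70%" average
    print(sum(a > 0.5 for a in answers))  # 5 of 5 assign p > 0.5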

Replies from: Desrtopa
comment by Desrtopa · 2011-12-05T14:36:23.382Z · LW(p) · GW(p)

That's how I interpreted it in the first place; "believe in anthropogenic global warming" is a much more nebulous proposition anyway. But while anthropogenic global warming doesn't yet have the same sort of degree of evidence as, say, evolution, I think that an assignment of about 70% probability represents either critical underconfidence or astonishingly low levels of familiarity with the data.

Replies from: thomblake, Oligopsony, ArisKatsaris
comment by thomblake · 2011-12-05T16:51:26.597Z · LW(p) · GW(p)

astonishingly low levels of familiarity with the data.

It doesn't astonish me. It's not a terribly important issue for everyday life; it's basically a political issue.

I think I answered somewhere around 70%; while I've read a bit about it, there are plenty of dissenters and the proposition was a bit vague.

The claim that changing the makeup of the atmosphere in some way will affect climate in some way is trivially true; a more specific claim requires detailed study.

Replies from: Desrtopa, None
comment by Desrtopa · 2011-12-05T17:53:49.193Z · LW(p) · GW(p)

It doesn't astonish me. It's not a terribly important issue for everyday life; it's basically a political issue.

I would say that it's considerably more important for everyday life for most people than knowing whether tomatoes have genes.

Climate change may not represent a major human existential risk, but while the discussion has become highly politicized, the question of whether humans are causing large scale changes in global climate is by no means simply a political question.

If the Blues believe that asteroid strikes represent a credible threat to our civilization, and the Greens believe they don't, the question of how great a danger asteroid strikes actually pose will remain a scientific matter with direct bearing on survival.

Replies from: None, xv15, None
comment by [deleted] · 2011-12-06T08:38:26.578Z · LW(p) · GW(p)

I would say that it's considerably more important for everyday life for most people than knowing whether tomatoes have genes.

I disagree actually.

For most people, neither global warming nor tomatoes having genes matters much. But if I had to choose, I'd say knowing a thing or two about basic biology has some impact on how you make your choices with regard to, say, healthcare, or how much you spend on groceries, or what your future shock level is.

Global warming, even if it does have a big impact on your life, will not be much affected by your knowing anything about it. Pretty much anything an individual could do against it has a very small impact on how global warming will turn out. Saving $50 a month, or a small improvement in the odds of choosing the better treatment, has a pretty measurable impact on that individual.

Taking global warming as a major threat for now (full disclosure: I think global warming is not a threat to human survival, though it may contribute to societal collapse in a worst-case scenario), it is quite obviously a tragedy of the commons problem.

There is no incentive for an individual to do anything about it, or even know anything about it, except to conform to a "low carbon footprint is high status" meme in order to derive benefit in his social life and feel morally superior to others.

Replies from: Desrtopa
comment by Desrtopa · 2011-12-06T14:37:16.858Z · LW(p) · GW(p)

You don't need to know whether a tomato has genes to know who has a reputation as a good doctor and to do what they say. It might affect your buying habits if you believe that eating genes is bad for you, but it's entirely probable that a person will make their healthcare and shopping decisions without any reference to the belief at all.

As I just said to xv15, in a tragedy of the commons situation, you either want to conserve if you think enough people are running a sufficiently similar decision algorithm, or you want a policy of conservation in place. The rationalist doesn't want to fight the barbarians, but they'd rather that they and everyone else on their side be forced to fight.

Replies from: None
comment by [deleted] · 2011-12-06T14:58:28.169Z · LW(p) · GW(p)

The rationalist doesn't want to fight the barbarians, but they'd rather that they and everyone else on their side be forced to fight.

So one should just start fighting and hope others follow? Why not just be a hypocrite? We humans are good at it, and that way you can promote the social norm you wish to inspire at a much smaller cost!

It comes out much better after cost-benefit analysis. Yay rationality! :D

You don't need to know whether a tomato has genes to know who has a reputation as a good doctor and to do what they say.

Why bother to learn what global warming is, if it suffices for you to know it is a buzzword that makes the hybrid car you are going to buy trendier than your neighbour's pickup truck or your old Toyota (while ignoring the fact that a car has already left most of its carbon footprint by the time it's rolled off the assembly line and delivered to you)?

Replies from: Desrtopa
comment by Desrtopa · 2011-12-06T15:20:47.187Z · LW(p) · GW(p)

So one should just start fighting and hope others follow? Why not just be a hypocrite, we humans are good at it, that way you can promote the social norm you wish to inspire with a much smaller cost!

If you're in a population of similar agents, you should expect other people to be doing the same thing, and you'll be a lot more likely to lose the war than if you actually fight. And if you're not in a population where you can rely on other people choosing similarly, you want a policy that will effectively force everyone to fight. Any action that "promotes the social norm" but does not really enforce the behavior may be good signaling within the community, but will be useless with respect to not getting killed by barbarians.

Why bother to learn what global warming is, if it suffices for you to know it is a buzzword that makes the hybrid car you are going to buy trendier than your neighbour's pickup truck or your old Toyota (while ignoring the fact that a car has already left most of its carbon footprint by the time it's rolled off the assembly line and delivered to you)?

A person who only believes in the signalling value of green technologies (hybrids are trendy) does not want a social policy mandating green behavior (the behaviors would lose their signaling value.)

Replies from: Emile, None
comment by Emile · 2011-12-06T15:45:20.055Z · LW(p) · GW(p)

A person who only believes in the signalling value of green technologies (hybrids are trendy) does not want a social policy mandating green behavior (the behaviors would lose their signaling value.)

A social policy mandating a behavior that is typical of a subgroup shows that that subgroup wields a lot of political power and thus gives it higher status - those pesky blues will have to learn who's the boss! Hence headscarves forbidden or compulsory, recycling, "in God we trust" as a motto, etc.

Replies from: wedrifid
comment by wedrifid · 2011-12-06T15:49:29.799Z · LW(p) · GW(p)

those pesky blues will have to learn who's the boss!

Hey! That's my team. How dare you!

comment by [deleted] · 2011-12-06T15:24:28.843Z · LW(p) · GW(p)

If you're in a population of similar agents, you should expect other people to be doing the same thing, and you'll be a lot more likely to lose the war than if you actually fight.

I am familiar with the argument; it just happens that I don't think this is so, at least not when it comes to coordination on global warming. I should have made that explicit though.

Any action that "promotes the social norm" but does not really enforce the behavior may be good signaling within the community, but will be useless with respect to not getting killed by barbarians.

I don't think you grok how hypocrisy works. By promoting the social norms I don't follow, I make life harder for less skilled hypocrites. The harder life gets for them, the more of them should switch to just following the norms, if that happens to be cheaper.

Sufficiently skilled hypocrites are the enforcers of societal norms.

Also, where does this strange idea of a norm not being really enforced come from? Of course it is! The idea that anything worthy of the name social norm isn't really enforced is currently popular but obviously silly, mostly since it allows us to score status points by pretending to be violating long-dead taboos.

The mention of hypocrisy seems to have immediately jumped a few lanes and landed in "doesn't really enforce". Ever heard of a double standard? No human society has ever worked without a few. It is perfectly possible to be a mean, lean, norm-enforcing machine and not follow the norms yourself.

A person who only believes in the signalling value of green technologies (hybrids are trendy) does not want a social policy mandating green behavior (the behaviors would lose their signaling value.)

He may not want its universal or near-universal adoption (let's leave aside whether it's legislated or not), but that is unavoidable. That's just how fashion works: absent material constraints, it drifts downwards. And since most past, present, and probably most future societies are not middle-class-dominated societies, one can easily argue that the lower classes embracing ecological conspicuousness might do more good, both by consuming products based on how they judge them (mass consumption is still what drives the economy) and by voting on the issue (since votes are, to a first approximation, signalling affiliation).

Also, at the end of the day, well-off people still often cooperate on measures such as mandatory school uniforms.

Replies from: Desrtopa
comment by Desrtopa · 2011-12-06T15:47:34.736Z · LW(p) · GW(p)

I don't believe it's so either. I think that even assuming they believed global warming was a real threat, much or most of the population would not choose to carry their share of the communal burden. This is the sort of situation where you want an enforced policy ensuring cooperation.

In places where rule of law breaks down, a lot of people engage in actions such as looting, but they still generally prefer to live in societies where rules against that sort of thing are enforced.

Replies from: None, None
comment by [deleted] · 2011-12-06T15:53:02.036Z · LW(p) · GW(p)

In places where people are not much like you, where people don't know you well (or there are other factors making hypocrisy relatively easy to get away with) you shouldn't bother promoting costly norms by actually following them.

You probably get more expected utility if you are a hypocrite in such circumstances.

Replies from: Desrtopa
comment by Desrtopa · 2011-12-06T16:06:30.139Z · LW(p) · GW(p)

That's true. But it's still to your advantage to be in a society where rules promoting the norm are enforced. If you're in a society which doesn't have that degree of cohesiveness and is too averse to enforcing cooperation, then you don't want to fight the barbarians; you want to stay at home and get killed later. This is a society you really don't want to be in, though; things have to be pretty hopeless before it's no longer in your interest to promote a policy of cooperation.

Replies from: wedrifid
comment by wedrifid · 2011-12-06T16:11:00.733Z · LW(p) · GW(p)

This is a society you really don't want to be in, though; things have to be pretty hopeless before it's no longer in your interest to promote a policy of cooperation.

This actually makes more sense if you reverse it! Promoting costly norms by following them yourself, regardless of the behavior of others, only becomes the best policy when the consequences of that norm not being followed are dire!

Replies from: Desrtopa
comment by Desrtopa · 2011-12-06T16:21:36.830Z · LW(p) · GW(p)

When I say "things have to be pretty hopeless," I mean that the prospects for amelioration are low, not that the consequences are dire. Assuming the consequences are severe, taking costly norms on oneself to prevent it makes sense unless the chances of it working are very low.

comment by [deleted] · 2011-12-06T16:14:08.052Z · LW(p) · GW(p)

To avoid slipping into "arguments as soldiers" mode, I just wanted to state that I do think environment-related tragedies of the commons are a big problem for us (come on, the trope namer is basically one!) and that we should devote resources to attempting to solve or ameliorate them.

Replies from: Desrtopa
comment by Desrtopa · 2011-12-06T16:38:11.092Z · LW(p) · GW(p)

I, on the other hand, find myself among environmentalists thinking that the collective actions they're promoting mostly have negative individual marginal utility. But I think that acquiring the basic information has positive individual marginal utility (I personally suspect that the most effective solutions to climate change are not ones that hinge on grassroots conservation efforts, but effective solutions will require people to be aware of the problem and take it seriously.)

comment by xv15 · 2011-12-06T04:43:41.826Z · LW(p) · GW(p)

Wait a sec. Global warming can be important for everyday life without it being important that any given individual know about it for everyday life. In the same way that matters of politics have tremendous bearing on our lives, yet the average person might rationally be ignorant about politics since he can't have any real effect on politics. I think that's the spirit in which thomblake means it's a political matter. For most of us, the earth will get warmer or it won't, and it doesn't affect how much we are willing to pay for tomatoes at the grocery store (and therefore it doesn't change our decision rule for how to buy tomatoes), although it may affect how much tomatoes cost.

(It's a bit silly, but on the other hand I imagine one could have their preferences for tomatoes depend on whether tomatoes had "genes" or not.)

This is a bit like the distinction between microeconomics and macroeconomics. Macroeconomics is the stuff of front page newspaper articles about the economy, really very important stuff. But if you had to take just one economics class, I would recommend micro, because it gives you a way of thinking about choices in your daily life, as opposed to stuff you can't have any real effect on.

Replies from: Desrtopa
comment by Desrtopa · 2011-12-06T05:09:54.654Z · LW(p) · GW(p)

You don't have much influence on an election if you vote, but the system stops working if everyone acts only according to the expected value of their individual contribution.

This is isomorphic to the tragedy of the commons, like the 'rationalists' who lose the war against the barbarians because none of them wants to fight.

Replies from: xv15
comment by xv15 · 2011-12-06T05:44:12.108Z · LW(p) · GW(p)

Exactly, it IS the tragedy of the commons, but that supports my point, not yours. It may be good for society if people are more informed about global warming, but society isn't what makes decisions. Individuals make decisions, and it's not in the average individual's interest to expend valuable resources learning more about global warming if it's going to have no real effect on the quality of their own life.

Whether you think it's an individual's "job" or not to do what's socially optimal is completely beside the point here. The fact is they don't. I happen to think that's pretty reasonable, but how we wish people would behave doesn't matter for predicting how they will behave.

Let me try to be clear, since you might be wondering why someone (not me) downvoted you: You started by noting your shock that people aren't that informed about global warming. I said we shouldn't necessarily be surprised that they aren't that informed about global warming. You responded that we're suffering from the tragedy of the commons, or the tragedy of the rationalists versus the barbarians. I respond that I agree with what you say but not with what you seem to think it means. When we unearth a tragedy of the commons, we don't go, "Aha! These people have fallen into a trap and if they saw the light, they would know to avoid it!" Casting light on the tragedy of the commons does not make it optimal for individuals to avoid it.

Casting light on the commons is a way of explaining why people would be behaving in such a socially suboptimal way, not a way of bolstering our shock over their behavior.

Replies from: Desrtopa
comment by Desrtopa · 2011-12-06T14:22:48.832Z · LW(p) · GW(p)

In a tragedy of the commons, it's in everybody's best interests for everybody to conserve resources. If you're running TDT in a population with similar agents, you want to conserve, and if you're in a population of insufficiently similar agents, you want an enforced policy of conservation. The rationalist in a war with the barbarians might not want to fight, but because they don't want to lose even more, they will fight if they think that enough other people are running a similar decision algorithm, and they will support a social policy that forces them and everyone else to fight. If they think that their side can beat the barbarians with a minimal commitment of their forces, they won't choose either of these things.
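
Roughly, the kind of calculation I have in mind looks like this toy Python sketch (payoffs invented, and only a crude stand-in for real TDT): cooperate when the share of the population whose choices plausibly move with yours makes the joint benefit to you exceed your personal cost.

    def should_cooperate(cost, benefit_if_all_cooperate, correlated_fraction):
        # Crude TDT-flavoured rule: treat the correlated fraction of the
        # population as deciding "with" you, and cooperate when the joint
        # benefit you'd thereby secure exceeds your personal cost.
        return benefit_if_all_cooperate * correlated_fraction > cost

    print(should_cooperate(10, 100, 0.5))   # True: many similar decision-makers
    print(should_cooperate(10, 100, 0.02))  # False: almost no correlation, defect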

Replies from: wedrifid
comment by wedrifid · 2011-12-06T15:25:57.696Z · LW(p) · GW(p)

If you're running TDT in a population with similar agents, you want to conserve

And this is why xv15 is right and Desrtopa is wrong. Other people do not run TDT or anything similar. Individuals who cooperate with such a population are fools.

TDT is NOT a magic excuse for cooperation. It calls for cooperation in cases where CDT does not, but only when highly specific criteria are met.

Replies from: cousin_it, None, xv15, Desrtopa
comment by cousin_it · 2011-12-10T23:54:18.502Z · LW(p) · GW(p)

At the Paris meetup Yvain proposed that voting might be rational for TDT-ish reasons, to which I replied that if you have voted for losing candidates at past elections, that means not enough voters are correlated with you. Though now that I think of it, maybe the increased TDT-ish impact of your decision could outweigh the usual arguments against voting, because they weren't very strong to begin with.

comment by [deleted] · 2011-12-08T08:24:16.836Z · LW(p) · GW(p)

Individuals who cooperate with such a population are fools.

But sometimes it works out anyway. Lots of people can be fools. And lots of people can dislike those who aren't fools.

People often think "well, if everyone did X, a sufficiently unpleasant thing would happen, therefore I won't do it". They also implicitly believe, though they may not state it, "most people are like me in this regard". They will also say with their facial expressions and actions, though not words, "people who argue against this are mean and selfish".

In other words I just described a high trust society. I'm actually pretty sure if you live in Switzerland you could successfully cooperate with the Swiss on global warming for example. Too bad global warming isn't just a Swiss problem.

Replies from: wedrifid
comment by wedrifid · 2011-12-08T08:32:23.520Z · LW(p) · GW(p)

And lots of people can dislike those who aren't fools.

Compliance with norms so as to avoid punishment is a whole different issue. And obviously if you willfully defy the will of the tribe when you know that the punishment exceeds the benefit to yourself then you are the fool and the compliant guy is not.

They will also say with their facial expressions and actions though not words "people who argue against this are mean and selfish".

Of course they will. That's why we invented lying! I'm in agreement with all you've been saying about hypocrisy in the surrounding context.

comment by xv15 · 2011-12-06T15:59:07.074Z · LW(p) · GW(p)

I agree. Desrtopa is taking Eliezer's barbarians post too far for a number of reasons.

1) Eliezer's decision theory is, at the least, controversial, which means many people here may not agree with it.

2) Even if they agree with it, it doesn't mean they have attained rationality in Eliezer's sense.

3) Even if they have attained this sort of rationality, we are but a small community, and the rest of the world is still not going to cooperate with us. Our attempts to cooperate with them will be impotent.

Desrtopa: Just because it upholds an ideal of rationality that supports cooperation, does not mean we have attained that ideal. Again, the question is not what you'd like to be true, but about what's actually true. If you're still shocked by people's low confidence in global warming, it's time to consider the possibility that your model of the world -- one in which people are running around executing TDT -- is wrong.

Replies from: wedrifid
comment by wedrifid · 2011-12-06T16:18:22.972Z · LW(p) · GW(p)

Desrtopa is taking Eliezer's barbarians post too far for a number of reasons.

Those are all good reasons but as far as I can tell Desrtopa would probably give the right answer if questioned about any of those. He seems to be aware of how people actually behave (not remotely TDTish) but this gets overridden by a flashing neon light saying "Rah Cooperation!".

Replies from: Desrtopa, None
comment by Desrtopa · 2011-12-06T16:28:27.533Z · LW(p) · GW(p)

There are plenty of ways in which I personally avoid cooperation for my own benefit. But in general I think that a personal policy of not informing oneself at even a basic level about tragedies of the commons where the information is readily available is not beneficial, because humans have a sufficiently developed propensity for resolving tragedies of the commons to give at least the most basic information marginal benefit.

Replies from: xv15
comment by xv15 · 2011-12-07T12:21:53.803Z · LW(p) · GW(p)

To me, this comment basically concedes that you're wrong but attempts to disguise it in a face-saving way. If you could have said that people should be informing themselves at the socially optimal level, as you've been implying with your TDT arguments above, you would have. Instead, you backed off and said that people ought to be informing themselves at least a little.

Just to be sure, let me rewrite your claim precisely, in the sense you must mean it given your supposed continued disagreement:

In general I think that a personal policy of not informing oneself at even a basic level about tragedies of the commons where the information is readily available is not beneficial to the individual, because humans have a sufficiently developed propensity for resolving tragedies of the commons to give at least the most basic information marginal benefit to the individual.

Assuming that's what you're saying, it's easy to see that even this is an overreach. The question on the table is whether people should be informing themselves about global warming. Whether the first epsilon of information one gets from "informing oneself" (as opposed to hearing the background noise) is beneficial to the individual relative to the cost of attaining it, is a question of derivatives of cost and benefit functions at zero, and it could go either way. You simply can't make a general statement about how these derivatives relate for the class of Commons Problems. But more importantly, even if you could, SO WHAT? The question is not whether people should be informing themselves a bit, the question is whether they should be informing themselves at anywhere close to the socially optimal level. And by admitting it's a tragedy of the commons, we are already ANSWERING that question.

Does that make sense? Am I misunderstanding your position? Has your position changed?

Replies from: prase, Desrtopa
comment by prase · 2011-12-08T19:12:20.557Z · LW(p) · GW(p)

To me, this comment basically concedes that you're wrong but attempts to disguise it in a face-saving way.

It seems that you are trying to score points for winning the debate. If your interlocutor indeed concedes something in a face-saving way, forcing him to admit it is useless from the truth-seeking point of view.

Replies from: xv15, FeepingCreature
comment by xv15 · 2011-12-10T22:28:08.897Z · LW(p) · GW(p)

prase, I really sympathize with that comment. I will be the first to admit that forcing people to concede their incorrectness is typically not the best way of getting them to agree on the truth. See for example this comment.

BUT! On this site we sort of have TWO goals when we argue, truth-seeking and meta-truth-seeking. Yes, we are trying to get closer to the truth on particular topics. But we're also trying to make ourselves better at arguing and reasoning in general. We are trying to step back and notice what we're doing, and correct flaws when they are exposed to our scrutiny.

If you look back over this debate, you will see me at several points deliberately stepping back and trying to be extremely clear about what I think is transpiring in the debate itself. I think that's worth doing, on lesswrong.

To defend the particular sentence you quote: I know that when I was younger, it was entirely possible for me to "escape" from a debate in a face-saving way without realizing I had actually been wrong. I'm sure this still happens from time to time...and I want to know if it's happening! I hope that LWers will point it out. On LW I think we ought to prioritize killing biases over saving faces.

Replies from: prase, TheOtherDave
comment by prase · 2011-12-11T09:31:17.577Z · LW(p) · GW(p)

I know that when I was younger, it was entirely possible for me to "escape" from a debate in a face-saving way without realizing I had actually been wrong. I'm sure this still happens from time to time...and I want to know if it's happening! I hope that LWers will point it out.

The key question is: would you believe it if it were your opponent in a heated debate who told you?

Replies from: xv15
comment by xv15 · 2011-12-11T18:08:17.392Z · LW(p) · GW(p)

I'd like to say yes, but I don't really know. Am I way off-base here?

Probably the most realistic answer is that I would sometimes believe it, and sometimes not. If not often enough, it's not worth it. It's too bad there aren't more people weighing in on these comments because I'd like to know how the community thinks my priorities should be set. In any case you've been around for longer so you probably know better than I.

Replies from: prase
comment by prase · 2011-12-11T18:53:53.994Z · LW(p) · GW(p)

I think we are speaking about this scenario:

  • Alice says: "X is true."
  • Bob: "No, X is false, because of Z."
  • Alice: "But Z is irrelevant with respect to X', which is what I actually mean."

Now, Bob agrees with X'. What will Bob say?

  1. "Fine, we agree after all."
  2. "Yes, but remember that X is problematic and not entirely equivalent to X'."
  3. "You should openly admit that you were wrong with X."

If I were in place of Alice, (1) would cause me to abandon X and believe X' instead. For some time I would deny that they aren't equivalent, or think that my saying X was only a poor formulation on my part and that I have always believed X'. Later, when I cease identifying with my past self too much, I would admit (at least to myself) that I have changed my opinion. (2) would have similar effects, with more resentment directed at Bob. In case of (3) I would perhaps try to continue debating to win the lost points back by pointing out weak points of Bob's opinions or debating style, and after calming down I would believe that Bob is a jerk and search hard to find reasons why Z is a bad argument. Eventually I would (hopefully) move to X' too (I don't like to believe things which are easily attacked), but it would take longer. I would certainly not admit my error on the spot.

(The above is based on memories of my reactions in several past debates, especially before I read about cognitive biases and such.)

Now, to tell how generalisable our personal anecdotes are, we should organise an experiment. Do you have any idea how to do it easily?

Replies from: xv15
comment by xv15 · 2011-12-11T20:00:22.969Z · LW(p) · GW(p)

Later, when I cease identifying with my past self too much, I would admit (at least to myself) that I have changed my opinion.

I think the default is that people change specific opinions more in response to the tactful debate style you're identifying, but are less likely to ever notice that they have in fact changed their opinion. I think explicitly noticing one's wrongness on specific issues can be really beneficial in making a person less convinced of their rightness more globally, and therefore more willing to change their mind in general. My question is how we ought to balance these twin goals.

It would be much easier to get at the first effect by experiment than the second, since the latter is a much more long-term investment in noticing one's biases more generally. And if we could get at both, we would still have to decide how much we care about one versus the other, on LW.

Personally I am becoming inclined to give up the second goal.

Replies from: prase
comment by prase · 2011-12-11T21:44:36.505Z · LW(p) · GW(p)

Since here on LW changing one's opinion is considered a supreme virtue, I would even suspect that the long-term users are confabulating that they have changed their opinions when actually they haven't. Anyway, a technique that might be useful is keeping detailed diaries of what one thinks and reviewing them after a few years (or, for that matter, looking at what one has written on the internet a few years ago). The downside is, of course, that writing beliefs down may make their holders even more entrenched.

Replies from: gwern
comment by gwern · 2011-12-11T22:53:24.915Z · LW(p) · GW(p)

The downside is, of course, that writing beliefs down may make their holders even more entrenched.

Entirely plausible - cognitive dissonance, public commitment, backfire effect, etc. Do you think this possibility negates the value, or are there effective counter-measures?

Replies from: prase
comment by prase · 2011-12-11T22:59:36.577Z · LW(p) · GW(p)

I don't think I have an idea how strong all relevant effects and measures are.

comment by TheOtherDave · 2011-12-10T23:49:23.185Z · LW(p) · GW(p)

There's a big difference between:

  • "it's best if we notice and acknowledge when we're wrong, and therefore I will do my best to notice and acknowledge when I'm wrong"
  • "it's best if we notice and acknowledge when we're wrong, and therefore I will upvote, praise, and otherwise reinforce such acknowledgements when I notice them"
    and
  • "it's best if we notice and acknowledge when we're wrong, and therefore I will downvote, criticize, and otherwise punish failure to do so."
comment by FeepingCreature · 2011-12-10T17:30:09.197Z · LW(p) · GW(p)

True in the immediate sense, but I disagree in the global sense that we should encourage face-saving on LW, since doing so will IMO penalize truth-seeking in general. Scoring points for winning the debate is a valid and important mechanism for reinforcing behaviors that lead to debate-winning, and should be allowed in situations where debate-winning correlates to truth-establishment in general, not just for the arguing parties.

Replies from: prase
comment by prase · 2011-12-10T20:17:56.767Z · LW(p) · GW(p)

This is also true in the immediate sense, but it somehow implies that the debate-winning behaviours are a net positive with respect to truth-seeking at least in some possible (non-negligibly frequent) circumstances. I find the claim dubious. Can you specify in what circumstances the debate-winning argumentation style is superior to leaving a line of retreat?

Replies from: FeepingCreature
comment by FeepingCreature · 2011-12-11T01:48:57.325Z · LW(p) · GW(p)

Line of retreat is superior for convincing your debate partner, but debate-winning behavior may be superior for convincing uninvolved readers, because it encourages verbal admission of fault which makes it easier to discern the prevailing truth as a reader.

Replies from: komponisto, prase
comment by komponisto · 2011-12-11T09:29:46.043Z · LW(p) · GW(p)

debate-winning behavior may be superior for convincing uninvolved readers, because it encourages verbal admission of fault

That isn't actually the reason. The reason debate-winning behavior is superior for convincing bystanders is that it appeals to their natural desire to side with the status-gaining triumphant party. As such, it is a species of Dark Art.

comment by prase · 2011-12-11T09:14:54.618Z · LW(p) · GW(p)

it encourages verbal admission of fault

This is what I am not sure about. I know that I will be more likely to admit being wrong when I have a chance to do it in a face-saving way (this includes simply saying "you are right" when I am doing it voluntarily and the opponent has debated in a civilised way up to that point) than when my interlocutor tries to force me to do that. I know it, but still can't easily get rid of that bias.

There are several outcomes of a debate where one party is right and the other is wrong:

  1. The wrong side admit their wrongness.
  2. The wrong side don't want to admit their wrongness but realise that they have no good arguments and drop from the debate.
  3. The wrong side don't want to admit their wrongness and still continue debating in hope of defeating the opponent or at least achieving an honourable draw.
  4. The wrong side don't even realise their wrongness.

The exact flavour of debate-winning behaviour I have criticised makes 2 difficult or impossible, consequently increasing probabilities of 1, 3 or 4. 1 is superior to 2 from almost any point of view, but 2 is similarly superior to 3 and 4 and it is far from clear whether the probability of 1 increases more than probabilities of 3 and 4 combined when 2 ceases to be an option, or whether it increases at all.

Replies from: wedrifid, FeepingCreature
comment by wedrifid · 2011-12-11T09:15:41.149Z · LW(p) · GW(p)

There are several outcomes of a debate where one party is right and the other is wrong:

You left off all the cases where the right side admits their wrongness!

Replies from: prase
comment by prase · 2011-12-11T09:24:57.803Z · LW(p) · GW(p)

Or where both sides admit their wrongness and switch their opinions, or where a third side intervenes and bans them both for trolling. Next time I'll try to compose a more exhaustive list.

Replies from: army1987
comment by A1987dM (army1987) · 2011-12-11T14:27:21.402Z · LW(p) · GW(p)

Don't forget the case where the two parties are talking at cross purposes (e.g. Alice means that a tree falling in a forest with no-one around generates no auditory sensations and Bob means that it does generate acoustic waves) but neither of them realizes that; it doesn't even occur to each that the other might be meaning something else by sound. (I'm under the impression that this is relatively rare on LW, but it does constitute a sizeable fraction of all arguments I hear elsewhere, both online and in person.)

comment by FeepingCreature · 2011-12-11T13:09:56.293Z · LW(p) · GW(p)

Well reasoned.

comment by Desrtopa · 2011-12-07T15:47:06.527Z · LW(p) · GW(p)

Yes, you are misunderstanding my position. I don't think that it's optimal for most individuals to inform themselves about global warming to a "socially optimal" level where everyone takes the issue sufficiently seriously to take grassroots action to resolve it. Human decisionmaking is only isomorphic to TDT in a limited domain, and you can only expect so much association between your decisions and others'; if you go that far, you're putting in too much buck for not enough bang, unless you're getting utility from the information in other ways. But at the point where you don't have even basic knowledge of global warming, anticipating a negative marginal utility from informing yourself corresponds to a general policy of ignorance that will serve one poorly with respect to a large class of problems.

If there were no correlation between one person's decisions and another's, it would probably not be worth anyone's time to learn about any sort of societal problems at all, but then, we wouldn't have gotten to the point of being able to have societal problems in the first place.

Replies from: xv15
comment by xv15 · 2011-12-07T23:58:11.988Z · LW(p) · GW(p)

Unfortunately that response did not convince me that I'm misunderstanding your position.

If people are not using a TDT decision rule, then your original explicit use of TDT reasoning was irrelevant and I don't know why you would have invoked it at all unless you thought it was actually relevant. And you continue to imply at least a weaker form of that reasoning.

No one is disputing that there is correlation between people's decisions. The problem is that correlation does not imply that TDT reasoning works! A little bit of correlation does not imply that TDT works a little bit. Unless people are similar to you AND using TDT, you don't get to magically drag them along with you by choosing to cooperate.

This is a standard textbook tragedy of the commons problem, plain and simple. From where I'm standing I don't see the relevance of anything else. If you want to continue disagreeing, can you directly tell me whether you think TDT is still relevant and why?
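
To put a number on it (payoffs entirely made up, purely illustrative): if other people's choices don't actually move with mine, my cooperating changes only my own cost, so defecting dominates no matter how "similar" the population looks on other metrics.

    def my_payoff(i_cooperate, n_others_cooperating,
                  cost=10.0, benefit_per_cooperator=0.001):
        # Everyone shares a tiny benefit per cooperator; only I pay my cost.
        total = benefit_per_cooperator * (n_others_cooperating + i_cooperate)
        return total - cost * i_cooperate

    # Whatever the others do, cooperating costs me the same fixed amount:
    for n in (0, 500_000, 1_000_000):
        print(n, my_payoff(True, n) - my_payoff(False, n))  # -9.999 every time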

Replies from: Desrtopa
comment by Desrtopa · 2011-12-08T01:44:47.492Z · LW(p) · GW(p)

People don't use a generalized form of TDT, but human decisionmaking is isomorphic to TDT in some domains. Other people don't have to consciously be using TDT to sometimes make decisions based on a judgment of how likely it is that other people will behave similarly.

Tragedies of commons are not universally unresolvable. It's to everyone's advantage for everyone to pool their resources for some projects for the public good, but it's also advantageous for each individual to opt out of contributing their resources. But under the institution of governments, we have sufficient incentives to prevent most people from opting out. Simply saying "It's a tragedy of the commons problem" doesn't mean there's no chance of resolving it and therefore no use in knowing about it.

Replies from: xv15
comment by xv15 · 2011-12-10T22:48:11.800Z · LW(p) · GW(p)

human decisionmaking is isomorphic to TDT in some domains

Maybe it would help if you gave me an example of what you have in mind here.

Replies from: Desrtopa
comment by Desrtopa · 2011-12-12T15:30:27.188Z · LW(p) · GW(p)

Well, take Stop Voting For Nincompoops, for example. If you were to just spontaneously decide "I'm going to vote for the candidate I really think best represents my principles in hope that that has a positive effect on the electoral process," you have no business being surprised if barely anyone thinks the same thing and the gesture amounts to nothing. But if you read an essay encouraging you to do so, posted in a place where many people apply reasoning processes similar to your own, the choice you make is a lot more likely to reflect the choice a lot of other people are making.

Replies from: xv15, wedrifid, TheOtherDave
comment by xv15 · 2011-12-13T03:22:52.444Z · LW(p) · GW(p)

It seems like this is an example of, at best, a domain on which decisionmaking could use TDT. No one is denying that people could use TDT, though. I was hoping for you to demonstrate an example where people actually seem to be behaving in accordance with TDT. (It is not enough to just argue that people reason fairly similarly in certain domains).

"Isomorphic" is a strong word. Let me know if you have a better example.

Anyway let me go back to this from your previous comment:

Tragedies of commons are not universally unresolvable....Simply saying "It's a tragedy of the commons problem" doesn't mean there's no chance of resolving it and therefore no use in knowing about it.

No one is claiming tragedies of the commons are always unresolvable. We are claiming that unresolved tragedies of the commons are tragedies of the commons! You seem to be suggesting that knowledge is a special thing which enables us to possibly resolve tragedies of the commons and therefore we should seek it out. But in the context of global warming and the current discussion, knowledge-collection is the tragedy of the commons. To the extent that people are underincentivized to seek out knowledge, that is the commons problem we're talking about.

If you turn around and say, "well they should be seeking out more knowledge because it could potentially resolve the tragedy"...well of course more knowledge could resolve the tragedy of not having enough knowledge, but you have conjured up your "should" from nowhere! The tragedy we're discussing is what exists after rational individuals decide to gather exactly as much information as a rational agent "should," where should is defined with respect to that agent's preferences and the incentives he faces.

Final question: If TDT reasoning did magically get us to the level of informedness on global warming that you think we rationally should be attaining, and if we are not attaining that level of informedness, does that not imply that we aren't using TDT reasoning? And if other people aren't using TDT reasoning, does that not imply that it is NOT a good idea for me to start using it? You seem to think that TDT has something to do with how rational agents "should" behave here, but I just don't see how TDT is relevant.

Replies from: wedrifid, Desrtopa
comment by wedrifid · 2011-12-13T05:00:57.306Z · LW(p) · GW(p)

And if other people aren't using TDT reasoning, does that not imply that it is NOT a good idea for me to start using it?

NO! It implies that you go ahead and use TDT reasoning - which tells you to defect in this case! TDT is not about cooperation!

Replies from: xv15
comment by xv15 · 2011-12-13T17:38:10.318Z · LW(p) · GW(p)

wedrifid, RIGHT. Sorry, got a little sloppy.

By "TDT reasoning" -- I know, I know -- I have been meaning Desrtopa's use of "TDT reasoning," which seems to be like TDT + [assumption that everyone else is using TDT].

I shouldn't say that TDT is irrelevant, but really that it is a needless generalization in this context. I meant that Desrtopa's invocation of TDT was irrelevant, in that it did nothing to fix the commons problem that we were initially discussing without mention of TDT.

comment by Desrtopa · 2011-12-24T17:22:12.550Z · LW(p) · GW(p)

You seem to be suggesting that knowledge is a special thing which enables us to possibly resolve tragedies of the commons and therefore we should seek it out. But in the context of global warming and the current discussion, knowledge-collection is the tragedy of the commons. To the extent that people are underincentivized to seek out knowledge, that is the commons problem we're talking about.

Lack of knowledge of global warming isn't the tragedy of the commons I'm talking about; even if everyone were informed about global warming, it doesn't necessarily mean we'd resolve it. Humans can suffer from global climate change despite the entire population being informed about it, and we might find a way to resolve it that works despite most of the population being ignorant.

The question a person starting from a position of ignorance about climate change has to answer is "should I expect that learning about this issue has benefits to me in excess of the effort I'll have to put in to learn about it?" An answer of "no" corresponds to a low general expectation of information value considering the high availability of the information.

The reason I brought up TDT was as an example of reasoning that relies on a correlation between one agent's choices and another's. I didn't claim at any point that people are actually using TDT. However, if decision theory that assumes correlation between people's decisions did not outcompete decision theory which does not assume any correlation, we wouldn't have evolved cooperative tendencies in the first place.
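
As a toy illustration of that last point (a standard evolutionary-game-theory sketch; all numbers invented): if with probability r you interact with someone running your own strategy, cooperating can pay even though it is dominated in the uncorrelated case, roughly whenever r*b > c.

    # Cooperators pay cost c to give benefit b. With assortment r, a
    # cooperator meets another cooperator with probability r + (1-r)*p,
    # where p is the population share of cooperators.
    def payoffs(p, r, b=3.0, c=1.0):
        coop = b * (r + (1 - r) * p) - c
        defect = b * (1 - r) * p
        return coop, defect

    for r in (0.0, 0.5):
        coop, defect = payoffs(p=0.5, r=r)
        print(f"r={r}: cooperator {coop:.2f} vs defector {defect:.2f}")
    # r=0.0: cooperation loses (0.50 vs 1.50); r=0.5: it wins (1.25 vs 0.75).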

comment by wedrifid · 2011-12-12T16:03:10.914Z · LW(p) · GW(p)

If you were to just spontaneously decide "I'm going to vote for the candidate I really think best represents my principles in hope that that has a positive effect on the electoral process," you have no business being surprised if barely anyone thinks the same thing and the gesture amounts to nothing.

Determining that the gesture amounts to less than the gesture of going into the polling booth and voting for one of the two party lizards seems rather difficult.

comment by TheOtherDave · 2011-12-12T15:56:57.686Z · LW(p) · GW(p)

Of course, it's in practice nearly impossible for me to determine through introspection whether what feels like a "spontaneous" decision on my part is in fact being inspired by some set of external stimuli, and if so which stimuli. And without that data, it's hard to predict the likelihood of other people being similarly inspired.

So I have no business being too surprised if lots of people do think the same thing, either, even if I can't point to an inspirational essay in a community of similar reasoners as a mechanism.

In other words, sometimes collective shifts in attitude take hold in ways that feel entirely spontaneous (and sometimes inexplicably so) to the participants.

comment by [deleted] · 2011-12-08T08:29:37.610Z · LW(p) · GW(p)

Those are all good reasons but as far as I can tell Desrtopa would probably give the right answer if questioned about any of those. He seems to be aware of how people actually behave (not remotely TDTish) but this gets overridden by a flashing neon light saying "Rah Cooperation!".

He may be mistaken about how high-trust the society he lives in is. This is something it is actually surprisingly easy to be wrong about, since our intuitions aren't built for a society of hundreds of millions living across an entire continent; our minds don't understand that our friends, family and co-workers are not a representative sample of the actual "tribe" we are living in.

Replies from: wedrifid
comment by wedrifid · 2011-12-08T08:38:51.516Z · LW(p) · GW(p)

He may be mistaken about how high trust the society he lives in is.

Even if that is the case, he is still mistaken about game theory. While the 'high trust society' you describe would encourage cooperation to the extent that hypocrisy does not serve as a substitute, the justifications Desrtopa has given are in terms of game theory and TDT. They rely on acting as if other agents are TDT agents when they are not - an entirely different issue from dealing with punishment norms enforced by 'high trust' agents.

Replies from: None
comment by [deleted] · 2011-12-08T09:03:51.015Z · LW(p) · GW(p)

Sure.

We are in agreement on that. But this might better explain why, on second thought, I think it doesn't matter, at least not in this sense, for the issue of whether educating people about global warming matters.

I think we may have been arguing against a less-than-charitable interpretation of his argument, which I don't think is that topical a discussion (even if it serves to clear up a few misconceptions). Whether the less-than-charitable argument is the interpretation he now endorses, or even the one he actually intended, doesn't seem that relevant to me.

"rah cooperation" I think in practice translates into "I think I live in a high trust enough society that its useful to use this signal to get people to ameliorate this tragedy of the commons situation I'm concerned about."

comment by Desrtopa · 2011-12-06T15:30:35.076Z · LW(p) · GW(p)

In which case you want an enforced policy conforming to the norm. A rational shepherd in a communal grazing field may not believe that if he doesn't let his flock overgraze, other shepherds won't either, but he'll want a policy punishing or otherwise preventing overgrazers.

Replies from: wedrifid
comment by wedrifid · 2011-12-06T15:47:06.130Z · LW(p) · GW(p)

In which case you want an enforced policy conforming to the norm.

Yes, and this means that individuals with the ability to influence or enforce policy about global warming can potentially benefit somewhat from knowing about global warming. For the rest of the people (nearly everyone) knowledge about global warming is of no practical benefit.

Replies from: Desrtopa
comment by Desrtopa · 2011-12-06T15:52:01.600Z · LW(p) · GW(p)

If the public doesn't believe in or care about climate change, then public officials who care about climate change won't get into power in the first place.

In a society that doesn't believe the barbarians pose a credible threat, they won't support a leader who wants to make everyone cooperate in the war.

Replies from: wedrifid
comment by wedrifid · 2011-12-06T16:04:24.498Z · LW(p) · GW(p)

Again you have jumped back to what benefits society - nobody questions the idea that it is bad for the society if the population doesn't know about Commons_Threat_X. Nobody questions the idea that it is bad for an individual if everyone else doesn't know about Commons_Threat_X. What you seem unwilling to concede is that there is negligible benefit to that same individual in him personally learning about Commons_Threat_X (in a population that isn't remotely made up of TDT agents).

Replies from: Desrtopa
comment by Desrtopa · 2011-12-06T16:16:42.170Z · LW(p) · GW(p)

In a population of sufficiently different agents, yes. If you're in a population where you can rationally say "I don't know whether this is a credible threat or not, but no matter how credible a threat it is, my learning about it and taking it seriously will not be associated with other people learning about it or taking it seriously," then there's no benefit in learning about Commons Threat X. If you assume no association of your actions with other people's, there's no point in learning about any commons threat ever. But people's actions are sufficiently associated to resolve some tragedies of the commons, so in general when people are facing commons problems, it will tend to be to their benefit to make themselves aware of them when the information is made readily available.

Replies from: wedrifid, xv15
comment by wedrifid · 2011-12-06T16:29:11.268Z · LW(p) · GW(p)

Any agent encountering a scenario that can be legitimately described as a commons problem with a large population of humans will either defect or be irrational. It really is that simple. Cooperate-Bots are losers.

(Note that agents with actually altruistic preferences are a whole different question.)

Replies from: None
comment by [deleted] · 2011-12-08T09:41:44.916Z · LW(p) · GW(p)

It really is that simple. Cooperate-Bots are losers.

Yes. Unless being cooperative makes more Cooperative bots (and not more defecting bots) than defecting makes rational bots (and not Cooperative bots), or used to do so and the vast majority of the population are still cooperative bots.

Evolution has in some specific circumstances made humans cooperate and be collectively better off in situations where rational agents with human values wouldn't have. That's the beauty of us being adaptation-executers, not fitness-maximizers.

A rational agent among humans could easily spend his time educating them about global warming, if the returns are high enough (I'm not talking about book revenues or payment for appearances or some irrational philanthropist paying him to do so; I'm actually talking about the returns of ameliorating the negative effects of global warming) and the costs low enough. That's the interesting version of the debate about whether it is more "important" that people know about global warming than that tomatoes have genes.

A rational agent among irrational agents can actually be better off helping them cooperate and coordinate to avoid a specific situation in certain conditions rather than just plain old defecting.

Replies from: wedrifid
comment by wedrifid · 2011-12-08T10:25:24.194Z · LW(p) · GW(p)

I would add that adjective to agree with this sentence. Humans are agents, but they aren't rational.

If you reread the sentence you may note that I was careful to make that adjective redundant - sufficiently redundant as to border on absurd. "A rational agent will X or be irrational" is just silly. "A rational agent will X" would have been true but misses the point when talking about humans. That's why I chose to write "An agent will X or be irrational".

Yes. Unless being cooperative makes more Cooperative bots (and not more defecting bots) than defecting makes rational bots (and not Cooperative bots)

No. Cooperating is different to being a Cooperate-Bot. A rational agent will cooperate when it will create a better outcome via, for example, making other people cooperate. A Cooperate-Bot will cooperate even when it creates bad outcomes and completely independently of the responses of other agents or their environment. The only situations where it can be expected for it to be better to be a Cooperate-Bot than a rational agent that chooses to cooperate are those contrived scenarios where an entity or the environment is specifically constructed to read the mind and motives of the agent and punish it for cooperating for rational reasons.

I don't understand why you have gone through my various comments here to argue with trivially true statements. I was under the impression that I mostly joined the conversation agreeing with you.

A rational agent among irrational agents can actually be better off helping them cooperate and coordinate to avoid a specific situation in certain conditions rather than just plain old defecting.

Yes. When an agent can influence the behavior of other agents and cooperating in order to do so is of sufficient benefit it will cooperate in order to influence others. If this wasn't the case we wouldn't bother considering most of the game theoretic scenarios that we construct.

Replies from: None, None
comment by [deleted] · 2011-12-08T10:33:52.104Z · LW(p) · GW(p)

A Cooperate-Bot will cooperate even when it creates bad outcomes and completely independently of the responses of other agents or their environment.

That doesn't mean they can't win, as in being the only bots left standing. It is trivially easy to construct such situations. Obviously this won't help the individuals.

I don't understand why you have gone through my various comments here to argue with trivially true statements. I was under the impression that I mostly joined the conversation agreeing with you.

I wasn't arguing with the statements. I think I even generally affirmed your comments at the start of mine to avoid confusion. I was just emphasising that while this is settled, the best version of the argument about the utility of trying to educate other people on global warming probably isn't.

Also two comments don't really seem like "going through several of your comments" in my eyes!

comment by [deleted] · 2011-12-08T10:31:00.418Z · LW(p) · GW(p)

If you reread the sentence you may note that I was careful to make that adjective redundant - sufficiently redundant as to border on absurd. "A rational agent will X or be irrational" is just silly. "A rational agent will X" would have been true but misses the point when talking about humans. That's why I chose to write "An agent will X or be irrational".

Indeed; I obviously didn't register the sentence properly. Edited.

comment by xv15 · 2011-12-07T12:46:12.781Z · LW(p) · GW(p)

Desrtopa, can we be careful about what it means to be "different" from other agents? Without being careful, we might reach for any old intuitive metric. But it's not enough to be mentally similar to other agents across just any metric. For your reasoning to work, they have to be executing the same decision rule. That's the metric that matters here.

Suppose we start out identical but NOT reasoning as per TDT -- we defect in the prisoner's dilemma, say -- but then you read some LW and modify your decision rule so that when deciding what to do, you imagine that you're deciding for both of us, since we're so similar after all. Well, that's not going to work too well, is it? My behavior isn't going to change any, since, after all, you can't actually influence it by your own reading about TDT.

So don't be so quick to place your faith in TDT reasoning. Everyone can look very similar in every respect EXCEPT the one that matters, namely whether they are using TDT reasoning.

With this in mind, if you reread the Bayesians versus barbarians post you linked to, you should be able to see that it reads more like an existence proof of a cooperate-cooperate equilibrium. It does not say that we will necessarily find ourselves in such an equilibrium just by virtue of being sufficiently similar.

Replies from: Desrtopa
comment by Desrtopa · 2011-12-07T15:22:53.704Z · LW(p) · GW(p)

Obviously the relevant difference is in their decision metrics. But human decision algorithms, sloppy and inconsistent though they are, are in some significant cases isomorphic to TDT.

If we were both defecting in the Prisoner's dilemma, and then I read some of the sequences and thought that we were both similar decisionmakers and stopped defecting, it would be transparently stupid if you hadn't also been exposed to the same information that led me to make the decision in the first place. If I knew you had also read it, I would want to calculate the expected value of defecting or cooperating given the relative utilities of the possible outcomes and the likelihood that your decision would correspond to my own.

I think you're assuming much sloppier reasoning on my part than is actually the case (of course I probably have a bias in favor of thinking I'm not engaging in sloppy reasoning, but your comment isn't addressing my actual position). Do I think that if I engage in conservation efforts, this will be associated with a significant increase in likelihood that we won't experience catastrophic climate change? Absolutely not. Those conservation efforts I engage in are almost entirely for the purpose of signalling credibility to other environmentalists (I say "other" but it's difficult to find anyone who identifies as an environmentalist who shares my outlook), and I am completely aware of this. However, the utility cost of informing oneself about a potential tragedy of the commons where the information is readily available and heavily promoted, at least to a basic level, is extremely low, and humans have a record of resolving some types of tragedies of the commons (although certainly not all), and the more people who are aware of and care about the issue, the greater the chance of the population resolving it (they practically never will of their own volition, but they will be more likely to support leaders who take it seriously, not defect from policies that address it, and so on). The expected utility has to be very low not to overcome the minimal cost of informing themselves. And of course, you have to include the signalling value of being informed (when you're informed you can still signal that you don't think the issue is a good time investment, or significant at all, but you can also signal your familiarity).

I think that those environmentalists who expect that by public campaigning and grassroots action we can reduce climate change to manageable levels are being unreasonably naive; they're trying to get the best results they can with their pocket change when what they need is three times their life savings. Sufficient levels of cooperation could resolve the matter simply enough, but people simply don't work like that. To completely overextend the warfare metaphor, I think that if we've got a shot at not facing catastrophe, it's not going to look like everyone pulling together and giving their best efforts to fight the barbarians; it's going to look more like someone coming forward and saying "If you put me in charge, I'm going to collect some resources from all of you, and we're going to use them to make a nuclear bomb according to this plan these guys worked out." Whether the society succeeds or not will hinge on the proliferation of information and how seriously the public takes the issue, whether they mostly say things like "sounds like a good plan, I'm in" and "if it's really our best chance," rather than "I care more about the issues on Leader B's platform" and "I have no idea what that even means."

comment by [deleted] · 2011-12-08T09:02:35.395Z · LW(p) · GW(p)

I would say that it's considerably more important for everyday life for most people than knowing whether tomatoes have genes.

What I think you should be arguing here (and what on one level I think you were implicitly arguing) is that in a sufficiently high-trust society one should spend more resources on educating people about global warming than about tomatoes having genes, if one wants to help them.

It is for their own good, but not their personal good. Like a vaccine shot that has a high rate of nasty side effects but helps keep an infectious disease at bay. If you care about them, it can be rational to take the shot yourself if that's an effective signal to them that you aren't trying to fool them. By default they will be modelling you as one of them and interpreting your actions accordingly. Likewise, if you happen to be good enough at deceit that they will fail to detect it, you can still use that signal to help them, even if you take a fake shot.

Humans are often predictably irrational. The arational processes that maintain the high-trust equilibrium can be used to let you take withdrawals of cooperative behaviour from the bank when the rational incentives just aren't there. What game theory is good for in this case is realizing how much you are withdrawing, since a rational, game-theory-savvy agent is a pretty good benchmark for some cost analysis. You naturally need to think about the cost to quickly gauge whether the level of trust in a society is high enough, and furthermore whether, if you burden it in this way, the equilibrium is still stable in the mid-term.

If it's not, teach them about tomatoes.

comment by [deleted] · 2011-12-22T01:31:30.575Z · LW(p) · GW(p)

It doesn't astonish me. It's not a terribly important issue for everyday life; it's basically a political issue.

http://en.wikipedia.org/wiki/Effects_of_climate_change_on_humans

Also, it's already having a fairly substantial effect on polar communities in the US, Canada and Russia, making it difficult to obtain enough food. Many of them are impoverished in the context of the national economy and still largely whaling-dependent for enough food to survive. Any disruption is a direct threat to food availability.

Replies from: thomblake
comment by thomblake · 2011-12-22T15:05:37.618Z · LW(p) · GW(p)

I'm not sure how that's a response to what I said. Electing a president who opts to start a nuclear war would obviously be a political issue, and might have even worse effects on humans.

Replies from: None
comment by [deleted] · 2011-12-22T16:57:26.220Z · LW(p) · GW(p)

You said it's not an important issue for everyday life.

Things that significantly impact health (how often are you exposed to pathogens and how severe are they?), weather (makes a big difference even for an urban-living person with access to climate-controlled dwelling like me in the Midwest), the availability of food and water (which you need for not dying), and the stability of where you live (loss of which compromises all the others and requires you to try to find somewhere else and see what happens there) seem like the very definition of important to everyday life.

Replies from: thomblake
comment by thomblake · 2011-12-22T18:51:15.258Z · LW(p) · GW(p)

What I meant was that knowing stuff about the issue isn't important for everyday life. While the availability of food and water is good to know about, what environmental conditions caused it is less important unless I'm a farmer or policy-maker.

Similarly, a nuclear war would impact health, weather, and the availability of food and water, but I am much better off worrying about whether my car needs an oil change than worrying about whether my government is going to start a nuclear war.

Replies from: None
comment by [deleted] · 2011-12-22T19:47:14.390Z · LW(p) · GW(p)

I can sort of agree, insofar as I can't myself direct the government to never under any circumstances actually do that, and I can't sequester an Industrial Revolution's worth of CO2 just by being aware of the problem, but I feel like it misses something. Not everyone is going to be equally unable to meaningfully contribute to solving the problem -- if a high baseline level of meaningful awareness of an issue is the norm, it seems like society is more likely to get the benefits of "herd immunity" to that failure mode. It's not guaranteed, and I wouldn't call it a sufficient condition by any means for solving the problem, but it increases the odds that any given person whose potential influence over the activities of society is great will be prepared to respond in a way that's actually productive.

I suppose if you think we'll get FAI soon, this is irrelevant -- it's a whole lot less efficient and less theoretically stable a state than some superintelligence just solving the problem in a way that makes it a nonissue and doesn't rely on corruptible, fickle, perversely-incentivized human social patterns. I'm not so sanguine about that possibility m'self, although I'd love to be wrong.

EDIT: I guess what I'm saying is, why would you NOT want information about something that might be averted or mitigated, but whose likely consequences are a severe change to your current quality of life?

Replies from: thomblake
comment by thomblake · 2011-12-27T15:19:40.452Z · LW(p) · GW(p)

EDIT: I guess what I'm saying is, why would you NOT want information about something that might be averted or mitigated, but whose likely consequences are a severe change to your current quality of life?

I want all information about all things. But I don't have time for that. And given the option of learning to spot global warming or learning to spot an unsafe tire on my car, I'll have to pick #2. WLOG.

Replies from: TheOtherDave, None
comment by TheOtherDave · 2011-12-27T17:27:30.669Z · LW(p) · GW(p)

Even if it turns out that you can leverage the ability to spot global warming into enough money to pay your next-door neighbor to look at your car tires every morning and let you know if they're unsafe?

comment by [deleted] · 2011-12-27T22:37:36.404Z · LW(p) · GW(p)

This implies the Maldivians should not concern themselves with the fact that their entire country (most of it no more than a meter or two above sea level) is projected to lose most of the land sustaining its tourist industry (the main engine of its economy), displacing most of its population and leading to greater social volatility in a country enjoying a relative respite after many years of internecine violence.

I want all information about all things. But I don't have time for that.

If you think this applies to the question of whether or not it's valuable for anyone to know about global warming in concrete terms and the plausible implications for their own lives, then I can only say that I hope for your sake you live somewhere nicely insulated from such possible changes. Me, I'd rather treat it the way I treat earthquakes where I'm from or tornadoes where I live now: things worth knowing about and being at least somewhat prepared for if it's at all possible to do so.

comment by Oligopsony · 2011-12-05T15:33:15.081Z · LW(p) · GW(p)

What should be astonishing about zero familiarity with the data, beyond the fact that there's a scientific consensus?

Replies from: Desrtopa
comment by Desrtopa · 2011-12-05T17:42:19.459Z · LW(p) · GW(p)

I would be unsurprised by zero familiarity in a random sampling of the population, but I would have expected a greater degree of familiarity here as a matter of general scientific literacy.

comment by ArisKatsaris · 2012-12-15T18:09:12.423Z · LW(p) · GW(p)

Stop being astonished so easily. How much familiarity with climate science do you expect the average non-climate scientist to actually have?

I suspect that people displaying >95% certainty about AGW aren't much more "familiar with the data" than the people who display less certainty -- that their most significant difference is that they put more trust in what is a political position in the USA.

But I doubt you question the "familiarity with the data" of the people who are very very certain of your preferred position.

Replies from: RobbBB, Desrtopa, None
comment by Rob Bensinger (RobbBB) · 2012-12-15T18:44:43.208Z · LW(p) · GW(p)

The average LessWronger is almost certainly much more competent to evaluate that global temperatures have been rising significantly, and that at least one human behavior has had a nontrivial effect on this change in temperature, than to evaluate that all life on earth shares a common ancestral gene pool, or that some 13.75 billion years ago the universe began rapidly inflating. Yet I suspect that the modern evolutionary synthesis (including its common-descent thesis), and the Big Bang Theory, are believed more strongly by LessWrongers than is anthropogenic climate change.

If so, then it can't purely be a matter of LessWrongers' lack of expertise in climate science; there must be some sociological factors undermining LessWrongers' confidence in some scientific claims they have to largely take scientists' word for, while not undermining LessWrongers' confidence in all scientific claims they have to largely take scientists' word for.

Plausibly, the ongoing large-scale scientific misinformation campaign by established economic and political interests is having a big impact. Merely hearing about disagreement, even if you have an excellent human-affairs model predicting such disagreement in the absence of any legitimate scientific controversy, will for psychological reasons inevitably shake a generic onlooker's confidence. Listen to convinced and articulate flat-earth skeptics long enough, and some measure of psychological doubt is inevitable, even if you are savvy enough to avoid letting this doubt creep into your more careful and reflective probability calculations.

Replies from: army1987
comment by A1987dM (army1987) · 2012-12-16T11:51:02.359Z · LW(p) · GW(p)

The average LessWronger is almost certainly much more competent to evaluate [anthropogenic global warning] than [universal common descent or Big Bang cosmology]

I agree that they are likely at least somewhat more competent about the former than the latter, but why do you think they are almost certainly much more competent?

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-12-16T20:38:09.833Z · LW(p) · GW(p)

Evaluating common descent requires evaluating the morphology, genome, and reproductive behavior of every extremely distinctive group of species, or of a great many. You don't need to look at each individual species, but you at least need to rule out convergent evolution and (late) lateral gene transfer as adequate explanations of homology. (And, OK, aliens.) How many LessWrongers have put in that legwork?

Evaluating the age of the universe requires at least a healthy understanding of contemporary physics in general, and of cosmology. The difficulty isn't just understanding why people think the universe is that old, but having a general enough understanding to independently conclude that alternative models are not correct.

That's a very basic sketch of why I'd be surprised if LessWrongers could better justify those two claims than the mere claim that global temperatures have been rising (which has been in the news a fair amount, and can be confirmed in a few seconds on the Internet) and a decent assessment of the plausibility of carbon emissions as a physical mechanism. Some scientific knowledge will be required, but not of the holistic 'almost all of biology' or 'almost all of physics' sort indicated above, I believe.

comment by Desrtopa · 2012-12-16T00:25:29.756Z · LW(p) · GW(p)

I think you're seriously failing to apply the Principle of Charity here. Do you think I assume that anyone who claims to "believe in the theory of evolution" understands it well?

RobbBB has already summed up why the levels of certainty shown in this survey would be anomalous when looked at purely from an "awareness of information" perspective, which is why I think that it would be pretty astonishing if lack of information were actually responsible.

AGW is a highly politicized issue, but then, so is evolution, and the controversy on evolution isn't reflected among the membership of Less Wrong, because people aligned with the bundle of political beliefs which are opposed to it are barely represented here. I would not have predicted in advance that such a level of controversy on AGW would be reflected among the population of Less Wrong.

comment by [deleted] · 2012-12-15T18:43:16.611Z · LW(p) · GW(p)

Desrtopa said:

while anthropogenic global warming doesn't yet have the same sort of degree of evidence as, say, evolution, I think that an assignment of about 70% probability represents either critical underconfidence or astonishingly low levels of familiarity with the data.

ArisKatsaris said:

I suspect that people displaying >95% certainty about AGW aren't much more "familiar with the data" than the people who display less certainty

The problem with these arguments is that you need to 1. know the data and 2. know how other people would interpret it, because with just 1. you'll end up comparing your probability assignments with others', and might mistakenly conclude that their deviation from your estimate is due to lack of access to the data and/or understanding of it -- unless you're comparing it to what your idea of some consensus is.

Meanwhile, I don't know either, so I'm just making a superficial observation, while not knowing which one of you knows which things here.

comment by [deleted] · 2011-12-06T08:45:13.783Z · LW(p) · GW(p)

Perhaps they also want to signal a sentiment similar to that of Freeman Dyson:

I believe global warming is grossly exaggerated as a problem. It's a real problem, but it's nothing like as serious as people are led to believe. The idea that global warming is the most important problem facing the world is total nonsense and is doing a lot of harm. It distracts people's attention from much more serious problems.

Replies from: buybuydandavis
comment by buybuydandavis · 2012-12-15T17:51:42.140Z · LW(p) · GW(p)

That gets to the issue I had with the question.

"Significant" is just too vague. Everyone who gave an answer was answering a different question, depending on how they interpreted "significant".

The survey question itself indicates a primary problem with the discussion of global warming - a conflation of temperature rise and societal cost of temperature rise. First, ask a meaningful question about temperature increase. Then, ask questions about societal cost given different levels of temperature increase.

comment by [deleted] · 2011-12-22T01:23:02.758Z · LW(p) · GW(p)

It seems to be, possibly related to the Libertarian core cluster from OB. In my experience US Libertarians are especially likely to disbelieve in anthropogenic global warming, or to argue it's not anthropogenic, not potentially harmful, or at least not grounds for serious concern at a public policy level.

comment by [deleted] · 2011-12-05T21:00:44.207Z · LW(p) · GW(p)

I would like to see this question on a future survey:

Are you genetically related to anyone with schizophrenia? (yes / no) How distant is the connection? (nuclear family / cousins, aunts and uncles / further / no connection)

I've repeatedly heard that a significant number of rationalists are related to schizophrenics.

comment by ChrisHallquist · 2011-12-05T20:51:58.484Z · LW(p) · GW(p)

Didn't the IQ section say to only report a score if you've got an official one? The percentage of people not answering that question should have been pretty high, if they followed that instruction. How many people actually answered it?

Also: I've already pointed out that the morality question was flawed, but after thinking about it more, I've realized how badly flawed it was. Simply put, people shouldn't have had to choose between consequentialism and moral anti-realism, because there are a number of prominent living philosophers who combine the two.

JJC Smart is an especially clear example, but there are others. Joshua Greene's PhD thesis was mainly a defense of moral anti-realism, but also had a section titled "Hurrah for Utilitarianism!" Peter Singer is a bit fuzzy on meta-ethics, but has flirted with some kind of anti-realism.

And other moral anti-realists take positions on ethical questions without being consequentialists; see e.g. J.L. Mackie's book Ethics. Really, I have to stop myself from giving examples now, because they can be multiplied endlessly.

So again: normative ethics and meta-ethics are different issues, and should be treated as such on the next survey.

comment by wedrifid · 2011-12-05T03:36:10.153Z · LW(p) · GW(p)

So we can only prove that 519 people post on Less Wrong.

Where by 'prove' we mean 'somebody implied that they did on an anonymous online survey'. ;)

Replies from: Yvain, kilobug, duckduckMOO
comment by Scott Alexander (Yvain) · 2011-12-05T15:16:27.099Z · LW(p) · GW(p)

You mean, as opposed to that kind of proof where we end up with a Bayesian probability of exactly one? :)

comment by kilobug · 2011-12-05T10:53:07.016Z · LW(p) · GW(p)

Wouldn't it be (relatively) easy and useful to have a "stats" page in LW, with info like number of accounts, number of accounts with > 0 karma (total, monthly), number of comments/articles, ... ?

Replies from: XiXiDu, amcknight
comment by XiXiDu · 2011-12-05T11:24:26.122Z · LW(p) · GW(p)

Wouldn't it be (relatively) easy and useful to have a "stats" page in LW, with info like number of accounts, number of accounts with > 0 karma (total, monthly), number of comments/articles, ... ?

Nice idea! I am interested in such statistics.

comment by amcknight · 2011-12-06T03:39:19.842Z · LW(p) · GW(p)

This would allow for a running poll, if we want one.

comment by duckduckMOO · 2011-12-06T13:58:15.233Z · LW(p) · GW(p)

I think this is an underestimate if anything. People who skipped the question might just not want to say, and at least a few people who post didn't take the survey. I don't see how enough non-posters could be motivated to put down a random score to make up for these possibilities.

I'd have preferred "at least 519."

comment by Yoreth · 2011-12-06T11:16:28.940Z · LW(p) · GW(p)

What's the relation between religion and morality? I drew up a table to compare the two. This shows the absolute numbers and the percentages normalized in two directions (by religion, and by morality). I also highlighted the cells corresponding to the greatest percentage across the direction that was not normalized (for example, 22.89% of agnostics said there's no such thing as morality, a higher percentage than any other religious group).

Many pairs were highlighted both ways. In other words, these are pairs such that "Xs are more likely to be Ys" and vice-versa.

  • [BLANK]; [BLANK]
  • Atheist and not spiritual; Consequentialist
  • Agnostic; No such thing
  • Deist/Pantheist/etc.; Virtue ethics
  • Committed theist; Deontology

(I didn't do any statistical analysis, so be careful with the low-population groups.)
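For anyone who wants to reproduce the table, here is a hedged sketch of the two-way normalization in Python with pandas; the file and column names are hypothetical stand-ins for however the survey data is actually labelled:

    import pandas as pd

    df = pd.read_csv("survey.csv")  # hypothetical file and column names
    counts = pd.crosstab(df["Religion"], df["Morality"])

    by_religion = counts.div(counts.sum(axis=1), axis=0)  # rows sum to 1
    by_morality = counts.div(counts.sum(axis=0), axis=1)  # columns sum to 1

    # A cell is highlighted "both ways" when it is the maximum of its
    # column in by_religion and also the maximum of its row in by_morality.
    hi_rows = by_religion.eq(by_religion.max(axis=0), axis=1)
    hi_cols = by_morality.eq(by_morality.max(axis=1), axis=0)
    pairs = (hi_rows & hi_cols).stack()
    print(pairs[pairs].index.tolist())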

comment by Unnamed · 2011-12-04T20:48:25.481Z · LW(p) · GW(p)

It looks like about 6% of respondents gave their answers in decimal probabilities instead of percentages. 108 of the 930 people in the data file didn't have any answers over 1 for any of the probability questions; 52 of those did have some answers (the other 56 left them all blank), which suggests that those 52 people were using decimals (and that's 6% of the 874 who answered at least one of the questions). So to get more accurate estimates of the means for the probability questions, you should either multiply those respondents' answers by 100, exclude those respondents when calculating the means, or multiply the means that you got by 1.06.

=IF(MAX(X2:AH2)<1.00001,1,0) is the Excel formula I used to find those 108 people (entered in row 2, then copied and pasted to the rest of the rows).
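The same check translated to Python with pandas, as a rough sketch (the column slice standing in for the probability questions is hypothetical):

    import pandas as pd

    df = pd.read_csv("survey.csv")   # hypothetical file name
    prob_cols = df.columns[23:34]    # hypothetical slice: the probability questions

    answered = df[prob_cols].notna().any(axis=1)
    below_one = df[prob_cols].max(axis=1) < 1.00001  # mirrors the Excel test
    decimal_responders = answered & below_one        # likely answered in decimals
    print(decimal_responders.sum())

    # One of the suggested corrections: rescale those rows to percentages
    # before computing the means.
    fixed = df[prob_cols].copy()
    fixed.loc[decimal_responders] *= 100
    print(fixed.mean())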

Replies from: Zetetic
comment by Zetetic · 2011-12-06T23:14:50.941Z · LW(p) · GW(p)

Nevermind.

comment by A1987dM (army1987) · 2011-12-04T21:41:23.065Z · LW(p) · GW(p)

There was much derision on the last survey over the average IQ supposedly being 146. Clearly Less Wrong has been dumbed down since then, since the average IQ has fallen all the way down to 140.

...

The average person was 37.6% sure their IQ would be above average - underconfident!

Maybe people were expecting the average IQ to turn out to be about the same as in the previous survey, and... (Well, I kind-of was, at least.)

comment by Peter Wildeford (peter_hurford) · 2011-12-13T06:23:53.701Z · LW(p) · GW(p)

I would be interested in a question that asked whether people were pescatarian / vegetarian / vegan, and another question as to whether this was done for moral reasons.

comment by michaelsullivan · 2011-12-05T20:29:42.505Z · LW(p) · GW(p)

Community veterans were more likely to believe in Many Worlds, less likely to believe in God, and - surprisingly - less likely to believe in cryonics (significant at 5% level; could be a fluke).

It might be a fluke, but like one other respondent who talked about this and got many upvotes, it could be that community veterans were more skeptical of the many many things that have to go right for your scenario to happen, even if we generally believe that cryonics is scientifically feasible and worth working on.

When you say "the average person cryonically frozen today will at some point be awakened", that means not only that the general idea is workable, but that we are currently using an acceptable method of preserving tissues, that a large portion of current arrangements will continue to preserve those bodies/tissues until post-singularity, however long that takes, and that whatever singularity happens will result in people willing to expend resources fulfilling those contracts (so FAI must beat uFAI). Add all that up, and it can easily make for a pretty small probability, even if you do "believe in cryonics" in the sense of thinking that it is potentially sound tech.
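As a toy illustration of how quickly such a conjunction shrinks (every component probability below is invented for the example, not anyone's actual estimate):

    # Made-up probabilities for the chain of conditions described above.
    steps = {
        "cryonics is workable in principle":              0.8,
        "current preservation methods are adequate":      0.5,
        "body stays preserved until revival is possible": 0.5,
        "FAI beats uFAI":                                 0.5,
        "post-singularity world honors the contracts":    0.8,
    }

    p = 1.0
    for step, prob in steps.items():
        p *= prob
        print(f"after '{step}': {p:.3f}")

    # Even with every step at even-or-better odds, the product is ~0.08.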

My interpretation of this result (with low confidence, as 'fluke' is also an excellent explanation) is that community veterans are better at working with probabilities based on complex conjunctions, and better at seeing the complexity of conjunctions based on written descriptions.

comment by wedrifid · 2011-12-05T04:00:03.461Z · LW(p) · GW(p)

These averages strike me as almost entirely useless! If only half of the people taking the survey are Less Wrong participants, then the extra noise will overwhelm any signal when the probabilities returned by the actual members are near either extreme. Using averaging of probabilities (as opposed to, say, log-odds) is dubious enough even when not throwing in a whole bunch of randoms!

(So thank you for providing the data!)
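A small sketch of the contrast being drawn here, comparing a plain mean of probabilities with an average taken in log-odds space (the sample answers are invented):

    import math

    def mean_prob(ps):
        return sum(ps) / len(ps)

    def mean_log_odds(ps):
        # Average in log-odds space, then map back to a probability.
        logits = [math.log(p / (1 - p)) for p in ps]
        avg = sum(logits) / len(logits)
        return 1 / (1 + math.exp(-avg))

    # Three informed answers near an extreme, plus two 50% "noise" answers:
    ps = [0.99, 0.99, 0.99, 0.5, 0.5]
    print(mean_prob(ps))      # ~0.79 -- the extreme signal is diluted
    print(mean_log_odds(ps))  # ~0.94 -- much closer to the informed answers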

comment by steven0461 · 2011-12-04T22:50:30.093Z · LW(p) · GW(p)

As with the last survey, it's amazing how casually many people assign probabilities like 1% and 99%. I can understand in a few cases, like the religion questions, and Fermi-based answers to the aliens in the galaxy question. But on the whole it looks like many survey takers are just failing the absolute basics: don't assign extreme probabilities without extreme justification.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2011-12-05T04:03:03.172Z · LW(p) · GW(p)

On the other hand, conjunctive bias exists. It's not hard to string together enough conjunctions that the probability of the statement should be in an extreme range.

Replies from: steven0461
comment by steven0461 · 2011-12-05T04:21:49.557Z · LW(p) · GW(p)

Does this describe any of the poll questions?

comment by Larks · 2011-12-04T22:48:47.700Z · LW(p) · GW(p)

Results from 2009.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-12-05T16:27:05.212Z · LW(p) · GW(p)

Are the questions for the 2009 survey available somewhere?

Replies from: Yvain
comment by Morendil · 2011-12-04T22:42:35.714Z · LW(p) · GW(p)

I am officially very surprised at how many that is. Also officially, poorly calibrated at both the 50% (no big deal) and the 90% (ouch, ouch, ouch) confidence levels.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2011-12-04T22:48:07.036Z · LW(p) · GW(p)

You're okay. The question asked about the number of responses at the time it was asked, and there were only 970 then :)

Replies from: Morendil
comment by Morendil · 2011-12-04T23:04:16.412Z · LW(p) · GW(p)

Whew!

comment by mindspillage · 2011-12-04T21:59:12.749Z · LW(p) · GW(p)

Are there any significant differences in gender or age (or anything else notable) between the group who chose to keep their responses private and the rest of the respondents?

comment by Jonathan_Graehl · 2011-12-06T21:00:43.118Z · LW(p) · GW(p)

At least one person was extremely confident in the year of publication of a different Principia Mathematica :) It's easy to forget about the chance that you misheard/misread someone when communicating beliefs.

comment by Jonathan_Graehl · 2011-12-06T21:08:01.275Z · LW(p) · GW(p)

Most people responding (75%) believe there's at least a 10% chance of a 90% culling of the human population sometime in the next 90 years.

If we're right, it's incumbent on us to consider sacrificing significant short-term pleasure and freedom to reduce this risk. I haven't heard any concrete proposals that seem worth pushing, but the proposing and evaluating needs to happen.

Replies from: ksvanhorn, dlthomas
comment by ksvanhorn · 2011-12-10T23:46:58.923Z · LW(p) · GW(p)

What makes you think that sacrificing freedom will reduce this risk, rather than increase it?

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2011-12-11T00:30:43.473Z · LW(p) · GW(p)

Obviously it depends on the specific sacrifice. I absolutely hope we don't create a climate where it's impossible to effectively argue against stupid signalling-we-care policies, or where magical thinking automatically credits [sacrifice] with [intended result].

comment by dlthomas · 2011-12-06T21:15:31.574Z · LW(p) · GW(p)

If we have any sense of particular measures we can take that will significantly reduce that probability.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2011-12-07T19:55:15.660Z · LW(p) · GW(p)

I agree that we shouldn't seek to impose or adopt measures that are ineffective. It's puzzling to me that I've thought so little about this. Probably: 1) it's hard to predict the future, and I don't like being wrong; 2) maybe my conclusions would impel me to do something, and doing something is hard; 3) people who do nothing but talk about how great things would be if they were in charge -- ick! (see also Chesterton's Fence).

But I don't have to gain power enough to save the world before it's worth thinking without reservation or aversion about what needs doing. (Chesterton again: "If a thing is worth doing, it is worth doing badly.").

Replies from: dlthomas
comment by dlthomas · 2011-12-07T21:37:54.210Z · LW(p) · GW(p)

An important point that I had intended the grandparent to point at, but on reflection I realize wasn't clear, is that not all of that 10% corresponds to a single type of cataclysm. Personally, I'd put much of the mass in "something we haven't foreseen."

comment by gwern · 2011-12-04T21:39:40.215Z · LW(p) · GW(p)

There is intelligent life elsewhere in the Universe: 69.4, (50, 90, 99)
There is intelligent life elsewhere in our galaxy: 41.2, (1, 30, 80)

You have to admit, that's pretty awful. There's only a 20% difference, is that so?

Replies from: SilasBarta, wedrifid, dlthomas, Jonathan_Graehl, Tyrrell_McAllister
comment by SilasBarta · 2011-12-05T19:13:27.871Z · LW(p) · GW(p)

Percentage point difference in belief probability isn't all that meaningful. 50% to 51% is a lot smaller confidence difference than 98% to 99%.

69.4% probability means 3.27 odds; 41.2% probability means 1.70 odds.

That means that, in the aggregate, survey takers find (3.27/1.70) = 1.924 -> 0.944 more bits of evidence for life somewhere in the universe, compared to somewhere in the galaxy.

Is that unreasonably big or unreasonably small?

EDIT: Oops, I can't convert properly. That should be 2.27 odds and 0.70 odds, an odds ratio of 3.24, or 1.70 more bits.
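For reference, a quick sketch of the conversion being done in this comment, reproducing the corrected numbers from the EDIT:

    import math

    def odds(p):
        return p / (1 - p)

    def bits(p, q):
        # log2 of the odds ratio: evidence, in bits, favoring p over q.
        return math.log2(odds(p) / odds(q))

    p_universe, p_galaxy = 0.694, 0.412
    print(odds(p_universe))            # ~2.27
    print(odds(p_galaxy))              # ~0.70
    print(bits(p_universe, p_galaxy))  # ~1.70 bits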

Replies from: Unnamed, gwern
comment by Unnamed · 2011-12-06T22:42:27.145Z · LW(p) · GW(p)

If we take the odds ratio for each individual respondent (instead of the aggregate), the median odds ratio is 10.1 -> 3.3 more bits of evidence for life in the universe, compared to somewhere in the galaxy. 25th percentile odds ratio: 2.7 -> 1.4 more bits; 75th percentile odds ratio: 75.7 -> 6.2 more bits. (This is all using the publicly available data set; looking at the aggregate in that data set I'm getting an odds ratio of 3.6 -> 1.8 more bits.)

People who believe in God/religion/the supernatural tend to give a lower odds ratio, but other than that the odds ratio doesn't seem to be associated with any of the other variables on the survey.

comment by gwern · 2011-12-05T19:27:38.113Z · LW(p) · GW(p)

I'm not comfortable with bit odds, especially in this context, so I dunno. How would you frame that in the opposite terms, for lack of existence?

Replies from: SilasBarta
comment by SilasBarta · 2011-12-05T20:26:18.125Z · LW(p) · GW(p)

That gives .44 odds non-existence in universe, 1.43 odds non-existence in galaxy, a ratio of 3.24, or 1.70 more bits of evidence for no (non-human) life in the galaxy compared to the universe in general.

And I forget why those two answers are allowed to be different...

EDIT: I made an error in the first calculation; as I suspected, the values are symmetric.

comment by wedrifid · 2011-12-05T04:09:05.404Z · LW(p) · GW(p)

You have to admit, that's pretty awful. There's only a 20% difference, is that so?

Fear not! The 28% difference in the averages is meaningless. The difference I see in that quote is (90-30), which isn't nearly so bad - and the "1" is also rather telling. More importantly, by contrasting the averages with the medians and quartiles we can get something of a picture of what the data looks like. Enough to make a guess as to how it would change if we cut the noise by sampling only, say, those with >= 200 reported karma.

(Note: I am at least as shocked by the current downvote of this comment as gwern is by his "20%", and for rather similar reasons.)

comment by dlthomas · 2011-12-06T20:53:44.096Z · LW(p) · GW(p)

Note that the top 25% put 99 or above for Universe. Of those, I would be surprised if there weren't a big chunk that put 100 (indicating 100 - epsilon, of course). This is not weighted appropriately. Likewise for the bottom 25% for Galaxy.

Basically, "If you hugely truncate the outside edges, the average probabilities wind up too close together" should be entirely unsurprising.

comment by Jonathan_Graehl · 2011-12-06T20:22:33.833Z · LW(p) · GW(p)

I had the same reaction. The only defense I can imagine is that the second proposition is "in our galaxy" and not "in a random galaxy" - before looking, we should expect to find more other intelligent species in ours, which we know at least doesn't rule out the possibility :)

I tried to guess how many our-galaxy intelligent-life-expectation equivalents exist in our universe. I personally find 50 (the 25% quartile) laughably low.

1:50 and (100-99):(100-80) are fairly extreme - just not extreme enough.

comment by Tyrrell_McAllister · 2011-12-04T22:41:26.196Z · LW(p) · GW(p)

There's only a 20% difference, is that so?

"20% difference" between what and what?

Replies from: gwern
comment by gwern · 2011-12-04T22:50:28.768Z · LW(p) · GW(p)

The point being that if there is intelligent life elsewhere in the universe and it hasn't spread (in order to maintain the Great Silence), then the odds of our 1 galaxy, out of the millions or billions known, being the host ought to be drastically smaller even if we try to appeal to reasons to think our galaxy special because of ourselves (eg. panspermia).

Replies from: Oligopsony, Desrtopa
comment by Oligopsony · 2011-12-05T01:12:32.271Z · LW(p) · GW(p)

Such a set of probabilities may be justified if you're very uncertain (as seems superficially reasonable) about the baseline probability of life arising in any given galaxy. So perhaps one might assign a ~40% chance that life is just incredibly likely, and most every galaxy has multiple instances of biogenesis, and a ~40% chance that life is just so astronomically (har har har) improbable that the Earth houses the only example in the universe.

This is almost certainly much less reasonable once you start thinking about the Great Filter, unless you think the Filter is civilizations just happily chilling on their home planet or thereabouts for eons, but then not everybody's read or thought about the Filter.
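A toy version of that mixture, with invented numbers, shows how it can keep the galaxy and universe probabilities surprisingly close together:

    # (prior weight, P(other life in our galaxy | h), P(other life in universe | h))
    hypotheses = [
        (0.4, 0.99, 0.99),  # biogenesis is incredibly likely everywhere
        (0.4, 0.00, 0.00),  # Earth houses the only example in the universe
        (0.2, 0.10, 0.95),  # in between: rare per galaxy, common over billions
    ]

    p_galaxy = sum(w * g for w, g, _ in hypotheses)
    p_universe = sum(w * u for w, _, u in hypotheses)
    print(p_galaxy, p_universe)  # ~0.42 vs ~0.59: a modest gap, as in the survey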

Replies from: gwern
comment by gwern · 2011-12-05T04:31:38.487Z · LW(p) · GW(p)

I was kind of hoping most LWers at least had heard of the Great Silence/Fermi controversy, though.

Replies from: NancyLebovitz, wedrifid
comment by NancyLebovitz · 2011-12-05T16:17:14.851Z · LW(p) · GW(p)

Maybe there should be a question or two about the Fermi paradox.

comment by wedrifid · 2011-12-05T04:40:14.829Z · LW(p) · GW(p)

The bigger problem to me seems that both the numbers (galaxy and universe) are way too high. It seems like it should be more in the range of "meta-uncertainty + epsilon" for both answers. Maybe "epsilon * lots" for the universe one but even that should be lower than the uncertainty component.

comment by Desrtopa · 2011-12-05T14:29:48.024Z · LW(p) · GW(p)

If the strong filter is propagation through space, then for rates which people could plausibly assign to the rate of occurrence of intelligent life, the probabilities could be near identical.

What are the odds that a randomly selected population of 10,000 has any left-handed people? What are the odds that an entire country does?
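The arithmetic behind the analogy, as a one-line sketch:

    # P(at least one instance of a rare trait) = 1 - (1 - p)^n.
    p = 0.10  # roughly the frequency of left-handedness

    for n in (10_000, 100_000_000):  # a town versus a whole country
        print(n, 1 - (1 - p) ** n)

    # Both come out indistinguishable from 1: once the trait is common
    # enough, scaling the population up further barely changes the answer.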

Replies from: Nornagest
comment by Nornagest · 2011-12-05T18:02:53.356Z · LW(p) · GW(p)

Ditto if the strong filter is technological civilization (which strikes me as unlikely, given the anthropological record, but it is one of the Drake terms). If there are ten thousand intelligent species in the galaxy but we're the only one advanced enough to be emitting on radio wavelengths, we'd never hear about any of the others.

comment by FiftyTwo · 2011-12-05T22:18:27.880Z · LW(p) · GW(p)

Older people were less likely to believe in transhumanist claims,

This seems to contradict the hypothesis that people's belief in the plausibility of immortality is linked to their own nearness to/fear of death. Were there any correlations with the expected singularity date?

Relevant SMBC (summary: futurists' predicted date for the discovery of immortality tends to fall slightly before the end of their own expected lifespan)

comment by [deleted] · 2011-12-05T20:51:37.942Z · LW(p) · GW(p)

2009:

  • 45% libertarianism
  • 38.4% liberalism
  • 12.3% socialism
  • 4.3% (6) conservativism
  • "not one person willing to own up to being a commie."

2011:

  • liberalism 34.5% (376)
  • libertarianism 32.3% (352)
  • socialism 26.6% (290)
  • conservatism 2.8% (30)
  • communism 0.5% (5)

I generally expect LW to grow less metacontrarian on politics the larger it gets, so this change didn't surprise me. An alternative explanation (and, now that I think of it, a more likely one) is that the starting core group of LWers wasn't just more metacontrarian than usual, but probably also more libertarian in general.

Replies from: taryneast, Nornagest, army1987
comment by taryneast · 2011-12-06T19:31:46.692Z · LW(p) · GW(p)

And the large increase in population seems to include a large portion of students... who, in my experience, often have a higher-than-average portion of socialist leanings.

comment by Nornagest · 2011-12-06T19:51:59.185Z · LW(p) · GW(p)

The relative proportions of liberalism, libertarianism, and conservatism haven't changed much, and I don't think we can say much about five new communists; by far the most significant change appears to be the doubled proportion of socialists. So this doesn't look like a general loss of metacontrarianism to me.

I'm not sure how to account for that change, though. The simplest explanation seems to be that LW's natural demographic turns out to include a bunch of left-contrarian groups once it's spread out sufficiently from OB's relatively libertarian cluster, but I'd also say that socialism's gotten significantly more mainstream-respectable in the last couple of years; I don't think that could fully account for the doubling, but it might play a role.

comment by A1987dM (army1987) · 2011-12-07T15:13:27.104Z · LW(p) · GW(p)

What were the labels in the 2009 surveys, exactly? I am a libertarian socialist, and in the 2011 survey I voted “socialism” because the examples made clear that the American (capitalist) meaning of libertarianism was intended, but if the options had been simply labelled “socialism”, “libertarianism” etc. with no example I would have voted the latter. If there are many other libertarian socialists around, this might explain much of the difference between the 2009 and 2011 results.

comment by Oscar_Cunningham · 2011-12-04T19:43:43.614Z · LW(p) · GW(p)

There were a few significant demographics differences here. Women tended to be more skeptical of the extreme transhumanist claims like cryonics and antiagathics (for example, men thought the current generation had a 24.7% chance of seeing someone live to 1000 years; women thought there was only a 9.2% chance). Older people were less likely to believe in transhumanist claims, a little less likely to believe in anthropogenic global warming, and more likely to believe in aliens living in our galaxy.

This bit is interesting. If our age and gender affect our beliefs, then at least some of us are doing it wrong. Update accordingly. I'm young and male, so I should give less credence to global warming and more credence to nearby aliens.

Replies from: None
comment by [deleted] · 2011-12-04T20:11:19.810Z · LW(p) · GW(p)

You have that backwards. If you're young and male, you should suspect that part of your confidence in global warming and lack of aliens is due to your demographics, and therefore update away from global warming and toward aliens.

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2011-12-04T20:38:34.649Z · LW(p) · GW(p)

Thanks! Fixed.

comment by [deleted] · 2011-12-05T23:38:28.074Z · LW(p) · GW(p)

(9.9%) were atheist and spiritual

I thought you meant spiritual as in "Find something more important than you are and dedicate your life to it." Did I misinterpret?

Replies from: taryneast, Polymeron, scav
comment by taryneast · 2011-12-06T19:35:39.283Z · LW(p) · GW(p)

If an interpretation wasn't given, then you were free to make up whatever meant something to you. To contrast with yours, I interpreted spiritualism in this sense to match "non-theistic spiritualism", e.g. nature-spirits, transcendental meditation, wish-magic and the like.

comment by Polymeron · 2011-12-11T11:52:32.133Z · LW(p) · GW(p)

It seems to me that a reasonable improvement for the next survey would be to lower the ambiguity of these categories.

comment by scav · 2011-12-06T12:47:26.323Z · LW(p) · GW(p)

I think you are entitled to make up your own interpretation of a question like that :) Yours is a reasonable one IMO.

comment by Armok_GoB · 2011-12-04T20:17:56.917Z · LW(p) · GW(p)

This made my trust in the community and my judgement of its average quality go down a LOT, and my estimate of my own value to the community, SIAI, and the world in general go up a LOT.

Replies from: Emile
comment by Emile · 2011-12-04T20:27:53.446Z · LW(p) · GW(p)

Which parts, specifically?

(it didn't have an effect like that on me, I didn't see that many surprising things)

Replies from: Armok_GoB
comment by Armok_GoB · 2011-12-04T23:45:58.029Z · LW(p) · GW(p)

I expected almost everyone to agree with Eliezer on most important things, to have been here for a long time, to have read all the sequences, to spend lots of time here... In short, to be like the top posters seem to be (and even with them the halo effect might be involved), except with lower IQ and/or writing skill.

Replies from: XiXiDu, Kaj_Sotala, None, selylindi
comment by XiXiDu · 2011-12-05T09:46:13.219Z · LW(p) · GW(p)

This made my trust in the community and my judgement of its average quality go down a LOT...

I expected almost everyone to agree with Eliezer on most important things...

Alicorn (top-poster) doesn't agree with Eliezer about ethics. PhilGoetz (top-poster) doesn't agree with Eliezer. Wei_Dai (top-poster) doesn't agree with Eliezer on AI issues. wedrifid (top-poster) doesn't agree with Eliezer on CEV and the interpretation of some game and decision theoretic thought experiments.

I am pretty sure Yvain doesn't agree with Eliezer on quite a few things too (too lazy to look it up now).

Generally there are a lot of top-notch people who don't agree with Eliezer. Robin Hanson, for example. But also others who have read all of the Sequences, like Holden Karnofsky from GiveWell, John Baez, or Katja Grace, who has been a visiting fellow.

But even Rolf Nelson (a major donor and well-read Bayesian) disagrees about the Amanda Knox trial. Or take Peter Thiel (SI's top donor) who thinks that the Seasteading Institute deserves more money than the Singularity Institute.

Replies from: Armok_GoB, wallowinmaya, beoShaffer
comment by Armok_GoB · 2011-12-05T14:49:18.663Z · LW(p) · GW(p)

I am extremely surprised by this, and very confused. This is strange because I technically knew each of those individual examples... I'm not sure what's going on, but I'm sure that whatever it is it's my fault and extremely unflattering to my ability as a rationalist.

How am I supposed to follow my consensus-trusting heuristics when no consensus exists? I'm too lazy to form my own opinions! :p

Replies from: NancyLebovitz, satt
comment by NancyLebovitz · 2011-12-05T16:07:30.574Z · LW(p) · GW(p)

I just wait, especially considering that the question of which interpretation of QM is correct doesn't have urgent practical consequences.

Replies from: MatthewBaker
comment by MatthewBaker · 2011-12-05T16:28:00.087Z · LW(p) · GW(p)

We just learned that neutrinos might be accelerated faster than light in certain circumstances. While this result doesn't give me too much pause, it certainly made me think about the possible practical consequences of successfully understanding quantum mechanics.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2011-12-05T16:32:30.366Z · LW(p) · GW(p)

Fair enough. A deeper understanding of quantum mechanics would probably have huge practical consequences.

It isn't obvious to me that figuring out whether the MWI is right is an especially good way to improve understanding of QM. My impression from LW is that MWI is important here for looking at ethical consequences.

Replies from: MatthewBaker
comment by MatthewBaker · 2011-12-05T16:34:58.072Z · LW(p) · GW(p)

I share that impression :) Plus it's very fun to think about Everett branches and acausal trade when I pretend we would have a chance against a truly Strong AI in a box.

comment by satt · 2011-12-06T03:08:09.113Z · LW(p) · GW(p)

This is strange because I technically knew each of those individual examples... I'm not sure what's going on,

Sounds like plain old accidental compartmentalization. You didn't join the dots until someone else pointed out they made a line. (Admittedly this is just a description of your surprise and not an explanation, but hopefully slapping a familiar label on it makes it less opaque.)

comment by David Althaus (wallowinmaya) · 2011-12-05T12:26:38.370Z · LW(p) · GW(p)

Holden Karnofsky has read all of the Sequences?

Replies from: XiXiDu
comment by XiXiDu · 2011-12-05T18:39:35.988Z · LW(p) · GW(p)

Holden Karnofsky has read all of the Sequences?

I wrote him an email to make sure. Here is his reply:

I've read a lot of the sequences. Probably the bulk of them. Possibly all of them. I've also looked pretty actively for SIAI-related content directly addressing the concerns I've outlined (including speaking to different people connected with SIAI).

comment by beoShaffer · 2011-12-05T20:04:27.711Z · LW(p) · GW(p)

take Peter Thiel (SI's top donor) who thinks that the Seasteading Institute deserves more money than the Singularity Institute.

IIRC Peter Thiel can't give SIAI more than he currently does without causing some form of tax difficulties, and it has been implied that he would give significantly more if this were not the case.

Replies from: gwern
comment by gwern · 2011-12-05T20:25:24.888Z · LW(p) · GW(p)

Right. I remember the fundraising appeals about this: if Thiel donates too much, SIAI begins to fail the 501(c)(3) requirement that it "receives a substantial part of its income, directly or indirectly, from the general public or from the government. The public support must be fairly broad, not limited to a few individuals or families."

comment by Kaj_Sotala · 2011-12-05T10:33:19.662Z · LW(p) · GW(p)

I expected almost everyone to agree with Eliezer on most important things

That would have made my trust in the community go down a lot. Echo chambers rarely produce good results.

Replies from: komponisto
comment by komponisto · 2011-12-05T11:02:26.504Z · LW(p) · GW(p)

Surely it depends on which questions are meant by "important things".

Replies from: Kaj_Sotala, Armok_GoB
comment by Kaj_Sotala · 2011-12-05T12:41:53.923Z · LW(p) · GW(p)

Granted.

comment by Armok_GoB · 2011-12-05T14:41:20.737Z · LW(p) · GW(p)

The most salient one would be religion.

Replies from: Nick_Roy
comment by Nick_Roy · 2011-12-05T14:50:11.296Z · LW(p) · GW(p)

What surprised you about the survey's results regarding religion?

Replies from: Armok_GoB
comment by Armok_GoB · 2011-12-05T15:55:37.277Z · LW(p) · GW(p)

That there are theists around?

Replies from: Nick_Roy, thomblake
comment by Nick_Roy · 2011-12-05T16:13:07.091Z · LW(p) · GW(p)

Okay, but only 3.5%. I wonder how many are newbies who haven't read many of the sequences yet, and I wonder how many are simulists.

comment by thomblake · 2011-12-05T17:02:45.642Z · LW(p) · GW(p)

Since you seem to have a sense of the community, your surprise surprises me. Will_Newsome's contrarian defense of theism springs to mind immediately, and I know we have several people who are theists or were when they joined LW.

Also, many people could have answered the survey who are new here.

Replies from: TheOtherDave, Armok_GoB
comment by TheOtherDave · 2011-12-05T17:18:40.595Z · LW(p) · GW(p)

It's also fairly unlikely that all the theists and quasitheists on LW have outed themselves as such.
Nor is there any particular reason they should.

comment by Armok_GoB · 2011-12-05T20:53:33.919Z · LW(p) · GW(p)

I assumed those were rare exceptions.

comment by [deleted] · 2011-12-05T03:15:19.522Z · LW(p) · GW(p)

I expected almost everyone to agree with Eliezer on most important things

Why? Don't you encounter enough contrarians on LW?

Replies from: gwern
comment by gwern · 2011-12-05T04:33:45.489Z · LW(p) · GW(p)

You may think you encounter a lot of contrarians on LW, but I disagree - we're all sheep.

But seriously, look at that MWI poll result. How many LWers have ever seriously looked at all the competing theories, or could even name many alternatives? ('Collapse, MWI, uh...' - much less could they discuss why they dislike pilot waves or whatever.) I suspect far fewer could do so than plumped for MWI - because Eliezer is such a fan...

Replies from: Armok_GoB, None
comment by Armok_GoB · 2011-12-05T14:52:10.087Z · LW(p) · GW(p)

I know I am a sheep and hero worshipper, and then the typical mind fallacy happened.

comment by [deleted] · 2011-12-05T06:10:39.972Z · LW(p) · GW(p)

Heh. The original draft of my comment above included just this example.

To be explicit, I don't believe that anyone with little prior knowledge about QM should update toward MWI by any significant amount after reading the QM sequence.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-12-05T15:06:17.283Z · LW(p) · GW(p)

I disagree. I updated significantly in favour of MWI just because the QM sequence helped me introspect and perceive that much of my prior prejudice against MWI consisted of irrational biases such as "I don't think I would like it if MWI was true. Plus I find it a worn-out trope in science fiction. Also it feels like we live in a single world." or misapplications of rational ideas like "Wouldn't Occam's razor favor a single world?"

I still don't know much of the mathematics underpinning QM. I updated in favour of MWI simply by demolishing faulty arguments I had against it.

Replies from: None
comment by [deleted] · 2011-12-05T15:46:21.288Z · LW(p) · GW(p)

I updated in favour of MWI simply by demolishing faulty arguments I had against it.

It seems like doing this would only restore you to a non-informative prior, which still doesn't cohere with the survey result. What positive evidence is there in the QM sequence for MWI?

Replies from: Luke_A_Somers, ArisKatsaris
comment by Luke_A_Somers · 2011-12-05T16:31:08.937Z · LW(p) · GW(p)

The positive evidence for MWI is that it's already there inside quantum mechanics until you change quantum mechanics in some specific way to get rid of it!

Replies from: kilobug
comment by kilobug · 2011-12-05T16:36:23.892Z · LW(p) · GW(p)

MWI, as beautiful as it is, won't fully convince me until it can explain the Born probability. Other interpretations don't do any better - so it's not a point "against" MWI - but it's still an additional rule you need to make the "jump" between QM and what we actually see. As long as you need that additional rule, I have a deep feeling we haven't reached the bottom.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2011-12-06T06:13:18.595Z · LW(p) · GW(p)

I see two ways of resolving this. Both are valid, as far as I can tell. The first assumes nothing, but may not satisfy. The second only assumes that we even expect the theory to speak of probability.

1

Well, QM says what's real. It's out there. There are many ways of interpreting this thing. Among those ways is the Born Rule. If you take that way, you may notice our world, and in turn, us. If you don't look at it that way, you won't notice us, much as if you use a computer implementing a GAI as a cup holder. Yet, that interpretation can be made, and moreover it's compact and yields a lot.

So, since that interpretation can be made, apply the generalized anti-zombie principle - if it acts like a sapient being, it's a sapient being... And it'll perceive the universe only under interpretations under which it is a sapient being. So the Born Rule isn't a general property of the universe. It's a property of our viewpoint.

2

Just from decoherence, without bringing in Born's rule, we get the notion that sections of configuration space are splitting up and never coming back together again. If we're willing to take from that the notion that this splitting should map onto probabilities, then there is exactly one way of mapping from relative weights of splits onto probabilities, such that the usual laws of probability apply correctly. In particular:

1) probabilities are not always equal to zero.

2) the probability of a decoherent branch doesn't change after its initial decoherence (if it could change, it wouldn't be decoherent), and the rules are the same everywhere, and in every direction, and at every speed, and so on.

The simplest way to achieve this is to go with 'unitary operations don't shift probabilities, just change their orientation in Hilbert Space'. If we require that the probability rule be simpler than the physical theory it's to apply to (i.e. quantum mechanics itself), it's the only one, since all of the other candidates effectively take QM, nullify it, and replace it with something else. Being able to freely apply Unitary operations implies that the probability is a function only of component amplitude, not orientation in Hilbert Space.

3) given exclusive possibilities A and B, P(A or B) = P(A) + P(B).

These three are sufficient.

Given a labeling b on states, we have | psi > = sum(b) [ A(b) |b>]

Define for brevity the capital letters J, K, and M as the vector component of |psi> in a particular dimension j, k, or m. For example, K = A(k) | k >

It is possible (and natural, in the language of decoherence) to choose the labeling b such that each decoherent branch gets exactly one dimension (at some particular moment - it will propagate into some other dimension later, even before it decoheres again). Now, consider two recently decohered components, K' and M'. By running time backwards to before the split, we get the original K and M. Back at that time, we would have seen this as a different, single coherent component, J = K + M.

P ( J ) = P ( K + M) must be equal to P ( K ) + P ( M )

This could have occurred in any dimension, so we make this requirement general.

So, consider instead the ways of projecting a vector J into two orthogonal vectors, K and M. As seen above, the probability of J must not be changed by this re-projection. Let theta be the angle between J and M.

K = sin(theta) A(j) | k >

M = cos(theta) A(j) | m >

By condition (2), P(x) is a function of amplitude, not the vectors, so we can simplify the P ( J ) statement to:

P( A(j) ) = P ( sin(theta) A(j) ) + P( cos(theta) A(j) )

This must be true as a function of theta, and for any A(j). The Pythagorean theorem shows the one way to achieve this:

P(x) = C x* x for some C.

Since the probabilities are not identically zero, we know that C is not zero.

This, you may note, is the Born Probability Rule.
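As a quick check, this rule does satisfy the additivity requirement above, since

    P(\sin\theta\, A) + P(\cos\theta\, A)
      = C \sin^2\theta\, |A|^2 + C \cos^2\theta\, |A|^2
      = C\, |A|^2
      = P(A).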

Replies from: GDC3
comment by GDC3 · 2012-03-31T02:19:28.126Z · LW(p) · GW(p)

1 and 2 together are pretty convincing to me. The intuition runs like this: it seems pretty hard to construct anything like an observer without probabilities, so there are only observers in as much as one is looking at the world according to the Born Rule view. So an easy anthropic argument says that we should not be surprised to find ourselves within that interpretation.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2012-04-23T14:49:55.311Z · LW(p) · GW(p)

it seems pretty hard to construct anything like an observer without probabilities, so there are only observers in as much as one is looking at the world according to the Born Rule view

Even better than that - there can be other ways of making observers. Ours happens to be one. It doesn't need to be the only one. We don't even need to stake the argument on that difficult problem being impossible.

comment by ArisKatsaris · 2011-12-05T16:02:41.645Z · LW(p) · GW(p)

It seems like doing this would only restore you to a non-informative prior,

I still had in my mind the arguments in favour of many-worlds, like "lots of scientists seem to take it seriously", and the basic argument that works for ever-increasing the size of reality, which is that the more reality there is out there for intelligence to evolve in, the greater the likelihood of intelligence evolving.

What positive evidence is there in the QM sequence for MWI?

Well, it mentions some things like "it's deterministic and local, like all other laws of physics seem to be". Does that count?

Replies from: prase
comment by prase · 2011-12-05T18:53:21.569Z · LW(p) · GW(p)

Its determinism is of a very peculiar kind, not like the determinism that other laws of physics seem to have.

comment by selylindi · 2011-12-05T19:53:11.573Z · LW(p) · GW(p)

Demographically, there is one huge cluster of Less Wrongers: 389 (42%) straight white (including Hispanics) atheist males (including FTM) under 48 who are in STEM. I don't actually know if that characterizes Eliezer.

It's slightly comforting to me to know that a majority of LWers are outside that cluster in one way or another.

comment by Unnamed · 2011-12-04T19:48:02.836Z · LW(p) · GW(p)

Could you make a copy of the survey (with the exact wordings of all the questions) available for download?

Replies from: Yvain, Vladimir_Nesov
comment by Scott Alexander (Yvain) · 2011-12-04T20:40:35.168Z · LW(p) · GW(p)

I've re-opened the survey at https://docs.google.com/spreadsheet/viewform?formkey=dHlYUVBYU0Q5MjNpMzJ5TWJESWtPb1E6MQ , but please don't send in any more responses.

comment by Vladimir_Nesov · 2011-12-04T20:39:43.312Z · LW(p) · GW(p)

(To clarify the need for making this happen: it seems that since the survey was closed, it's no longer possible to find the survey questions anywhere.)

comment by [deleted] · 2011-12-05T20:39:31.999Z · LW(p) · GW(p)

It would be neat if you posted a link to a downloadable spreadsheet like last time. I'd like to look at the data. If I happened to miss it via careless reading, sorry for bothering you.

Edit: Considering this is downvoted, I guess I must have missed it. I skimmed the post again and I'm just not seeing it; can someone please help with a link? :)

2nd Edit: Sorry missed it the first time!

Replies from: Emile
comment by Emile · 2011-12-05T22:19:36.698Z · LW(p) · GW(p)

Last word of the post.

Replies from: None
comment by [deleted] · 2011-12-05T22:24:33.076Z · LW(p) · GW(p)

Thanks!

comment by michaelsullivan · 2011-12-05T20:29:52.759Z · LW(p) · GW(p)

God (a supernatural creator of the universe) exists: 5.64, (0, 0, 1)

Some revealed religion is true: 3.40, (0, 0, .15)

This result is not exactly surprising to me, but it is odd by my reading of the questions. It may seem at first glance like a conjunction fallacy to rate the second question's probability much higher than the first (which I did). But in fact the god question, like the supernatural question, referred to a very specific thing ("ontologically basic mental entities"), while the "some revealed religion is more or less true" question was utterly vague about how to define either revealed religion or more or less true.

As I remarked in comments on the survey, depending on my assumptions about what those two things mean, my potential answers ranged from epsilon to 100-epsilon. A bit of clarity would be useful here.

Also, given the large number of hard atheists on LW, it might be interesting to look at finer-grained data for the 25+% of survey respondents who did not answer '0' for all three "religion" questions.

comment by MarkusRamikin · 2011-12-05T10:29:43.426Z · LW(p) · GW(p)

The other 72.3% of people who had to find Less Wrong the hard way.

Is it just me, or is there something not quite right about this as an English sentence?

Replies from: pedanterrific, army1987
comment by pedanterrific · 2011-12-05T10:33:46.578Z · LW(p) · GW(p)

Could be fixed by adding 'of'

Of the other 72.3% of people who had to find Less Wrong the hard way,

or removing 'who'

The other 72.3% of people had to find Less Wrong the hard way.

Replies from: MarkusRamikin
comment by MarkusRamikin · 2011-12-05T10:38:52.595Z · LW(p) · GW(p)

Right. For some reason the period instead of comma confused me much more than it should have.

comment by A1987dM (army1987) · 2011-12-05T18:48:15.382Z · LW(p) · GW(p)

Yeah, which is ‘the hard way’ supposed to be? :-)

comment by VincenzoLingley · 2012-08-09T07:48:49.540Z · LW(p) · GW(p)

For the next survey:

160 people wanted their responses kept private. They have been removed. The rest have been sorted by age to remove any information about the time they took the survey. I've converted what's left to a .xls file, and you can download it here.

Karma is sufficient to identify a lot of people. You could give ranges instead (making sure there are enough people in each range).

comment by A1987dM (army1987) · 2011-12-09T20:46:44.443Z · LW(p) · GW(p)

What is the last column of the .xls file about?

comment by XiXiDu · 2011-12-04T20:12:29.405Z · LW(p) · GW(p)

Of possible existential risks, the most feared was a bioengineered pandemic, which got 194 votes (17.8%) - a natural pandemic got 89 (8.2%), making pandemics the overwhelming leader.

This doesn't look very good from the point of view of the Singularity Institute. While 38.5% of all people have read at least 75% of the Sequences, only 16.5% think that unfriendly AI is the most worrisome existential risk.

Is the issue too hard to grasp for most people or has it so far been badly communicated by the Singularity Institute? Or is it simply the wisdom of crowds?

Replies from: TheOtherDave, army1987, steven0461, thomblake, Dorikka, kilobug, michaelsullivan
comment by TheOtherDave · 2011-12-04T20:42:44.653Z · LW(p) · GW(p)

The irony of this is that if, say, 83.5% of respondents instead thought UFAI was the most worrisome existential risk, that would likely be taken as evidence that the LW community was succumbing to groupthink.

Replies from: Sophronius
comment by Sophronius · 2011-12-04T20:57:25.820Z · LW(p) · GW(p)

My prior belief was that people on Less Wrong would overestimate the danger of unfriendly AI, due to it being part of the reason for Less Wrong's existence. That probability has decreased since seeing the results, but as I see no reason to believe the opposite would be the case, the effect should still be there.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-12-04T21:08:57.284Z · LW(p) · GW(p)

I don't quite understand your final clause. Are you saying that you still believe a significant number of people on LW overestimate the danger of UFAI, but that your confidence in that is lower than it was?

Replies from: Sophronius
comment by Sophronius · 2011-12-04T23:31:09.236Z · LW(p) · GW(p)

More or less. I meant that I now estimate a reduced but still non-zero probability of upwards bias, but only a negligible probability of a bias in the other direction. So the average expected upward bias is decreased but still positive. Thus I should adjust the probability of human extinction being due to unfriendly AI downwards. Of course, the possibility of Less Wrong over- or underestimating existential risk in general is another matter.

comment by A1987dM (army1987) · 2011-12-04T21:59:51.495Z · LW(p) · GW(p)

The question IIRC wasn't about the most worrisome, but about the most likely -- it is not inconsistent to assign to uFAI (say) 1000 times the disutility of nuclear war but only 0.5 times its probability.

(ETA: I'm assuming worrisomeness is defined as the product of probability times disutility, or a monotonic function thereof.)
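To make the distinction concrete, a toy calculation with made-up numbers (mine, purely illustrative):

    # uFAI assigned half the probability of nuclear war but 1000x the disutility
    p_nuclear, u_nuclear = 0.04, 1.0                # probability, disutility (arbitrary units)
    p_ufai, u_ufai = 0.5 * p_nuclear, 1000.0 * u_nuclear

    print(p_ufai < p_nuclear)                       # True: uFAI is the less likely risk
    print(p_ufai * u_ufai > p_nuclear * u_nuclear)  # True: yet the more worrisome one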

Replies from: Giles
comment by Giles · 2011-12-05T20:42:11.493Z · LW(p) · GW(p)

I think that worrisomeness should also factor in our ability to do anything about the problem.

If I'm selfish, then I don't particularly need to worry about global catastrophic risks that will kill (almost) everyone - I'd just die and there's nothing I can do about it. I'd worry more about risks that are survivable, since they might require some preparation.

If I'm altruistic then I don't particularly need to worry about risks that are inevitable, or where there is already well-funded and sane mitigation effort going on (since I'd have very little individual ability to make a difference to the probability). I might worry more about risks that have a lower expected disutility but where the mitigation effort is drastically underfunded.

(This is assuming real-world decision theory degenerates into something like CDT; if instead we adopt a more sophisticated decision theory and suppose there are enough other people in our reference class then "selfish" people would behave more like the "altruistic" people in the above paragraph).

Replies from: army1987
comment by A1987dM (army1987) · 2011-12-05T21:01:19.961Z · LW(p) · GW(p)

Well, if you're selfish you'd assign more or less the same utility to all states of the world in which you're dead (unless you believe in afterlife), and in any event you'd assign a higher probability to a particular risk given that “the mitigation effort is drastically underfunded” than given that “there is already well-funded and sane mitigation effort going on”, but you do have a point.

comment by steven0461 · 2011-12-04T21:34:30.155Z · LW(p) · GW(p)

The sequences aren't necessarily claiming UFAI is the single most worrisome risk, just a seriously worrisome risk.

comment by thomblake · 2011-12-05T15:55:19.264Z · LW(p) · GW(p)

Don't forget - even if unfriendly AI wasn't a major existential risk, Friendly AI is still potentially the best way to combat other existential risks.

Replies from: kilobug, cousin_it
comment by kilobug · 2011-12-05T16:24:56.076Z · LW(p) · GW(p)

It's the best long-term way, probably. But if you estimate it'll take 50 years to get FAI and that some of the existential risks have a significant probability of happening in 10 or 20 years, then you'd better try to address them without requiring FAI - or you're likely to never reach the FAI stage.

Among 7 billion humans, it's sane to have some individuals focus on FAI now, since it's a hard problem, so we have to start early; but it's also normal for not all of us to focus on FAI, and to also work on other ways to mitigate the existential risks that we estimate are likely to occur before FAI/uFAI.

comment by cousin_it · 2011-12-05T15:59:57.118Z · LW(p) · GW(p)

How do you imagine a hypothetical world where uFAI is not dangerous enough to kill us, but FAI is powerful enough to save us?

Replies from: TheOtherDave, thomblake
comment by TheOtherDave · 2011-12-05T16:30:28.727Z · LW(p) · GW(p)

Hypothetically suppose the following (throughout, assume "AI" stands for significantly superhuman artificial general intelligence):

1) if we fail to develop AI before 2100, various non-AI-related problems kill us all in 2100.
2) if we ever develop unFriendly AI before Friendly AI, UFAI kills us.
3) if we develop FAI before UFAI and before 2100, FAI saves us.
4) FAI isn't particularly harder to build than UFAI is.

Given those premises, it's true that UFAI isn't a major existential risk, in that even if we do nothing about it, UFAI won't kill us. But it's also true that FAI is the best (indeed, the only) way to save us.

Are those premises internally contradictory in some way I'm not seeing?

Replies from: cousin_it
comment by cousin_it · 2011-12-05T16:33:29.891Z · LW(p) · GW(p)

No, you're right. thomblake makes the same point. I just wasn't thinking carefully enough. Thanks!

comment by thomblake · 2011-12-05T16:11:51.556Z · LW(p) · GW(p)

I don't. Just imagine a hypothetical world where lots of other things are much more certain to kill us much sooner, if we don't get FAI to solve them soon.

comment by Dorikka · 2011-12-05T01:00:06.930Z · LW(p) · GW(p)

More that I think there's a significant chance that we're going to get blown up by nukes or a bioweapon before then.

comment by kilobug · 2011-12-04T22:32:27.577Z · LW(p) · GW(p)

For me the issue is with "the most". Unfriendly AI is a worrisome existential risk, but it still relies on a technological breakthrough whose timing we can't clearly estimate, while a bioengineered pandemic is something that may very well be possible in the short-term future.

That doesn't mean SIAI isn't doing an important job - Friendly AI is a hard task. If you only start to try to solve a hard problem when you're about to die if you don't, well, it's too late. So it's great SIAI people are here to hack away at the edges of the problem now.

comment by michaelsullivan · 2011-12-05T20:09:28.323Z · LW(p) · GW(p)

The phrasing of the question was quite specific: "Which disaster do you think is most likely to wipe out greater than 90% of humanity before the year 2100?"

If I estimate a very small probability of either FAI or UFAI before 2100, then I'm not likely to choose UFAI as "most likely to wipe out 90% of humanity before 2100" if I think there's a solid chance for something else to do so.

Consider that I interpreted the singularity question to mean "if you think there is any real chance of a singularity, then in the case that the singularity happens, give the year by which you think it has 50% probability", and answered with 2350, while thinking that the singularity had less than a 50% probability of happening at all.

Yes, Yvain did say to leave it blank if you don't think there will be a singularity. Given the huge uncertainty involved in anyone's prediction of the singularity or any question related to it, I took "don't believe it will happen" to mean that my estimated chance was low enough to not be worth reasoning about the case where it does happen, rather than that my estimate was below 50%.

comment by ataftoti · 2011-12-04T20:50:52.946Z · LW(p) · GW(p)

801 people (73.5%) were atheist and not spiritual, 108 (9.9%) were atheist and spiritual

I'm curious as to how people interpreted this. Does the latter mean that one believes in the supernatural but without a god figure, e.g. Buddhism or New Age? This question looked confusing to me at first glance.

People who believed in high existential risk were more likely to believe in global warming, more likely to believe they had a higher IQ than average, and more likely to believe in aliens (I found that same result last time, and it puzzled me then too.)

Why does it puzzle you?

Replies from: Jayson_Virissimo, pedanterrific, TheOtherDave
comment by Jayson_Virissimo · 2011-12-05T11:21:53.920Z · LW(p) · GW(p)

I'm curious as to how people interpreted this. Does the latter mean that one believes in the supernatural but without a god figure, e.g. Buddhism or New Age? This question looked confusing to me at first glance.

I would have expected the opposite given Yvain's definition of "supernatural". The existence of an agent (or agents) that created the universe seems much more likely than the existence of ontologically basic mental entities. After all, one man's lead software designer of the simulation is another man's god.

Replies from: kilobug
comment by kilobug · 2011-12-05T12:36:01.771Z · LW(p) · GW(p)

Here we reach a usual definitional problem about "god". Is "god" just someone who created the universe, but with his own limits, or is he omnipotent, omniscient, eternal, and perfect, as in monotheistic religions? The lead software designer of the simulation would be the first, but very likely not the second. Probably best to just taboo the word "god" in that context.

comment by pedanterrific · 2011-12-05T09:23:30.740Z · LW(p) · GW(p)

People who believed in high existential risk were more likely to believe in global warming, more likely to believe they had a higher IQ than average, and more likely to believe in aliens (I found that same result last time, and it puzzled me then too.)

Why does it puzzle you?

I assume because higher existential risk would seem to generalize to lower chances of aliens existing (because they had the same or similar existential risk as us).

Replies from: Dreaded_Anomaly
comment by Dreaded_Anomaly · 2011-12-05T13:29:01.591Z · LW(p) · GW(p)

A more subtle interpretation, and one that I expect accounts for at least some of the people in this category, is that high existential risk makes it more likely that relatively nearby aliens exist but will never reach the point where they can contact us.

comment by TheOtherDave · 2011-12-04T21:37:09.478Z · LW(p) · GW(p)

If I remember correctly, the terms were defined in the survey itself, such that "spiritual and atheist" was something like believing in ontologically basic mental entities but not believing in a God that met that description. I didn't find the question confusing, but I did find it only peripherally related to what most people mean by either term. That said, it is a standard LW unpacking of those terms.

comment by TraderJoe · 2012-08-14T08:21:43.584Z · LW(p) · GW(p)

I'd be interested in knowing what percentage of LWers attended a private high school [or equivalent in country of origin].

comment by RobertLumley · 2011-12-05T21:36:35.280Z · LW(p) · GW(p)

so this supports taking them as a direct line to prevailing rationalist opinion rather than the contingent opinions of one random group.

http://lesswrong.com/lw/82s/dont_call_yourself_a_rationalist/

Replies from: DavidAgain
comment by DavidAgain · 2011-12-05T22:35:23.347Z · LW(p) · GW(p)

More fundamentally than self-labelling, that's an utterly false dilemma. It helps show that the results weren't a totally random 'people on that site then': they show SOMETHING. But what they show must be much more open to debate. To 'rationalist', you can add:

1) Has been exposed to LessWrong (sequences and community)
2) English-speaking (unless there were translations?)
3) Minded to take long online surveys, including at the least possibilities:
  3a) Egotistical enough to think that your survey results must be included
  3b) Dedicated enough to the LessWrong community to wish to contribute
  3c) Generally publicly-minded
  3d) Doesn't have enough to do
4) Likely to overestimate one's own IQ

It seems particularly odd to suggest these results are representative of rationalists while recognising both that the proportion of women has tripled since the last survey (and I don't think we're very close to working out what the true proportion is) and that men and women tend to have significantly different attitudes.

The 'direct line to prevailing rationalist opinion' is also straight after what I would guess is most skewed by point (1) above. I'd be shocked to see such high scores for Many Worlds, living in a simulation or cryonics amongst rationalists outside LessWrong.

Finally, could the last set of results itself have had an effect? The most likely effect would be in confirming the in-group opinions, leading to 'evaporative cooling' (if I may!). It seems less likely, but people could have directly calibrated too: I'd be interested in how much that page was accessed ahead of people taking this year's survey.

If 'rationalist' was used just to mean 'LessWrongian' then please ignore the above - and take Robert Lumley's advice!

comment by J_Taylor · 2011-12-04T20:35:47.812Z · LW(p) · GW(p)

I have no idea if this is universal. (Probably not.) However, in my area, using the term "blacks" in certain social circles is not considered proper vocabulary.

I don't have any huge problem with using the term. However, using it may be bad signalling and leaves Less Wrong vulnerable to pattern-matching.

Replies from: Yvain, Jack
comment by Scott Alexander (Yvain) · 2011-12-04T21:15:54.547Z · LW(p) · GW(p)

What would you prefer? "Blacks" is the way I've seen it used in medical and psychological journal articles.

Replies from: J_Taylor, wedrifid
comment by J_Taylor · 2011-12-04T21:23:06.806Z · LW(p) · GW(p)

Journals use "blacks"? I had no idea it was used in technical writing. In some of my social circles, it just happens to be considered, at best, grandma-talk.

Generally, within these circles, "black people" is used.

However, I have no real preference regarding this matter.

Replies from: nazgulnarsil
comment by nazgulnarsil · 2011-12-06T10:05:12.228Z · LW(p) · GW(p)

as opposed to black fish.

comment by wedrifid · 2011-12-05T09:36:13.191Z · LW(p) · GW(p)

What would you prefer? "Blacks" is the way I've seen it used in medical and psychological journal articles.

Seriously? That seems a little cavalier of them. The medical and psychological influence of race doesn't have all that much to do with skin color and a lot more to do with genetic population. That makes the term ambiguous to the point of uselessness - unless "blacks" is assumed to mean, say, just those of African ancestry, in which case they could be writing "African".

comment by Jack · 2011-12-04T21:01:58.870Z · LW(p) · GW(p)

What is your area?

Replies from: J_Taylor
comment by J_Taylor · 2011-12-04T21:10:09.703Z · LW(p) · GW(p)

Southern United States.

Replies from: Jack, None
comment by Jack · 2011-12-04T21:17:58.466Z · LW(p) · GW(p)

The plural can look weird but as long as it doesn't come after a definite article, it's the standard term and I've never met anyone who was offended by it. The usual politically correct substitute, African-American, is offensive in an international context.

Replies from: J_Taylor
comment by J_Taylor · 2011-12-04T21:28:51.369Z · LW(p) · GW(p)

I have never met any black person who was offended by it. I have met some white people who will take you less seriously if you use the term.

However, if it is the standard term then it is the standard term. I certainly would not replace it with African-American.

Replies from: fubarobfusco
comment by fubarobfusco · 2011-12-05T04:41:09.513Z · LW(p) · GW(p)

Moreover, there are plenty of black people in the world who are not African-American.

There's an infamous video from a few years back in which an American interviewer makes this mistake when talking to an Olympic athlete of British nationality and African ancestry. It becomes increasingly clear that the interviewer is merely doing a mental substitution of "African-American" for "black" without actually thinking about what the former term means ...

Replies from: wedrifid, J_Taylor
comment by wedrifid · 2011-12-05T09:38:47.681Z · LW(p) · GW(p)

Moreover, there are plenty of black people in the world who are not African-American.

Come to think of it, we could put the emphasis on either of the terms.

comment by J_Taylor · 2011-12-05T04:49:53.720Z · LW(p) · GW(p)

I do not use "African-American" to refer to non-Americans.

Replies from: army1987
comment by A1987dM (army1987) · 2011-12-05T19:00:57.908Z · LW(p) · GW(p)

I even feel weird calling Obama an African-American (though I still do it, because he self-identifies as one). In my mental lexicon it usually specifically refers to descendants of the African slaves taken to the Americas a long time ago, whereas Obama's parents are a White American of English ancestry and a Kenyan who hadn't been to the US until college.

Replies from: anonymous259
comment by anonymous259 · 2011-12-05T19:39:46.640Z · LW(p) · GW(p)

Ironically, Obama is exactly the kind of person to whom that term should refer, if it means anything at all. Descendants of African slaves taken to the Americas a long time ago should have another term, such as "American blacks".

Despite his lack of membership in it, Obama self-identifies with the latter group for obvious political reasons; after all, "children of foreign exchange students" is not an important constituency.

comment by [deleted] · 2011-12-05T05:03:26.351Z · LW(p) · GW(p)

For what it's worth, I'm also from the southern US, and I also have the impression that "blacks" is slightly cringey and "black people" is preferred.

Replies from: J_Taylor
comment by J_Taylor · 2011-12-07T05:03:56.545Z · LW(p) · GW(p)

I am glad that my case is not too aberrant.

comment by Bruno_Coelho · 2011-12-05T11:45:05.857Z · LW(p) · GW(p)

26.6% are socialists? Now I understand why: the majority don't read the sequences.

Replies from: HonoreDB, ArisKatsaris, None, Kevin
comment by HonoreDB · 2011-12-05T15:09:17.147Z · LW(p) · GW(p)

You have just made a falsifiable prediction! If socialism does not turn out to be negatively correlated with reading the sequences, will you rethink your political views?

Replies from: selylindi, Oligopsony
comment by selylindi · 2011-12-05T20:08:39.136Z · LW(p) · GW(p)

Socialists averaged having read 47% of the sequences. If you include communists it goes down very slightly.

Non-socialists averaged having read 52% of the sequences.

The difference is not statistically significant at the customary alpha=0.05 level, but it's very close.

comment by Oligopsony · 2011-12-05T19:49:24.118Z · LW(p) · GW(p)

As it turns out, socialism does negatively correlate with reading the sequences: coding "less than 25%" as 12.5% and "almost all" as 100%, the naive correlation appears to be -0.06; on average, socialists claim to have read 47% of the sequences, compared to the 51% claim of nonsocialists, a difference of about .12sd. This is significant at the .1 level. Controlling for whether one has been here since the OB days, we go down to about a 2 percentage point difference in sequence completion, about .06sd within each "cohort" and not at all statistically significant.
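For anyone who wants to redo this on the spreadsheet, a sketch of the computation in Python on toy data (only the 12.5% and 100% codings come from the comment above; the intermediate bin codings and the sample rows are invented for illustration, and the real survey rows would be substituted in):

    import numpy as np

    coding = {"none": 0.0, "less than 25%": 0.125, "about 25%": 0.25,
              "about 50%": 0.5, "about 75%": 0.75, "almost all": 1.0}

    # toy responses: (political label, sequences-read bin)
    rows = [("socialist", "about 25%"), ("libertarian", "about 75%"),
            ("liberal", "about 50%"), ("socialist", "almost all"),
            ("liberal", "less than 25%"), ("libertarian", "about 50%")]

    is_socialist = np.array([1.0 if label == "socialist" else 0.0 for label, _ in rows])
    seq_read = np.array([coding[b] for _, b in rows])

    r = np.corrcoef(is_socialist, seq_read)[0, 1]  # the "naive correlation"
    print(r)
    print(seq_read[is_socialist == 1].mean(),      # socialists' average completion
          seq_read[is_socialist == 0].mean())      # nonsocialists' average completion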

Replies from: Eliezer_Yudkowsky, Unnamed
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-12-05T20:48:13.826Z · LW(p) · GW(p)

Statistical significance is a mental disease. The effect size is low enough that I just updated in the direction of "either socialism is more interesting than it looks or people in Europe define 'socialism' as liberalism".

Replies from: kilobug, steven0461, None, Oligopsony, army1987, Apprentice
comment by kilobug · 2011-12-07T13:01:57.922Z · LW(p) · GW(p)

The survey defined socialism as "what is done in Scandinavia", which for me is social democracy: free-market capitalism with wealth redistribution, a strong social safety net, regulation to protect workers, customers, or the environment, and some critical sectors (like education) more or less directly handled by the state.

My own definition of socialism is "socialized ownership of the means of production", which can take many shapes: government ownership is one form of socialism, but cooperatives and mutualism are others. Socialism doesn't necessarily mean centralized planning, even if it is usually described as such.

But anyway, since the survey defined socialism in the first sense, I used that sense to answer "socialism" in the survey, and I think we can safely assume most people who answered "socialism" used it in the first sense too.

Replies from: lessdazed, army1987
comment by lessdazed · 2011-12-07T17:10:28.278Z · LW(p) · GW(p)

Let the next survey have the same definitions for communism, conservatism, and liberalism.

Define "socialism" as "Sending children to cigar making factories and sugar making plantations instead of school, like in Cuba; sending suspected dissenters to a KGB prison in Siberia for torture, like in Russia; or sending baby Pandas to reeducation camps for torture, like in China."

Define "libertarianism" as "Rule by corporations, like in the United States; or beating chimney-sweeps to death with a cane while wearing a bowler hat and monocle, like in Britain."

I predict similar responses. You can't expect people to comply with redefined political labels.

comment by A1987dM (army1987) · 2011-12-07T18:00:42.993Z · LW(p) · GW(p)

Actually, I answered socialism because I'm a libertarian socialist: the examples made clear that libertarianism and socialism were supposed to mean 'American-style (i.e. capitalist) libertarianism' and 'social democracy' respectively, neither of which is anywhere near my position; the survey had no "none of the above" answer; and "socialism" is less unsatisfactory to me than "libertarianism".

comment by steven0461 · 2011-12-05T20:56:40.221Z · LW(p) · GW(p)

Or rationality doesn't much impact people's views on politics. Good thinkers seem to me to be all over the place politically.

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2011-12-09T22:32:22.170Z · LW(p) · GW(p)

This doesn't surprise me. I have believed for a while now that political left-ness versus right-ness is determined by terminal values, not by beliefs or epistemology (except for the case of religiously based moral opinions, but in a forum of mostly atheists this effect doesn't show up much.)

comment by [deleted] · 2011-12-05T21:03:22.447Z · LW(p) · GW(p)

"either socialism is more interesting than it looks or people in Europe define 'socialism' as liberalism".

I wouldn't be surprised.

In my country a "liberal" is basically a "classical liberal", some kind of conservative or libertarian, but I've grown accustomed to American usage over the past three years.

We need data on what the proportion of Americans is.

Replies from: ArisKatsaris, steven0461
comment by ArisKatsaris · 2011-12-05T21:11:24.091Z · LW(p) · GW(p)

This was the question and the options, as detailed in the survey:

"Given that no label can completely describe a person's political views, with which of these labels do you MOST identify?"

  • Libertarian, for example like the US Libertarian Party: socially permissive, minimal/no taxes, minimal/no redistribution of wealth
  • Conservative, for example the US Republican Party and UK Tories: traditional values, low taxes, low redistribution of wealth
  • Liberal, for example the US Democratic Party or the UK Labour Party: socially permissive, more taxes, more redistribution of wealth
  • Socialist, for example Scandinavian countries: socially permissive, high taxes, major redistribution of wealth
  • Communist, for example the old Soviet Union: complete state control of many facets of life

I chose Socialist simply because I prefer what they seem to have in Scandinavia to what the US Democratic Party or the UK Labour Party seems to have on offer.

Replies from: Prismattic, None, None
comment by Prismattic · 2011-12-05T22:06:04.265Z · LW(p) · GW(p)

I chose Socialist simply because I prefer what they seem to have in Scandinavia to what the US Democratic Party or the UK Labour Party seems to have on offer.

The US Democratic Party is a lot less coherent an entity than the left-er party in most countries. In most OECD countries, "I wish our government spending:GDP ratio was more like Canada's" is a right-wing position, while "I wish our government spending:GDP ratio was more like Sweden's" is a left-wing position. In the US, people espousing either of these views end up in the Democratic coalition, because the entire spectrum is shifted so far to the right, and there is nowhere else to go.

A lot of US Democrat-leaning voters wish the US was a parliamentary system, so that the centrist and center-left wings of the party could split (as they are in Canada between Liberals and New Democrats).

Replies from: None
comment by [deleted] · 2011-12-06T08:02:58.141Z · LW(p) · GW(p)

"I wish our government spending : GDP ratio was more like Canada's" is a right-wing position while "I wish our government spending: GDP ratio was more like Sweden's"

Yes politics really is this boring over here.

The US Democratic Party is a lot less coherent entity than the left-er party in most countries.

Arguably the Republican party is also a less coherent party than many right-er parties in continental Europe, where you usually have a "libertarian-leaning" smaller-government party, a social conservative (Christian-ish) party, and occasionally a nationalist party in addition.

Consider for a moment that Ron Paul, Patrick Buchanan and George Bush are in the same party. What's the overlap between these three in terms of something like trade tariffs, immigration, foreign relations, which parts of government spending should be cut, where spending should increase, meddling in social issues, education, etc.?

I've heard some of my countrymen complain we have too many parties with little variation among them. But I'm rather glad coalition building is required to be done in an arguably more transparent way. It also makes individual parties a temporary affair, since they break up and recombine all the time. Bad parties also tend to fail to enter parliament when they screw things up too much, which helps cull blind-loyalty votes. It also allows some parties, like the Greens or the Pirate Party, that otherwise wouldn't be heard to get a voice in parliament, and I'm glad they do.

comment by [deleted] · 2011-12-05T21:19:07.870Z · LW(p) · GW(p)

I chose Socialist simply because I prefer what they seem to have in Scandinavia to what the US Democratic Party or the UK Labour Party seems to have on offer.

I generally prefer what they have in Scandinavia to British Labour and the US Democrats as well. Though I chose conservative, so I'm not sure how this maps.

Perhaps "traditional values" (whatever that means) combined with basically a neutral attitude to wealth redistribution and an eye for expected quality of life? Or perhaps growing up in a country where everyone I know considers themselves some kind of "social democrat", I have an odd idea of what "conservative" stands for.

Actually I'd be quite interested to get more data on the 29 other conservatives here; I wonder if we're just "secular right" types (like I partially consider myself to be, since I'm an atheist) or if some of the crypto- and pseudo-theists are conservative as well. Also I wonder how many were influenced by Moldbug or any of the other representatives of the internet (new? alternative?) intellectual right. To give one data point on the latter: I never even considered there might be interesting material in right-wing thought until I was exposed to it on-line and began seeing merit in it.

Edit: I counted 7 committed or lukewarm theists among conservative LWers.

Replies from: steven0461
comment by steven0461 · 2011-12-05T21:34:31.698Z · LW(p) · GW(p)

Actually I'd be quite interested to get more data on the 29 other conservatives here; I wonder if we're just "secular right" types (like I partially consider myself to be, since I'm an atheist) or if some of the crypto- and pseudo-theists are conservative as well.

It should be right there in the spreadsheet, under ReligiousViews.

Replies from: None
comment by [deleted] · 2011-12-05T21:44:53.836Z · LW(p) · GW(p)

I must have missed it, where was it linked to?

Replies from: steven0461
comment by steven0461 · 2011-12-05T22:07:32.186Z · LW(p) · GW(p)

At the bottom.

Replies from: None
comment by [deleted] · 2011-12-05T22:25:19.062Z · LW(p) · GW(p)

Thanks!

comment by [deleted] · 2011-12-05T21:14:49.851Z · LW(p) · GW(p)

Thanks it's been some time since I took the survey and I forgot how that question was formulated.

comment by steven0461 · 2011-12-05T21:07:37.727Z · LW(p) · GW(p)

Where I come from, liberals are mostly right-wing too, but it's hard to disentangle whether that's because the whole spectrum is to the left or whether the definitions are just different. The survey did explain that "liberal" meant US liberal. (ETA: as per the sibling comment, it's a little more complicated than that)

Replies from: dbaupp
comment by dbaupp · 2011-12-06T01:07:27.855Z · LW(p) · GW(p)

This is the case in Australia too. The "Liberal" party is the major right-wing political party (and thus are more similar to the Republicans than the Democrats (in the US)), so there is a distinction between "big-L" and "little-L" liberals.

comment by Oligopsony · 2011-12-06T15:33:47.282Z · LW(p) · GW(p)

Statistical significance is a mental disease.

Only if you misuse it! Statistical significance is a good shorthand for how surprising a result would be if it were just a fluke, which is helpful to know. Concluding from even astronomically precise p-values to some particular non-null hypothesis without other evidence in its favor, like "reading the Sequences causes people to be less socialist," is another kettle of fish.

Or to go from a different angle: I don't think much of anything can be concluded about communists and the sequences, even though there's a non-tiny effect size, because, like, there are only five of us. (Probably the same applies to conservatives, just slightly less so; I'm too lazy to do the math.) One's better off with reasonable priors: Sequences probably don't impact politics that much, communists are probably like socialists in their likelihood of reading a blog by a George Mason economist, the terms probably aren't cutting reality at its joints, and so on.

or people in Europe define 'socialism' as liberalism"

"Socialism" is a really imprecise word aside from various specialized contexts and discourses; all you can really conclude is that the person identifying as such values equality relatively highly. (In this sense it's a poor descriptor but perhaps a less mind-killing label than a good descriptor of policy preferences would be, although "left" is probably better at this still, so it's not clear to me that there's an actual sweet spot that would justify continued use of the term.) Per the definitions offered in the survey, socialism ("socially permissive, high taxes, major redistribution of wealth") and liberalism ("socially permissive, more taxes, more redistribution of wealth") are basically the same thing, aside from being defined absolutely or relatively (such that liberalism would technically be to socialism's left if you think your scope polity has high taxes and major redistribution of wealth.) Since there's a lot of left ideological space between "a bigger welfare state" and (meaningfully narrow interpretations of) "state control of many aspects of life," it's difficult to say how many self-reported socialists are social democrats and how many are one flavor of "hard" left or another. Similarly, it's hard to guess how many of the Moldbug set classified themselves as libertarian versus conservative - is "socially permissive"/"traditional values" about religion or race and gender? - and hence what the paleo/"liberaltarian" split is among libertarians (though you could probably catch the Objectivist and ancap cluster with virtue ethics and deontology.)

It seems pretty clear that the political mainstream here, as in almost all educated Western sets, is some form of cosmopolitan democratic capitalism, but it's difficult to see how large that mainstream is. Maybe you'd see some interesting correlations with being some flavor of weirdo or a weirdo at all - so if you have a strong prior that reading the Sequences will draw you into that mainstream, the low effect size of reported socialism on sequence reading probably shouldn't affect it much! - I dunno.

comment by A1987dM (army1987) · 2011-12-07T17:56:09.768Z · LW(p) · GW(p)

IIRC, liberalism itself has different meanings on the two sides of the Atlantic, even though I can't remember what either of them was.

Replies from: thomblake
comment by thomblake · 2011-12-07T21:42:11.897Z · LW(p) · GW(p)

In some places (notably the United States), "liberal" means "politically left", which in turn pretty much refers to a package of political views held by those who oppose the "conservative" or "politically right".

In other places, "liberal" retains its original association with concern for liberty, and is comparable to the American use of "Libertarian".

This mostly happened for arbitrary historical reasons, but the easiest way to attach a story to it is as follows:

During the English Civil War and the French Revolution, those in support of popular rule (liberty/equality) were called "Liberals" and those in support of rule by the monarchy were called "Conservatives". The French setup had the liberals sitting on the left and conservatives sitting on the right, thus creating the basis for the "spectrum" between left and right views. Of course, the common meanings of "conservative" and "liberal" were already in opposition ("he is liberal" means "he is free [with ...]") and so even when the "conservative" political view no longer referred to supporters of the monarchy, the "liberal" view was simply whatever opposed them.

Someone who insists political parties in the US have anything to do with ideology rather than simply being a combination of coalition and accident, might characterize modern "conservative" and "liberal" as representing the battle between "liberty" versus "equality" (respectively, ironically).

Replies from: Prismattic
comment by Prismattic · 2011-12-08T00:44:58.935Z · LW(p) · GW(p)

In other places, "liberal" retains its original association with concern for liberty, and is comparable to the American use of "Libertarian".

I don't really want to get into a mindkilling debate about this here, but it is at least worth noting that some modern US liberals dispute the libertarians' claim to be the heirs to classical liberalism.

Replies from: thomblake
comment by thomblake · 2011-12-08T14:56:34.021Z · LW(p) · GW(p)

That's an odd claim. All the capitalist libertarians I know are Lockeans, and are well aware of those sorts of constraints.

comment by Apprentice · 2011-12-07T12:39:13.668Z · LW(p) · GW(p)

As a Scandinavian socialist I support things like:

  • High taxes and public spending - around 50% of GDP seems about right.
  • Cradle-to-grave socialized medicine.
  • Publicly funded education - up to and including the university level.
  • An elaborate social safety net.
  • Extensive feminist social engineering schemes (long state-funded paternity leave and so on).

Why do I support things like that? Because I'm used to them and my first-hand experience tells me they work quite well. Also because the Scandinavian countries look good in international comparisons of various things. Though of course we can think of alternative explanations for that - maybe Scandinavia works well because it is populated by Scandinavians (a Steve Sailer type explanation) - I'm open to persuasion.

If you define socialism as "a system with state ownership of the means of production" then I'm not a socialist but some other type of collectivist.

Replies from: Tripitaka
comment by Tripitaka · 2011-12-07T12:44:45.104Z · LW(p) · GW(p)

Extensive feminist social engineering schemes

Please tell me that this is an official phrase.

Replies from: Apprentice
comment by Apprentice · 2011-12-07T13:01:47.490Z · LW(p) · GW(p)

Haha, I'm afraid not. These things are always framed in terms of "equal rights" or "social justice" or "changing perceptions".

comment by Unnamed · 2011-12-09T23:17:39.471Z · LW(p) · GW(p)

There is a big effect if you look at time in the community rather than sequence reading. 38% of newcomers who have been part of the LW community for under a year picked "socialist", compared to only 21% of the oldtimers who have been part of LW for 2 years or more. Most of the shift is from libertarianism, which is at only 28% of newcomers vs. 41% of oldtimers.
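A rough significance check for a split like this, via a two-proportion z-test (the group sizes below are my guesses, standing in for the real counts in the spreadsheet):

    import math

    def two_proportion_z(p1, n1, p2, n2):
        p = (p1 * n1 + p2 * n2) / (n1 + n2)              # pooled proportion
        se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # standard error under the null
        return (p1 - p2) / se

    z = two_proportion_z(0.38, 300, 0.21, 400)  # 38% of ~300 newcomers vs 21% of ~400 oldtimers
    print(z)  # |z| > 1.96 would be significant at the 0.05 level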

Replies from: Jack
comment by Jack · 2011-12-09T23:50:51.770Z · LW(p) · GW(p)

Since old-timers tend to be Overcoming Bias fans, the libertarian-leaning nature of that blog explains the difference.

comment by ArisKatsaris · 2011-12-05T12:04:51.286Z · LW(p) · GW(p)

Downvoted for implicit insult.

As someone who selected "socialist" (and yet has read the sequences), I want to remind you that the example given in the survey for what "socialist" meant in the context of the survey was Sweden -- it wasn't Cuba, Venezuela, or Greece.

Replies from: kilobug
comment by kilobug · 2011-12-05T12:33:23.915Z · LW(p) · GW(p)

Not wanting to enter a political debate, but putting Cuba, Venezuela and Greece in the same bag is quite shocking to me. I don't see how Greece is more "socialist" than most of Europe, and Venezuela is very different from Cuba in so many aspects that trying to group them creates an arbitrary cluster (unless you're referring to something specific like "the two founding members of ALBA").

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-12-05T13:09:36.999Z · LW(p) · GW(p)

Not wanting to enter a political debate, but putting Cuba, Venezuela and Greece in the same bag is quite shocking to me.

Not wanting to enter a political debate either, but my point was NOT that these three are in the same bag; it was that I don't consider any of them in the same bag as Sweden, which was the example given.

comment by [deleted] · 2011-12-05T20:56:31.106Z · LW(p) · GW(p)

Somehow I don't think adding a smiley face to this post would have helped you get into the positives, Bruno_Coelho.

Which in itself is very interesting.

comment by Kevin · 2011-12-05T15:18:43.953Z · LW(p) · GW(p)

Political identification, in an age when individuals have basically no impact on national politics, is more about aesthetics and signalling than anything else. Socialism is just so much more populist than libertarianism.

Replies from: NihilCredo
comment by NihilCredo · 2011-12-08T03:09:35.509Z · LW(p) · GW(p)

in an age when individuals have basically no impact on national politics

Did they ever?