2013 Survey Results

post by Scott Alexander (Yvain) · 2014-01-19T02:51:57.048Z · LW · GW · Legacy · 560 comments

Contents

  Part I. Population
  II. Categorical Data
  III. Numeric Data
  IV. Bivariate Correlations
  V. Hypothesis Testing
    A. Do people in the effective altruism movement donate more money to charity? Do they donate a higher percent of their income to charity? Are they just generally more altruistic people?
    B. Can we finally resolve this IQ controversy that comes up every year?
    C. Can we predict who does or doesn't cooperate on prisoner's dilemmas?
  VI. Monetary Prize
  VII. Calibration Questions
  VIII. Public Data

Thanks to everyone who took the 2013 Less Wrong Census/Survey. Extra thanks to Ozy, who helped me out with the data processing and statistics work, and to everyone who suggested questions.

This year's results are below. Some of them may make more sense in the context of the original survey questions, which can be seen here. Please do not try to take the survey as it is over and your results will not be counted.

Part I. Population

1636 people answered the survey.

Compare this to 1195 people last year, and 1090 people the year before that. It would seem the site is growing, but we do have to consider that each survey lasted a different amount of time; for example, the last survey lasted 23 days, but this survey lasted 40.

However, almost everyone who takes the survey takes it in the first few weeks it is available. 1506 of the respondents answered within the first 23 days, showing that even if this survey had run only as long as last year's, there would still have been growth.
As we will see lower down, growth is smooth across all categories of users (lurkers, commenters, posters) EXCEPT people who have posted to Main, the number of which remains nearly the same from year to year.

We continue to have very high turnover - only 40% of respondents this year say they also took the survey last year.

II. Categorical Data

SEX:
Female: 161, 9.8%
Male: 1453, 88.8%
Other: 1, 0.1%
Did not answer: 21, 1.3%

[[Ozy is disappointed that we've lost 50% of our intersex readers.]]

GENDER:
F (cisgender): 140, 8.6%
F (transgender MtF): 20, 1.2%
M (cisgender): 1401, 85.6%
M (transgender FtM): 5, 0.3%
Other: 49, 3%
Did not answer: 21, 1.3%

SEXUAL ORIENTATION:
Asexual: 47, 2.9%
Bisexual: 188, 12.2%
Heterosexual: 1287, 78.7%
Homosexual: 45, 2.8%
Other: 39, 2.4%
Did not answer: 19, 1.2%

RELATIONSHIP STYLE:
Prefer monogamous: 829, 50.7%
Prefer polyamorous: 234, 14.3%
Other: 32, 2.0%
Uncertain/no preference: 520, 31.8%
Did not answer: 21, 1.3%

NUMBER OF CURRENT PARTNERS:
0: 797, 48.7%
1: 728, 44.5%
2: 66, 4.0%
3: 21, 1.3%
4: 1, .1%
6: 3, .2%
Did not answer: 20, 1.2%

RELATIONSHIP STATUS:
Married: 304, 18.6%
Relationship: 473, 28.9%
Single: 840, 51.3%

RELATIONSHIP GOALS:
Looking for more relationship partners: 617, 37.7%
Not looking for more relationship partners: 993, 60.7%
Did not answer: 26, 1.6%

HAVE YOU DATED SOMEONE YOU MET THROUGH THE LESS WRONG COMMUNITY?
Yes: 53, 3.3%
I didn't meet them through the community but they're part of the community now: 66, 4.0%
No: 1482, 90.5%
Did not answer: 35, 2.1%

COUNTRY:
United States: 895, 54.7%
United Kingdom: 144, 8.8%
Canada: 107, 6.5%
Australia: 69, 4.2%
Germany: 68, 4.2%
Finland: 35, 2.1%
Russia: 22, 1.3%
New Zealand: 20, 1.2%
Israel: 17, 1.0%
France: 16, 1.0%
Poland: 16, 1.0%

LESS WRONGERS PER CAPITA:
Finland: 1/154,685
New Zealand: 1/221,650
Canada: 1/325,981
Australia: 1/328,659
United States: 1/350,726
United Kingdom: 1/439,097
Israel: 1/465,176
Germany: 1/1,204,264
Poland: 1/2,408,750
France: 1/4,106,250
Russia: 1/6,522,727

RACE:
Asian (East Asian): 60, 3.7%
Asian (Indian subcontinent): 37, 2.3%
Black: 11, .7%
Middle Eastern: 9, .6%
White (Hispanic): 73, 4.5%
White (non-Hispanic): 1373, 83.9%
Other: 51, 3.1%
Did not answer: 22, 1.3%

WORK STATUS:
Academics (teaching): 77, 4.7%
For-profit work: 552, 33.7%
Government work: 55, 3.4%
Independently wealthy: 14, .9%
Non-profit work: 46, 2.8%
Self-employed: 103, 6.3%
Student: 661, 40.4%
Unemployed: 105, 6.4%
Did not answer: 23, 1.4%

PROFESSION:
Art: 27, 1.7%
Biology: 26, 1.6%
Business: 44, 2.7%
Computers (AI): 47, 2.9%
Computers (other academic computer science): 107, 6.5%
Computers (practical): 505, 30.9%
Engineering: 128, 7.8%
Finance/economics: 92, 5.6%
Law: 36, 2.2%
Mathematics: 139, 8.5%
Medicine: 31, 1.9%
Neuroscience: 13, .8%
Philosophy: 41, 2.5%
Physics: 92, 5.6%
Psychology: 34, 2.1%
Statistics: 23, 1.4%
Other hard science: 31, 1.9%
Other social science: 43, 2.6%
Other: 139, 8.5%
Did not answer: 38, 2.3%

DEGREE:
None: 84, 5.1%
High school: 444, 27.1%
2 year degree: 68, 4.2%
Bachelor's: 554, 33.9%
Master's: 323, 19.7%
MD/JD/other professional degree: 31, 2.0%
PhD.: 90, 5.5%
Other: 22, 1.3%
Did not answer: 19, 1.2%

POLITICAL:
Communist: 11, .7%
Conservative: 64, 3.9%
Liberal: 580, 35.5%
Libertarian: 437, 26.7%
Socialist: 502, 30.7%
Did not answer: 42, 2.6%

COMPLEX POLITICAL WITH WRITE-IN:
Anarchist: 52, 3.2%
Conservative: 16, 1.0%
Futarchist: 42, 2.6%
Left-libertarian: 142, 8.7%
Liberal: 5
Moderate: 53, 3.2%
Pragmatist: 110, 6.7%
Progressive: 206, 12.6%
Reactionary: 40, 2.4%
Social democrat: 154, 9.5%
Socialist: 135, 8.2%
Did not answer: 26.2%

[[All answers with more than 1% of the Less Wrong population included. Other answers which made Ozy giggle included "are any of you kings?! why do you CARE?!", "Exclusionary: you are entitled to an opinion on nuclear power when you know how much of your power is nuclear", "having-well-founded-opinions-is-really-hard-ist", "kleptocrat", "pirate", and "SPECIAL FUCKING SNOWFLAKE."]]

AMERICAN PARTY AFFILIATION:
Democratic Party: 226, 13.8%
Libertarian Party: 31, 1.9%
Republican Party: 58, 3.5%
Other third party: 19, 1.2%
Not registered: 447, 27.3%
Did not answer or non-American: 856, 52.3%

VOTING:
Yes: 936, 57.2%
No: 450, 27.5%
My country doesn't hold elections: 2, 0.1%
Did not answer: 249, 15.2%

RELIGIOUS VIEWS:
Agnostic: 165, 10.1%
Atheist and not spiritual: 1163, 71.1%
Atheist but spiritual: 132, 8.1%
Deist/pantheist/etc.: 36, 2.2%
Lukewarm theist: 53, 3.2%
Committed theist: 64, 3.9%

RELIGIOUS DENOMINATION (IF THEIST):
Buddhist: 22, 1.3%
Christian (Catholic): 44, 2.7%
Christian (Protestant): 56, 3.4%
Jewish: 31, 1.9%
Mixed/Other: 21, 1.3%
Unitarian Universalist or similar: 25, 1.5%

[[This includes all religions with more than 1% of Less Wrongers. Minority religions include Dzogchen, Daoism, various sorts of Paganism, Simulationist, a very confused secular humanist, Kopmist, Discordian, and a Cultus Deorum Romanum practitioner whom Ozy wants to be friends with.]]

FAMILY RELIGION:
Agnostic: 129, 11.6%
Atheist and not spiritual: 225, 13.8%
Atheist but spiritual: 73, 4.5%
Committed theist: 423, 25.9%
Deist/pantheist, etc.: 42, 2.6%
Lukewarm theist: 563, 34.4%
Mixed/other: 97, 5.9%
Did not answer: 24, 1.5%

RELIGIOUS BACKGROUND:
Bahai: 3, 0.2%
Buddhist: 13, .8%
Christian (Catholic): 418, 25.6%
Christian (Mormon): 38, 2.3%
Christian (Protestant): 631, 38.4%
Christian (Quaker): 7, 0.4%
Christian (Unitarian Universalist or similar): 32, 2.0%
Christian (other non-Protestant): 99, 6.1%
Christian (unknown): 3, 0.2%
Eckankar: 1, 0.1%
Hindu: 29, 1.8%
Jewish: 136, 8.3%
Muslim: 12, 0.7%
Native American Spiritualist: 1, 0.1%
Mixed/Other: 85, 5.3%
Sikhism: 1, 0.1%
Traditional Chinese: 11, .7%
Wiccan: 1, 0.1%
None: 8, 0.4%
Did not answer: 107, 6.7%

MORAL VIEWS:
Accept/lean towards consequentialism: 1049, 64.1%
Accept/lean towards deontology: 77, 4.7%
Accept/lean towards virtue ethics: 197, 12.0%
Other/no answer: 276, 16.9%
Did not answer: 37, 2.3%

CHILDREN
0: 1414, 86.4%
1: 77, 4.7%
2: 90, 5.5%
3: 25, 1.5%
4: 7, 0.4%
5: 1, 0.1%
6: 2, 0.1%
Did not answer: 20, 1.2%

MORE CHILDREN:
Have no children, don't want any: 506, 31.3%
Have no children, uncertain if want them: 472, 29.2%
Have no children, want children: 431, 26.7%
Have no children, didn't answer: 5, 0.3%
Have children, don't want more: 124, 7.6%
Have children, uncertain if want more: 25, 1.5%
Have children, want more: 53, 3.2%

HANDEDNESS:
Right: 1256, 76.6%
Left: 145, 9.5%
Ambidextrous: 36, 2.2%
Not sure: 7, 0.4%
Did not answer: 182, 11.1%

LESS WRONG USE:
Lurker (no account): 584, 35.7%
Lurker (account): 221, 13.5%
Poster (comment, no post): 495, 30.3%
Poster (Discussion, not Main): 221, 12.9%
Poster (Main): 103, 6.3%

SEQUENCES:
Never knew they existed: 119, 7.3%
Knew they existed, didn't look at them: 48, 2.9%
~25% of the Sequences: 200, 12.2%
~50% of the Sequences: 271, 16.6%
~75% of the Sequences: 225, 13.8%
All the Sequences: 419, 25.6%
Did not answer: 24, 1.5%

MEETUPS:
No: 1134, 69.3%
Yes, once or a few times: 307, 18.8%
Yes, regularly: 159, 9.7%

HPMOR:
No: 272, 16.6%
Started it, haven't finished: 255, 15.6%
Yes, all of it: 912, 55.7%

CFAR WORKSHOP ATTENDANCE:
Yes, a full workshop: 105, 6.4%
A class but not a full-day workshop: 40, 2.4%
No: 1446, 88.3%
Did not answer: 46, 2.8%

PHYSICAL INTERACTION WITH LW COMMUNITY:
Yes, all the time: 94, 5.7%
Yes, sometimes: 179, 10.9%
No: 1316, 80.4%
Did not answer: 48, 2.9%

VEGETARIAN:
No: 1201, 73.4%
Yes: 213, 13.0%
Did not answer: 223, 13.6%

SPACED REPETITION:
Never heard of them: 363, 22.2%
No, but I've heard of them: 495, 30.2%
Yes, in the past: 328, 20%
Yes, currently: 219, 13.4%
Did not answer: 232, 14.2%

HAVE YOU TAKEN PREVIOUS INCARNATIONS OF THE LESS WRONG SURVEY?
Yes: 638, 39.0%
No: 784, 47.9%
Did not answer: 215, 13.1%

PRIMARY LANGUAGE:
English: 1009, 67.8%
German: 58, 3.6%
Finnish: 29, 1.8%
Russian: 25, 1.6%
French: 17, 1.0%
Dutch: 16, 1.0%
Did not answer: 15.2%

[[This includes all answers that more than 1% of respondents chose. Other languages include Urdu, both Czech and Slovakian, Latvian, and Love.]]

ENTREPRENEUR:
I don't want to start my own business: 617, 37.7%
I am considering starting my own business: 474, 29.0%
I plan to start my own business: 113, 6.9%
I've already started my own business: 156, 9.5%
Did not answer: 277, 16.9%

EFFECTIVE ALTRUIST:
Yes: 468, 28.6%
No: 883, 53.9%
Did not answer: 286, 17.5%

WHO ARE YOU LIVING WITH?
Alone: 348, 21.3%
With family: 420, 25.7%
With partner/spouse: 400, 24.4%
With roommates: 450, 27.5%
Did not answer: 19, 1.3%

DO YOU GIVE BLOOD?
No: 646, 39.5%
No, only because I'm not allowed: 157, 9.6%
Yes: 609, 37.2%
Did not answer: 225, 13.7%

GLOBAL CATASTROPHIC RISK:
Pandemic (bioengineered): 374, 22.8%
Environmental collapse including global warming: 251, 15.3%
Unfriendly AI: 233, 14.2%
Nuclear war: 210, 12.8%
Pandemic (natural): 145, 8.8%
Economic/political collapse: 175, 10.7%
Asteroid strike: 65, 3.9%
Nanotech/grey goo: 57, 3.5%
Didn't answer: 99, 6.0%

CRYONICS STATUS:
Never thought about it / don't understand it: 69, 4.2%
No, and don't want to: 414, 25.3%
No, still considering: 636, 38.9%
No, would like to: 265, 16.2%
No, would like to, but it's unavailable: 119, 7.3%
Yes: 66, 4.0%
Didn't answer: 68, 4.2%

NEWCOMB'S PROBLEM:
Don't understand/prefer not to answer: 92, 5.6%
Not sure: 103, 6.3%
One box: 1036, 63.3%
Two box: 119, 7.3%
Did not answer: 287, 17.5%

GENOMICS:
Yes: 177, 10.8%
No: 1219, 74.5%
Did not answer: 241, 14.7%

REFERRAL TYPE:
Been here since it started in the Overcoming Bias days: 285, 17.4%
Referred by a friend: 241, 14.7%
Referred by a search engine: 148, 9.0%
Referred by HPMOR: 400, 24.4%
Referred by a link on another blog: 373, 22.8%
Referred by a school course: 1, .1%
Other: 160, 9.8%
Did not answer: 29, 1.9%

REFERRAL SOURCE:
Common Sense Atheism: 33
Slate Star Codex: 20
Hacker News: 18
Reddit: 18
TVTropes: 13
Y Combinator: 11
Gwern: 9
RationalWiki: 8
Marginal Revolution: 7
Unequally Yoked: 6
Armed and Dangerous: 5
Shtetl Optimized: 5
Econlog: 4
StumbleUpon: 4
Yudkowsky.net: 4
Accelerating Future: 3
Stares at the World: 3
xkcd: 3
David Brin: 2
Freethoughtblogs: 2
Felicifia: 2
Givewell: 2
hatrack.com: 2
HPMOR: 2
Patri Friedman: 2
Popehat: 2
Overcoming Bias: 2
Scientiststhesis: 2
Scott Young: 2
Stardestroyer.net: 2
TalkOrigins: 2
Tumblr: 2

[[This includes all sources with more than one referral; needless to say there was a long tail.]]

III. Numeric Data

(in the form mean + stdev (1st quartile, 2nd quartile, 3rd quartile) [n = number responding])

Age: 27.4 + 8.5 (22, 25, 31) [n = 1558]
Height: 176.6 cm + 16.6 (173, 178, 183) [n = 1267]

Karma Score: 504 + 2085 (0, 0, 100) [n = 1438]
Time in community: 2.62 years + 1.84 (1, 2, 4) [n = 1443]
Time on LW: 13.25 minutes/day + 20.97 (2, 10, 15) [n = 1457]

IQ: 138.2 + 13.6 (130, 138, 145) [n = 506]
SAT out of 1600: 1474 + 114 (1410, 1490, 1560) [n = 411]
SAT out of 2400: 2207 + 161 (2130, 2240, 2330) [n = 333]
ACT out of 36: 32.8 + 2.5 (32, 33, 35) [n = 265]

P(Aliens in observable universe): 74.3 + 32.7 (60, 90, 99) [n = 1496]
P(Aliens in Milky Way): 44.9 + 38.2 (5, 40, 85) [n = 1482]
P(Supernatural): 7.7 + 22 (0E-9, .000055, 1) [n = 1484]
P(God): 9.1 + 22.9 (0E-11, .01, 3) [n = 1490]
P(Religion): 5.6 + 19.6 (0E-11, 0E-11, .5) [n = 1497]
P(Cryonics): 22.8 + 28 (2, 10, 33) [n = 1500]  
P(AntiAgathics): 27.6 + 31.2 (2, 10, 50) [n = 1493]
P(Simulation): 24.1 + 28.9 (1, 10, 50) [n = 1400]
P(ManyWorlds): 50 + 29.8 (25, 50, 75) [n = 1373]
P(Warming): 80.7 + 25.2 (75, 90, 98) [n = 1509]
P(Global catastrophic risk): 72.9 + 25.41 (60, 80, 95) [n = 1502]
Singularity year: 1.67E+11 + 4.089E+12 (2060, 2090, 2150) [n = 1195]

[[Of course, this question was hopelessly screwed up by people who insisted on filling the whole answer field with 9s, or other such nonsense. I went back and eliminated all outliers - answers with more than 4 digits or answers in the past - which changed the results to: 2150 + 226 (2060, 2089, 2150)]]
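
For anyone re-running this on the public data, here is a minimal sketch of the cleanup rule described in the note above; the column name and the 2013 survey-year cutoff are my assumptions, not the exact script used.

```python
# Sketch of the outlier rule above: keep Singularity-year answers that are
# in the future (relative to the 2013 survey) and have at most four digits.
import pandas as pd

def clean_singularity_year(raw: pd.Series, survey_year: int = 2013) -> pd.Series:
    years = pd.to_numeric(raw, errors="coerce").dropna()   # non-numeric nonsense -> dropped
    return years[(years >= survey_year) & (years <= 9999)]
```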

Yearly Income: $73,226 + 423,310 (10,000, 37,000, 80,000) [n = 910]
Yearly Charity: $1181.16 + 6037.77 (0, 50, 400) [n = 1231]
Yearly Charity to MIRI/CFAR: $307.18 + 4205.37 (0, 0, 0) [n = 1191]
Yearly Charity to X-risk (excluding MIRI or CFAR): $6.34 + 55.89 (0, 0, 0) [n = 1150]

Number of Languages: 1.49 + .8 (1, 1, 2) [n = 1345]
Older Siblings: 0.5 + 0.9 (0, 0, 1) [n = 1366]
Time Online/Week: 42.7 hours + 24.8 (25, 40, 60) [n = 1292]
Time Watching TV/Week: 4.2 hours + 5.7 (0, 2, 5) [n = 1316]

[[The next nine questions ask respondents to rate how favorable they are to the named political idea or movement on a scale of 1 to 5, with 1 being "not at all favorable" and 5 being "very favorable". You can see the exact wordings of the questions on the survey.]]

Abortion: 4.4 + 1 (4, 5, 5) [n = 1350]
Immigration: 4.1 + 1 (3, 4, 5) [n = 1322]
Basic Income: 3.8 + 1.2 (3, 4, 5) [n = 1289]
Taxes: 3.1 + 1.3 (2, 3, 4) [n = 1296]
Feminism: 3.8 + 1.2 (3, 4, 5) [n = 1329]
Social Justice: 3.6 + 1.3 (3, 4, 5) [n = 1263]
Minimum Wage: 3.2 + 1.4 (2, 3, 4) [n = 1290]
Great Stagnation: 2.3 + 1 (2, 2, 3) [n = 1273]
Human Biodiversity: 2.7 + 1.2 (2, 3, 4) [n = 1305]

IV. Bivariate Correlations

Ozy ran bivariate correlations between all the numerical data and recorded all correlations that were significant at the .001 level in order to maximize the chance that these are genuine results. The format is variable/variable: Pearson correlation (n). Yvain is not hugely on board with the idea of running correlations between everything and seeing what sticks, but will grudgingly publish the results because of the very high bar for significance (p < .001 on ~800 correlations suggests < 1 spurious result) and because he doesn't want to have to do it himself.
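
For readers working with the public data dump, here is a minimal sketch of that kind of screen, assuming the numeric survey answers are loaded into a pandas DataFrame `df` (a hypothetical name); it is not the exact script Ozy used.

```python
# Sketch: compute every pairwise Pearson correlation among numeric columns
# and keep only those significant at the .001 level, reporting r and n.
from itertools import combinations

import pandas as pd
from scipy.stats import pearsonr

def significant_correlations(df: pd.DataFrame, alpha: float = 0.001) -> pd.DataFrame:
    rows = []
    for a, b in combinations(df.columns, 2):
        pair = df[[a, b]].dropna()          # pairwise deletion of missing answers
        if len(pair) < 3:
            continue
        r, p = pearsonr(pair[a], pair[b])
        if p < alpha:
            rows.append({"var1": a, "var2": b, "r": round(r, 3), "n": len(pair)})
    return pd.DataFrame(rows).sort_values("r", key=lambda s: s.abs(), ascending=False)
```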

Less Political:
SAT score (1600)/SAT score (2400): .835 (56)
Charity/MIRI and CFAR donations: .730 (1193)
SAT score out of 2400/ACT score: .673 (111)
SAT score out of 1600/ACT score: .544 (102)
Number of children/age: .507 (1607)
P(Cryonics)/P(AntiAgathics): .489 (1515)
SAT score out of 1600/IQ: .369 (173)
MIRI and CFAR donations/XRisk donations: .284 (1178)
Number of children/ACT score: -.279 (269)
Income/charity: .269 (884)
Charity/Xrisk charity: .262 (1161)
P(Cryonics)/P(Simulation): .256 (1419)
P(AntiAgathics)/P(Simulation): .253 (1418)
Number of current partners/age: .238 (1607) 
Number of children/SAT score (2400): -.223 (345)
Number of current partners/number of children: .205 (1612)
SAT score out of 1600/age: -.194 (422)
Charity/age: .175 (1259)
Time on Less Wrong/IQ: -.164 (492)
P(Warming)/P(GlobalCatastrophicRisk): .156 (1522)
Number of current partners/IQ: .155 (521)
P(Simulation)/age: -.153 (1420)
Immigration/P(ManyWorlds): .150 (1195)
Income/age: .150 (930)
P(Cryonics)/age: -.148 (1521)
Income/children: .145 (931)
P(God)/P(Simulation): .142 (1409)
Number of children/P(Aliens): .140 (1523)
P(AntiAgathics)/Hours Online: .138 (1277)
Number of current partners/karma score: .137 (1470)
Abortion/P(ManyWorlds): .122 (1215)
Feminism/Xrisk charity donations: -.122 (1104)
P(AntiAgathics)/P(ManyWorlds): .118 (1381)
P(Cryonics)/P(ManyWorlds): .117 (1387)
Karma score/Great Stagnation: .114 (1202)
Hours online/P(simulation): .114 (1199)
P(Cryonics)/Hours Online: .113 (1279)
P(AntiAgathics)/Great Stagnation: -.111 (1259)
Basic income/hours online: .111 (1200)
P(GlobalCatastrophicRisk)/Great Stagnation: -.110 (1270)
Age/X risk charity donations: .109 (1176)
P(AntiAgathics)/P(GlobalCatastrophicRisk): -.109 (1513)
Time on Less Wrong/age: -.108 (1491)
P(AntiAgathics)/Human Biodiversity: .104 (1286)
Immigration/Hours Online: .104 (1226)
P(Simulation)/P(GlobalCatastrophicRisk): -.103 (1421)
P(Supernatural)/height: -.101 (1232)
P(GlobalCatastrophicRisk)/height: .101 (1249)
Number of children/hours online: -.099 (1321)
P(AntiAgathics)/age: -.097 (1514)
Karma score/time on LW: .096 (1404)

This year, for the first time, P(Aliens in the observable universe) and P(Aliens in the Milky Way) are entirely uncorrelated with each other. Time in Community, Time on LW, and IQ are not correlated with anything particularly interesting, suggesting all three fail to change people's views.

Results we find amusing: high-IQ and high-karma people have more romantic partners, suggesting that those are attractive traits. There is definitely a Cryonics/Antiagathics/Simulation/Many Worlds cluster of weird beliefs, which younger people and people who spend more time online are slightly more likely to have - weirdly, that cluster seems slightly less likely to believe in global catastrophic risk. Older people and people with more children have more romantic partners (it'd be interesting to see if that holds true for the polyamorous). People who believe in anti-agathics and global catastrophic risk are less likely to believe in a great stagnation (presumably because both of the above rely on inventions). People who spend more time on Less Wrong have lower IQs. Height is, bizarrely, negatively correlated with belief in the supernatural and positively correlated with belief in global catastrophic risk.

All political viewpoints are correlated with each other in pretty much exactly the way one would expect. They are also correlated with one's level of belief in God, the supernatural, and religion. There are minor correlations between some of the beliefs and number of partners (presumably because of polyamory), number of children, and number of languages spoken. We are doing terribly at avoiding Blue/Green politics, people.

More Political:
P(Supernatural)/P(God): .736 (1496)
P(Supernatural)/P(Religion): .667 (1492)
Minimum wage/taxes: .649 (1299)
P(God)/P(Religion): .631 (1496)
Feminism/social justice: .619 (1293)
Social justice/minimum wage: .508 (1262)
P(Supernatural)/abortion: -.469 (1309)
Taxes/basic income: .463 (1285)
P(God)/abortion: -.461 (1310)
Social justice/taxes: .456 (1267)
P(Religion)/abortion: -.413
Basic income/minimum wage: .392 (1283)
Feminism/taxes: .391 (1318)
Feminism/minimum wage: .391 (1312)
Feminism/human biodiversity: -.365 (1331)
Immigration/feminism: .355 (1336)
P(Warming)/taxes: .340 (1292)
Basic income/social justice: .311 (1270)
Immigration/social justice: .307 (1275)
P(Warming)/feminism: .294 (1323)
Immigration/human biodiversity: -.292 (1313)
P(Warming)/basic income: .290 (1287)
Social justice/human biodiversity: -.289 (1281)
Basic income/feminism: .284 (1313)
Human biodiversity/minimum wage: -.273 (1293)
P(Warming)/social justice: .271 (1261)
P(Warming)/minimum wage: .262 (1284)
Human biodiversity/taxes: -.251 (1270).
Abortion/feminism: .239 (1356)
Abortion/social justice: .220 (1292)
P(Warming)/immigration: .215 (1315)
Abortion/immigration: .211 (1353)
P(Warming)/abortion: .192 (1340)
Immigration/taxes: .186 (1322)
Basic income/taxes: .174 (1249)
Abortion/taxes: .170 (1328)
Abortion/minimum wage: .169 (1317)
P(warming)/human biodiversity: -.168 (1301)
Abortion/basic income: .168 (1314)
Immigration/Great Stagnation: -.163 (1281)
P(God)/feminism: -.159 (1294)
P(Supernatural)/feminism: -.158 (1292)
Human biodiversity/Great Stagnation: .152 (1287)
Social justice/Great Stagnation: -.135 (1242)
Number of languages/taxes: -.133 (1242)
P(God)/P(Warming): -.132 (1491)
P(Supernatural)/immigration: -.131 (1284)
P(Religion)/immigration: -.129 (1296)
P(God)/immigration: -.127 (1286)
P(Supernatural)/P(Warming): -.125 (1487)
P(Supernatural)/social justice: -.125 (1227)
P(God)/taxes: -.145
Minimum wage/Great Stagnation: -.124 (1269)
Immigration/minimum wage: .122 (1308)
Great Stagnation/taxes: -.121 (1270)
P(Religion)/P(Warming): -.113 (1505)
P(Supernatural)/taxes: -.113 (1265)
Feminism/Great Stagnation: -.112 (1295)
Number of children/abortion: -.112 (1386)
P(Religion)/basic income: -.108 (1296)
Number of current partners/feminism: .108 (1364)
Basic income/human biodiversity: -.106 (1301)
P(God)/Basic Income: -.105 (1255)
Number of current partners/basic income: .105 (1320)
Human biodiversity/number of languages: .103 (1253)
Number of children/basic income: -.099 (1322)
Number of children/P(Warming): -.091 (1535)

V. Hypothesis Testing

A. Do people in the effective altruism movement donate more money to charity? Do they donate a higher percent of their income to charity? Are they just generally more altruistic people?

1265 people told us how much they give to charity; of those, 450 gave nothing. On average, effective altruists (n = 412) donated $2503 to charity, and other people (n = 853) donated $523 - obviously a significant result. Effective altruists gave on average $800 to MIRI or CFAR, whereas others gave $53. Effective altruists gave on average $16 to other x-risk related charities; others gave only $2.

In order to calculate percent donated I divided charity donations by income in the 947 people helpful enough to give me both numbers. Of those 947, 602 donated nothing to charity, and so had a percent donated of 0. At the other extreme, three people donated 50% of their (substantial) incomes to charity, and 55 people donated at least 10%. I don't want to draw any conclusions about the community from this because the people who provided both their income numbers and their charity numbers are a highly self-selected sample.

303 effective altruists donated, on average, 3.5% of their income to charity, compared to 645 others who donated, on average, 1% of their income to charity. A small but significant (p < .001) victory for the effective altruism movement.
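
A sketch of that percent-of-income comparison follows, with `charity`, `income`, and `effective_altruist` as hypothetical column names. The post reports p < .001 but does not name its test, so the Mann-Whitney U test below is just one reasonable choice for heavily skewed donation data.

```python
# Sketch: compare percent of income donated between effective altruists
# and everyone else, using only people who reported both numbers.
import pandas as pd
from scipy.stats import mannwhitneyu

def compare_percent_donated(df: pd.DataFrame):
    both = df.dropna(subset=["charity", "income"])
    both = both[both["income"] > 0]
    pct = 100 * both["charity"] / both["income"]
    is_ea = both["effective_altruist"]          # assumed to be a boolean column
    ea, others = pct[is_ea], pct[~is_ea]
    stat, p = mannwhitneyu(ea, others, alternative="two-sided")
    return ea.mean(), others.mean(), p
```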

But are they more compassionate people in general? After throwing out the people who said they wanted to give blood but couldn't for one or another reason, I got 1255 survey respondents giving me an unambiguous answer (yes or no) about whether they'd ever given blood. I found that 51% of effective altruists had given blood compared to 47% of others - a difference which did not reach statistical significance.

Finally, at the end of the survey I had a question offering respondents a chance to cooperate (raising the value of a potential monetary prize to be given out by raffle to a random respondent) or defect (decreasing the value of the prize, but increasing their own chance of winning the raffle). 73% of effective altruists cooperated compared to 70% of others - an insignificant difference.

Conclusion: effective altruists give more money to charity, both absolutely and as a percent of income, but are no more likely (or perhaps only slightly more likely) to be compassionate in other ways.

B. Can we finally resolve this IQ controversy that comes up every year?

The story so far - our first survey in 2009 found an average IQ of 146. Everyone said this was stupid, no community could possibly have that high an average IQ, it was just people lying and/or reporting results from horrible Internet IQ tests.
Although IQ fell somewhat the next few years - to 140 in 2011 and 139 in 2012 - people continued to complain. So in 2012 we started asking for SAT and ACT scores, which are known to correlate well with IQ and are much harder to get wrong. These scores confirmed the 139 IQ result on the 2012 test. But people still objected that something must be up.

This year our IQ has fallen further to 138 (no Flynn Effect for us!) but for the first time we asked people to describe the IQ test they used to get the number. So I took a subset of the people with the most unimpeachable IQ tests - ones taken after the age of 15 (when IQ is more stable), and from a seemingly reputable source. I counted a source as reputable either if it name-dropped a specific scientifically validated IQ test (like WAIS or Raven's Progressive Matrices), if it was performed by a reputable institution (a school, a hospital, or a psychologist), or if it was a Mensa exam proctored by a Mensa official.

This subgroup of 101 people with very reputable IQ tests had an average IQ of 139 - exactly the same as the average among survey respondents as a whole.

I don't know for sure that Mensa is on the level, so I tried again deleting everyone who took a Mensa test - leaving just the people who could name-drop a well-known test or who knew it was administered by a psychologist in an official setting. This caused a precipitous drop all the way down to 138.

The IQ numbers have time and time again answered every challenge raised against them and should be presumed accurate.

C. Can we predict who does or doesn't cooperate on prisoner's dilemmas?

As mentioned above, I included a prisoner's dilemma type question in the survey, offering people the chance to make a little money by screwing all the other survey respondents over.

Tendency to cooperate on the prisoner's dilemma was most highly correlated with items in the general leftist political cluster identified by Ozy above. It was most notable for support for feminism, with which it had a correlation of .15, significant at the p < .01 level, and minimum wage, with which it had a correlation of .09, also significant at p < .01. It was also significantly correlated with belief that other people would cooperate on the same question.

I compared two possible explanations for this result. First, leftists are starry-eyed idealists who believe everyone can just get along - therefore, they expected other people to cooperate more, which made them want to cooperate more. Or, second, most Less Wrongers are white, male, and upper class, meaning that support for leftist values - which often favor nonwhites, women, and the lower class - is itself a symbol of self-sacrifice and altruism which one would expect to correlate with a question testing self-sacrifice and altruism.

I tested the "starry-eyed idealist" hypothesis by checking whether leftists were more likely to believe other people would cooperate. They were not - the correlation was not significant at any level.

I tested the "self-sacrifice" hypothesis by testing whether the feminism correlation went away in women. For women, supporting feminism is presumably not a sign of willingness to self-sacrifice to help an out-group, so we would expect the correlation to disappear.

In the all-female sample, the correlation between feminism and PD cooperation shrank from .15 to a puny .04, whereas the correlation between the minimum wage and PD cooperation was previously .09 and stayed exactly the same at .09. This provides some small level of support for the hypothesis that the leftist correlation with PD cooperation represents a willingness to self-sacrifice in a population who are not themselves helped by leftist values.

(on the other hand, neither leftists nor cooperators were more likely to give money to charity, so if this is true it's a very selective form of self-sacrifice)
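
A sketch of that subgroup check, with hypothetical column names (`gender`, `feminism`, `pd_cooperate`) standing in for the actual survey fields:

```python
# Sketch: correlation between feminism favorability and PD cooperation,
# optionally restricted to female respondents only.
import pandas as pd
from scipy.stats import pearsonr

def feminism_pd_correlation(df: pd.DataFrame, women_only: bool = False):
    sub = df[df["gender"].str.startswith("F", na=False)] if women_only else df
    sub = sub.dropna(subset=["feminism", "pd_cooperate"])
    r, p = pearsonr(sub["feminism"], sub["pd_cooperate"].astype(int))
    return r, p, len(sub)
```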

VI. Monetary Prize

1389 people answered the prize question at the bottom. 71.6% of these [n = 995] cooperated; 28.4% [n = 394] defected.
The prize goes to a person whose two-word phrase begins with "eponymous". If this person posts below (or PMs or emails me) the second word in their phrase, I will give them $60 * 71.6%, or about $43. I can pay to a PayPal account, a charity of their choice that takes online donations, or a snail-mail address via check.

VII. Calibration Questions

The population of Europe, according to designated arbiter Wikipedia, is 739 million people.

People were really, really bad at giving their answers in millions. I got numbers anywhere from 3 (really? three million people in Europe?) to 3 billion (3 billion million people = 3 quadrillion). I assume some people thought they were answering in billions, others in thousands, and others thought they were giving a straight answer in number of individuals.

My original plan was to just adjust these to make them fit, but this quickly encountered some pitfalls. Suppose someone wrote 1 million (as one person did). Could I fairly guess they meant 100 million, even though there's really no way to guess that from the text itself? 1 billion? Maybe they just thought there were really one million people in Europe?

If I was too aggressive correcting these, everyone would get close to the right answer not because they were smart, but because I had corrected their answers. If I wasn't aggressive enough, I would end up with some guy who answered 3 quadrillion Europeans totally distorting the mean.

I ended up deleting 40 answers that suggested there were fewer than ten million or more than eight billion Europeans, on the grounds that people probably weren't really that far off and it was probably some kind of data entry error, and correcting everyone who entered a reasonable answer in individuals so that it was in millions, as the question asked.

The remaining 1457 people who can either follow simple directions or at least fail to follow them in a predictable way estimated an average European population in millions of 601 + 35.6 (380, 500, 750).

Respondents were told to aim for within 10% of the real value, which means a correct answer was anything between 665 million and 812 million. 18.7% of people [n = 272] got within that window.
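
A rough sketch of that cleanup and scoring is below; the ten-million rescaling threshold is my own guess at what counts as "a reasonable answer in individuals" rather than the exact rule I used above.

```python
# Sketch: normalize Europe-population guesses to millions, drop answers
# implying fewer than ten million or more than eight billion Europeans,
# and score anyone within 10% of Wikipedia's 739 million as correct.
import pandas as pd

TRUE_MILLIONS = 739

def score_europe_guesses(raw: pd.Series) -> pd.Series:
    guesses = pd.to_numeric(raw, errors="coerce").dropna()
    # Assumption: answers above ten million were head-counts, not millions.
    guesses = guesses.where(guesses < 1e7, guesses / 1e6)
    guesses = guesses[(guesses >= 10) & (guesses <= 8000)]   # 10M to 8B people
    return (guesses - TRUE_MILLIONS).abs() <= 0.1 * TRUE_MILLIONS
```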

I divided people up into calibration brackets of [0,5], [6,15], [16, 25] and so on. The following are what percent of people in each bracket were right.

[0,5]: 7.7%
[6,15]: 12.4%
[16,25]: 15.1%
[26,35]: 18.4%
[36,45]: 20.6%
[46,55]: 15.4%
[56,65]: 16.5%
[66,75]: 21.2%
[76,85]: 36.4%
[86,95]: 48.6%
[96,100]: 100%

Among people who should know better (those who have read all or most of the Sequences and have > 500 karma, a group of 162 people):

[0,5]: 0
[6,15]: 17.4%
[16,25]: 25.6%
[26,35]: 16.7%
[36,45]: 26.7%
[46,55]: 25%
[56,65]: 0%
[66,75]: 8.3%
[76,85]: 40%
[86,95]: 66.6%
[96,100]: 66.6%

Clearly, the people who should know better don't.
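
The bracket tabulation above can be reproduced with something like the following, where `confidence` (stated probability in percent) and `correct` (within 10% of 739 million) are hypothetical column names:

```python
# Sketch: fraction of respondents who were right, grouped by stated confidence.
import pandas as pd

BINS = [0, 5, 15, 25, 35, 45, 55, 65, 75, 85, 95, 100]

def accuracy_by_bracket(df: pd.DataFrame) -> pd.Series:
    brackets = pd.cut(df["confidence"], bins=BINS, include_lowest=True)
    return df.groupby(brackets, observed=True)["correct"].mean()
```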

This graph represents your performance relative to ideal performance. Dipping below the blue ideal line represents overconfidence; rising above it represents underconfidence. With few exceptions you were very overconfident. Note that there were so few "elite" LWers at certain levels that the graph becomes very noisy and probably isn't representing much; that huge drop at 60 represents like two or three people. The orange "typical LWer" line is much more robust.

There is one other question that gets at the same idea of overconfidence. 651 people were willing to give a valid 90% confidence interval on what percent of people would cooperate (this is my fault; I only added this question about halfway through the survey once I realized it would be interesting to investigate). I deleted four for giving extremely high outliers like 9999% which threw off the results, leaving 647 valid answers. The average confidence interval was [28.3, 72.0], which just BARELY contains the correct answer of 71.6%. Of the 647 of you, only 346 (53.5%) gave 90% confidence intervals that included the correct answer!
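
A sketch of that coverage check, with `ci_low` and `ci_high` as hypothetical names for the two bounds people gave:

```python
# Sketch: what fraction of the stated 90% confidence intervals contain the
# observed cooperation rate?  Well-calibrated answers would give ~0.90.
import pandas as pd

def ci_coverage(df: pd.DataFrame, truth: float = 71.6) -> float:
    valid = df.dropna(subset=["ci_low", "ci_high"])
    valid = valid[valid["ci_high"] <= 100]      # drop outliers like 9999%
    hits = (valid["ci_low"] <= truth) & (truth <= valid["ci_high"])
    return hits.mean()
```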

Last year I complained about horrible performance on calibration questions, but we all decided it was probably just a fluke caused by a particularly weird question. This year's results suggest that was no fluke and that we haven't even learned to overcome the one bias that we can measure super-well and which is most easily trained away. Disappointment!

VIII. Public Data

There's still a lot more to be done with this survey. User:Unnamed has promised to analyze the "Extra Credit: CFAR Questions" section (not included in this post), but so far no one has looked at the "Extra Credit: Questions From Sarah" section, which I didn't really know what to do with. And of course this is the most complete survey yet for seeking classic findings like "People who disagree with me about politics are stupid and evil".

1480 people - over 90% of the total - kindly allowed me to make their survey data public. I have included all their information except the timestamp (which would make tracking pretty easy) including their secret passphrases (by far the most interesting part of this exercise was seeing what unusual two word phrases people could come up with on short notice).

560 comments

Comments sorted by top scores.

comment by jamesf · 2014-01-19T03:32:04.506Z · LW(p) · GW(p)

Next survey, I'd be interested in seeing statistics involving:

  • Recreational drug use
  • Quantified Self-related activities
  • Social media use
  • Self-perceived physical attractiveness on the 1-10 scale
  • Self-perceived holistic attractiveness on the 1-10 scale
  • Personal computer's operating system

Excellent write-up and I look forward to next year's.

Replies from: Acidmind, Desrtopa, ChristianKl, Frazer, shokwave
comment by Acidmind · 2014-01-19T11:04:04.863Z · LW(p) · GW(p)

I'd like:

  • Estimated average self-perceived physical attractiveness in the community
  • Estimated average self-perceived holistic attractiveness in the community

Oh, we are really self-serving elitist overconfident pricks, aren't we?

Replies from: Creutzer
comment by Creutzer · 2014-01-19T11:11:25.937Z · LW(p) · GW(p)

How do you expect anybody to be able to answer that and what does it even mean? First, what community, exactly? Second, average - over what?

Replies from: ChristianKl, jkaufman
comment by ChristianKl · 2014-01-19T15:55:13.140Z · LW(p) · GW(p)

I think he means the people who take the survey.

If you ask in the survey for the self-perceived physical attractiveness you can ask in the same survey for the estimated average of all survey takers.

comment by jefftk (jkaufman) · 2014-01-19T15:44:35.884Z · LW(p) · GW(p)

I think Acidmind means we should ask people their self-perceived attractiveness, and then ask them to estimate the average that will be given by all people taking the survey.

comment by Desrtopa · 2014-01-21T21:48:26.513Z · LW(p) · GW(p)

Self-perceived physical attractiveness on the 1-10 scale
Self-perceived holistic attractiveness on the 1-10 scale

While I don't remember the precise level, I would note that there are studies suggesting a rather surprisingly low level of correlation between self-perceived attractiveness and attractiveness as perceived by others. If we could induce a sufficient sample of participants to submit images of themselves to be rated by others (possibly in a context where they would not themselves find out the rating they received), I think the comparison of those two values would be much more interesting than self-perceived attractiveness alone.

Replies from: jamesf
comment by jamesf · 2014-01-22T04:44:13.429Z · LW(p) · GW(p)

That's kind of the idea. I'm more interested in correlations involving self-perceived attractiveness, particularly the holistic one, than correlations involving measured physical attractiveness. It's a nice proxy for self-esteem.

Anonymity is a bit of a problem, though I suppose a pool of people that are as likely as your average human to know anyone who uses LW could be wrangled with some effort.

Replies from: Desrtopa
comment by Desrtopa · 2014-01-22T15:12:39.285Z · LW(p) · GW(p)

I'd be interested in seeing how the relationship between self-perceived attractiveness and attractiveness as perceived by others among Less Wrong users compares to the relationship in the general population.

comment by ChristianKl · 2014-01-19T16:08:27.170Z · LW(p) · GW(p)

Quantified Self-related activities

I thought quite a bit about this and couldn't decide on many good questions.

The Anki question is sort of a result of this desire.

I thought of asking about pedometer usage (Fitbit, Nike Plus, etc.) but I'm not sure whether the number of people is enough to warrant the question.

Which specific questions would you want?

Social media use

By what metric? Total time investment? Few people can give you an accurate answer to that question.

Asking good questions isn't easy.

Self-perceived physical attractiveness on the 1-10 scale

I personally don't think that term is very meaningful. I do have HotOrNot pictures that scored a 9, but what does that mean? The last time I used Tinder I clicked through a lot of female images and very few liked me back. But I haven't yet isolated the factors, or learned the average success rates for guys using Tinder.

Recreational drug use

There's an interest in not gathering data that would cause someone to admit to criminal behavior. A person might be identifiable if you know their stances on a few questions. There's also the issue of possible outsiders being able to say: "30% of LW participants are criminals!"

Personal computer's operating system

I agree, that would be nice question.

Replies from: jamesf, shokwave
comment by jamesf · 2014-01-19T17:17:56.958Z · LW(p) · GW(p)

Quantified Self examples:

  • Have you attempted and stuck with the recording of personal data for >1 month for any reason? (Y/N)
  • If so, did you find it useful? (Y/N)

Social media example:

  • How many hours per week do you think you spend on social media?

Asking about self-perceived attractiveness tells us little about how attractive a person is, but quite a bit about how they see themselves, and I want to learn how that's correlated with answers to all these other questions.

Maybe the recreational drug use question(s) could be stripped from the public data?

Replies from: ChristianKl
comment by ChristianKl · 2014-01-19T18:17:16.961Z · LW(p) · GW(p)

Have you attempted and stuck with the recording of personal data for >1 month for any reason? (Y/N)

Having a calendar that records when you do what actions is recording of personal data, and for most people for timeframes longer than a month.

Anyone who uses Anki gets automated background data recording of how many minutes per day he uses Anki.

Replies from: jamesf
comment by jamesf · 2014-01-19T18:42:51.187Z · LW(p) · GW(p)

I might be willing to call either of those self-quantifying activities. Definitely the first one, if you actually put most activities you do on there rather than just the ones that aren't habitual or important enough to be sure you won't forget them. I think the question could be modified to capture the intent. Let's see...

Have you ever made an effort to record personal data for future analysis and stuck with it for >1 month? (Y/N)

Replies from: ChristianKl
comment by ChristianKl · 2014-01-19T19:02:58.259Z · LW(p) · GW(p)

That sounds like a good question. Hopefully we remember when the time comes up.

comment by shokwave · 2014-01-20T13:23:43.664Z · LW(p) · GW(p)

There's an interest in not gathering data that would cause someone to admit to criminal behavior.

As far as I'm aware - and correct me if I'm wrong - drug use is not a crime (and by extension admitting past drug use isn't either). Possession, operating a vehicle under the influence, etc, are all crimes, but actually having used drugs isn't a criminal act.

There's also the issue of possible outsiders being able to say: "30% of LW participants are criminals!"

The current survey (hell, the IQ section alone) gives them more ammunition than they could possibly expend, I feel.

Replies from: ChristianKl, Lalartu, nshepperd
comment by ChristianKl · 2014-01-20T13:38:02.783Z · LW(p) · GW(p)

The current survey (hell, the IQ section alone)

What's the problem with someone external writing an article about how LW is a group of people who think they have high IQs?

Replies from: shokwave, lmm
comment by shokwave · 2014-01-21T04:25:00.745Z · LW(p) · GW(p)

The same problem you presumably have with someone external writing an article about how LW is a group of criminals: it makes us look bad.

You might not agree with self-proclaimed high IQ being a social negative, but most of the world does.

Replies from: Lumifer, ChristianKl, Emile
comment by Lumifer · 2014-01-23T16:14:23.221Z · LW(p) · GW(p)

You might not agree with self-proclaimed high IQ being a social negative, but most of the world does.

So? Fuck 'em.

Replies from: shokwave, army1987
comment by shokwave · 2014-01-24T13:12:15.386Z · LW(p) · GW(p)

Excellent in-group signalling but terrible public relations move.

Replies from: Mestroyer, Lumifer
comment by Mestroyer · 2014-01-24T16:30:51.532Z · LW(p) · GW(p)

We don't need or want to signal friendliness to absolutely everyone. We want to carefully choose what kind of filters and how many filters we apply to people who might be interested in our community. Every filter comes with a cost in that it reduces our growth, and must be justified through increasing the quality of our discussions. However, filter not at all, and you might as well just step out onto the street and talk to strangers.

Personally, I am all for filtering out the "punish for not putting modesty before facts" attitude. Both because I find it irritating, and because it drives away boastful awesome people, and I like substantiated boasting and the people who do it.

In other words, "Yeah, fuck 'em."

comment by Lumifer · 2014-01-24T15:56:55.673Z · LW(p) · GW(p)

So is admitting to being an atheist, for example. Optimizing for public relations is rarely a good move.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-01-24T15:59:48.351Z · LW(p) · GW(p)

So is admitting to being an atheist, for example.

That's a lot more culture specific.

comment by A1987dM (army1987) · 2014-01-24T10:16:04.940Z · LW(p) · GW(p)

I would also say exactly the same thing with “recreational drug use” replacing “high IQ”.

Replies from: Lumifer
comment by Lumifer · 2014-01-24T15:38:06.807Z · LW(p) · GW(p)

True, though a notable difference is that recreational drug use is illegal in many jurisdictions.

comment by ChristianKl · 2014-01-21T12:50:13.581Z · LW(p) · GW(p)

I don't think the goal of LW is to win social approval from the average person.

On the one hand, it's to grow the pool of people who might want to participate in LW. The fact that LW has many smart people in it could draw the right people into LW.

On the other hand, it's to further the agenda of CFAR, MIRI and FHI. I don't think the world listens any less to a programmer who wants to warn about the dangers of UFAI just because the programmer proclaims that he's smart.

It's very hard for me to imagine a media article that wouldn't describe CFAR as a bunch of people who think they are smart. If you write the advancement of rationality on your banner, that's something everyone will assume anyway. Having polled IQ data doesn't do further damage.

Replies from: private_messaging, shokwave
comment by private_messaging · 2014-01-24T13:51:53.707Z · LW(p) · GW(p)

On the other hand it's to further the agenda of CFAR, MIRI and FHI. I don't think the world listens less to a programmer who wants to warn about the dangers of UFAI when the programmer proclaims that he's smart.

Mostly, among people who proclaim an IQ of, say, 150 or higher, over 9 times out of 10 it's going to be because of some kind of issue such as narcissism.

The funniest aspect of self declared bayesianism is that "bayesians" never imagine that it could be applied to what they say (and go on fuming about punishments and status games and reflexes whenever it is).

Replies from: Vaniver, Jiro
comment by Vaniver · 2014-01-24T17:25:29.175Z · LW(p) · GW(p)

The funniest aspect of self declared bayesianism is that "bayesians" never imagine that it could be applied to what they say (and go on fuming about punishments and status games and reflexes whenever it is).

Emphasis mine. Alternatively, those Bayesians with social graces aren't available, because they don't do anything ridiculous enough to remember.

Replies from: private_messaging
comment by private_messaging · 2014-01-24T18:11:10.774Z · LW(p) · GW(p)

Fair enough, albeit social graces in that case would imply good understanding of how other people process evidence, which would make self-labeling as "bayesian" seem very silly.

comment by Jiro · 2014-01-24T16:05:37.773Z · LW(p) · GW(p)

Imagine that 1% of the population have high IQs (and will claim so) and 10% of the population are narcissistic, and half of those like to claim they have high IQ. The Bayesian calculation would be P(high IQ|claim high IQ) = P(claim high IQ|high IQ) × P(high IQ) / [P(claim high IQ|high IQ) × P(high IQ) + P(claim high IQ|narcissism) × P(narcissism)] = (1.00 × 0.01) / (1.00 × 0.01 + 0.5 × 0.10) = 1/6.

You can quibble about the exact figures, but private_messaging is correct here. Because narcissism is relatively common, the claim of having high IQ is very weak evidence for having high IQ but very strong evidence for being narcissistic. (Although it's stronger evidence for high IQ in a community where high IQ is more common.)
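
Spelled out in code, as a sketch using Jiro's illustrative 1%, 10%, and 50% figures rather than survey data:

```python
# Bayes' rule with the illustrative numbers from the comment above.
p_high_iq = 0.01                # P(high IQ); assume all of them claim it
p_narcissist = 0.10             # P(narcissism)
p_claim_given_narc = 0.5        # half of narcissists claim high IQ

posterior = (1.0 * p_high_iq) / (1.0 * p_high_iq + p_claim_given_narc * p_narcissist)
print(posterior)                # 0.1666... = 1/6
```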

Replies from: army1987, private_messaging
comment by A1987dM (army1987) · 2014-01-25T08:50:46.748Z · LW(p) · GW(p)

You can quibble about the exact figures,

Indeed, I think you're way overestimating P(claim high IQ|high IQ).

comment by private_messaging · 2014-01-24T16:50:29.657Z · LW(p) · GW(p)

To clarify, it's still as strong evidence of having a high IQ as a statement can be; it is just not strong enough to overcome the low prior.

Then there's the issue that - I do not know about the US, but it seems fairly uncommon to have taken a professionally administered IQ test here, whether you are smart or not. It may be that LW has an unusually high percentage of people who took such a test.

comment by shokwave · 2014-01-23T12:33:41.253Z · LW(p) · GW(p)

If you replace "smart" with "used drugs recreationally" you might see my point?

Replies from: ChristianKl
comment by ChristianKl · 2014-01-23T14:42:15.026Z · LW(p) · GW(p)

If you replace "smart" with "used drugs recreationally" you might see my point?

Actually, I don't think that rationality (as the CFAR mission) has much to do with using drugs recreationally; it does have something to do with being smart. You could have a CFAR that experiments with various mind-altering substances to see which of those improve rationality. That's not the CFAR that we have.

I did a lot of QS PR. That means having a 2-hour interview where the journalist might pick 30 seconds of phrases to air on TV. I wouldn't have had any issue in that context with playing into a nerd stereotype. On the other hand, I wouldn't have said something that fits QS users into the stereotype of drug users.

Replies from: shokwave
comment by shokwave · 2014-01-24T13:11:26.404Z · LW(p) · GW(p)

Fair enough; drug use is a lot more public relations damaging than self-proclaimed high IQ.

comment by Emile · 2014-01-21T12:05:47.952Z · LW(p) · GW(p)

Depends on how loudly you self-proclaim it. It's not as if we had a Mensa banner on the front page or something.

Replies from: shokwave
comment by shokwave · 2014-01-23T12:34:50.908Z · LW(p) · GW(p)

And the same goes for recreational drug-use, no? If it's just in the survey like IQ is and we don't have a banner proclaiming it, the argument that it might make us look bad doesn't hold any water.

comment by lmm · 2014-01-20T20:54:33.107Z · LW(p) · GW(p)

It makes it easy to portray LW as a bunch of out-of-touch nerds?

Replies from: blacktrance, ChristianKl
comment by blacktrance · 2014-01-20T21:00:53.859Z · LW(p) · GW(p)

"I'm part of a community, you live in a bubble, he's out of touch."

comment by ChristianKl · 2014-01-20T23:12:48.926Z · LW(p) · GW(p)

How does having a high IQ mean someone is out of touch?

Yes, people can argue that LW is a bunch of nerds, but I don't think that's much of a problem. If we get news articles about how smart nerds think that unfriendly AI is a big risk for humanity, I don't think the fact that those smart nerds think they have high IQs is a problem.

It's different from arguing criminality, or arguing that people are delusional because of drug use.

Replies from: Nornagest
comment by Nornagest · 2014-01-20T23:34:33.398Z · LW(p) · GW(p)

There is a stereotype -- at least in the United States -- of nerds believing that high intelligence entitles them to claim insight and moral purity beyond their actual abilities, and implicitly of their inevitable downfall and the triumph of good old-fashioned common sense. We risk pattern-matching to this stereotype in any case, thanks to bandying about unusual ethical considerations in academic language, but talking up our own intelligence doesn't help at all.

It isn't having high IQ, in other words, so much as talking about it.

Replies from: ChristianKl
comment by ChristianKl · 2014-01-20T23:47:22.923Z · LW(p) · GW(p)

We risk pattern-matching to this stereotype in any case

I can't see how you could structure LW in a way that prevents someone who wants to talk about LW as a bunch of nerds from doing so. You don't need a statistic about the average IQ of LW to do so. Gathering the IQ data doesn't bring up anything that wasn't there before.

The basilisk episode is a lot more useful if you want to argue that LW is a group of out-of-touch nerds. See RationalWiki.

comment by Lalartu · 2014-01-24T09:36:36.039Z · LW(p) · GW(p)

If one is known for using drugs, then every unusual claim he makes is dismissed as a literal pipe dream. It is a huge blow to authority.

comment by nshepperd · 2014-01-20T23:40:49.680Z · LW(p) · GW(p)

How do you use a drug without possessing it at some point? Isn't admitting use of drugs a fortiori an admission of possession of drugs?

comment by Frazer · 2014-05-02T03:35:41.782Z · LW(p) · GW(p)

I'd also like to see time spent per day meditating, or on other forms of mental training.

Replies from: ChristianKl
comment by ChristianKl · 2014-05-02T12:07:59.939Z · LW(p) · GW(p)

How would you word the question?

comment by shokwave · 2014-01-19T10:20:51.748Z · LW(p) · GW(p)
  • Are you Ask or Guess culture?
Replies from: ChristianKl
comment by ChristianKl · 2014-01-19T15:55:48.164Z · LW(p) · GW(p)

I'm not culture.

In some social circles I might behave in one way, in others another way. In different situations I act differently depending on how strongly I want to communicate a demand.

Replies from: shokwave
comment by shokwave · 2014-01-20T13:05:24.478Z · LW(p) · GW(p)

Good point. It might not even make sense to ask "Which culture of social interaction do you feel most at home with, Ask or Guess?".

comment by Vaniver · 2014-01-19T06:15:13.357Z · LW(p) · GW(p)

Repeating complaints from last year:

So in 2012 we started asking for SAT and ACT scores, which are known to correlate well with IQ and are much harder to get wrong. These scores confirmed the 139 IQ result on the 2012 test.

The 2012 estimate from SATs was about 128, since the 1994 renorming destroyed the old relationship between the SAT and IQ. Our average SAT (on 1600) was again about 1470, which again maps to less than 130, but not by much. (And, again, self-reported average probably overestimates actual population average.)

Last year I complained about horrible performance on calibration questions, but we all decided it was probably just a fluke caused by a particularly weird question. This year's results suggest that was no fluke and that we haven't even learned to overcome the one bias that we can measure super-well and which is most easily trained away. Disappointment!

I still think you're asking this question in a way that's particularly hard for people to get right. (The issue isn't the fact you ask about, but what sort of answers you look for.)

You've clearly got an error in your calibration chart; you can't have 2 out of 3 elite LWers be right in the [95,100] category while 100% of typical LWers are right in that category. Or are you not including the elite LWers in typical LWers? Regardless, the person who gave a calibration of 99% and the two people who gave calibrations of 100% aren't elite LWers (karmas of 0, 0, and 4; two 25% of the sequences and one 50%).

With few exceptions you were very overconfident.

The calibration chart doesn't make clear the impact of frequency. If most people are providing probabilities of 20%, and they're about 20% right, then most people are getting it right - and the 2-3 people who provided a probability of 60% don't matter.

There are a handful of ways to depict this. One I haven't seen before, which is probably ugly, is to scale the width of the points by the frequency. Instead, here's a flat graph of the proportion of survey respondents who gave each calibration bracket:

What's significant is that if you add together the 10, 20, and 30 brackets (the ones around the correct baseline probability of ~20% of getting it right) you get 50% for typical LWers and 60% for elite LWers; so most people were fairly close to correctly calibrated, and the people who thought they had more skill on the whole dramatically overestimated how much more skill they had.

(I put down 70% probability, but was answering the wrong question; I got the population of the EU almost exactly right, which I knew from GDP and per-capita comparisons to the US. Oops.)

Replies from: private_messaging
comment by private_messaging · 2014-01-19T14:24:50.865Z · LW(p) · GW(p)

The 2012 estimate from SATs was about 128, since the 1994 renorming destroyed the old relationship between the SAT and IQ. Our average SAT (on 1600) was again about 1470, which again maps to less than 130, but not by much. (And, again, self-reported average probably overestimates actual population average.)

It's very interesting that the same mistake was boldly made again this year... I guess this mistake is sort of self reinforcing due to the uncannily perfect equality between mean IQ and what's incorrectly estimated from the SAT scores.

Replies from: Vaniver, Yvain
comment by Vaniver · 2014-01-19T21:16:43.852Z · LW(p) · GW(p)

Actually, I just ran the numbers on the SAT2400 and they're closer; the average percentile predicted from that is 99th, which corresponds to about 135.

Replies from: private_messaging, None, Yvain
comment by private_messaging · 2014-01-19T23:10:39.405Z · LW(p) · GW(p)

For non-Americans, what's the difference between SAT 2400 and SAT 1600?

Averaging SAT scores is a little iffy because, given a cut-off, they won't have a Gaussian distribution. Also, given imperfect correlation, it is unclear how one should convert the scores. If I pick someone with an SAT in the top 1%, I shouldn't expect an IQ in the top 1% because of regression towards the mean. (Granted, I can expect both scores to be closer if I were picking by some third factor influencing both.)

It'd be interesting to compare frequency of advanced degrees with the scores, for people old enough to have advanced degrees.

Replies from: Prismattic, Richard_Kennaway
comment by Prismattic · 2014-01-20T00:18:45.061Z · LW(p) · GW(p)

The SAT used to have only two sections, with a maximum of 800 points each, for a total of 1600 (the worst possible score, IIRC, was 200 on each for 400). At some point after I graduated high school, they added a 3rd 800 point section (I think it might be an essay), so the maximum score went from 1600 to 2400.

Replies from: Fermatastheorem
comment by Fermatastheorem · 2014-01-21T04:32:15.364Z · LW(p) · GW(p)

Yes, it's a timed essay.

comment by Richard_Kennaway · 2014-01-20T00:06:31.772Z · LW(p) · GW(p)

Also, given imperfect correlation it is unclear how one should convert the scores. If I pick someone with SAT in top 1% I shouldn't expect IQ in the top 1% because of regression towards the mean.

The correlation is the slope of the regression line in coordinates normalised to unit standard deviations. Assuming (for mere convenience) a bivariate normal distribution, let F be the cumulative distribution function of the unit normal distribution, with inverse invF. If someone is at the 1-p level of the SAT distribution (in the example p=0.01) then the level to guess they are at in the IQ distribution (or anything else correlated with SAT) is q = F(c · invF(p)). For p=0.01, here are a few illustrative values:

c   0.0000    0.1000    0.2000    0.3000    0.4000    0.5000    0.6000    0.7000    0.8000    0.9000    1.0000

q   0.5000    0.4080    0.3209    0.2426    0.1760    0.1224    0.0814    0.0517    0.0314    0.0181    0.0100

The standard deviation of the IQ value, conditional on the SAT value, is the unconditional standard deviation multiplied by c' = sqrt(1-c^2). The q values for 1 standard deviation above and below are therefore given by qlo = F(-c' + c invF(p)) and qhi = F(c' + c invF(p)).

qlo 0.1587    0.1098    0.0742    0.0493    0.0324    0.0212    0.0141    0.0096    0.0069    0.0057    0.0100

qhi 0.8413    0.7771    0.6966    0.6010    0.4944    0.3832    0.2757    0.1803    0.1036    0.0487    0.0100
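
For anyone who wants to reproduce these numbers, here is a minimal sketch in Python using scipy (an assumption on my part that scipy is acceptable here); c is the SAT-IQ correlation and p the top fraction on the SAT, as above.

```python
# Sketch of the calculation above: someone in the top fraction p of the SAT
# distribution is guessed to be in the top fraction q = F(c * invF(p)) of the
# IQ distribution, with a one-standard-deviation band [qlo, qhi] from the
# conditional sd c' = sqrt(1 - c^2).
import numpy as np
from scipy.stats import norm

def top_fraction_in_iq(p, c):
    """Return (q, qlo, qhi) for SAT top-fraction p and correlation c."""
    z = norm.ppf(p)              # invF(p); negative for p < 0.5
    c_prime = np.sqrt(1 - c**2)  # conditional standard deviation
    q = norm.cdf(c * z)
    qlo = norm.cdf(-c_prime + c * z)
    qhi = norm.cdf(c_prime + c * z)
    return q, qlo, qhi

for c in np.arange(0.0, 1.01, 0.1):
    q, qlo, qhi = top_fraction_in_iq(0.01, c)
    print(f"c={c:.1f}  q={q:.4f}  qlo={qlo:.4f}  qhi={qhi:.4f}")
```

Running this reproduces the rows above (e.g. c=0.5 gives q of about 0.1224).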
Replies from: private_messaging
comment by private_messaging · 2014-01-24T15:44:42.566Z · LW(p) · GW(p)

There are subtleties, though. E.g., if we take some programming contest finalists/winners and look at their IQ scores, those are regressed towards the mean relative to their programming contest performance. Their other abilities will be regressed towards the mean from that same height, not from their IQ. This might explain the dramatic cognitive skill disparity between, say, Mensa and some professional group with the same IQs.

comment by [deleted] · 2014-02-21T16:26:47.326Z · LW(p) · GW(p)

2210 was 98th percentile in 2013. But it was 99th in 2007.

I haven't seen an SAT-IQ comparison site I trust. This one listed on gwern's website for example seems wrong.

Replies from: Vaniver
comment by Vaniver · 2014-02-21T21:30:38.277Z · LW(p) · GW(p)

2210 was 98th percentile in 2013. But it was 99th in 2007.

If I remember correctly, I did SAT->percentile->average, rather than SAT->average->percentile; the first method should lead to a higher estimate if the tail is negative (which I think it is).

[edit] Over here is the work and source for that particular method -- turns out I did SAT->average->percentile to get that result, with a slightly different table, and I guess I didn't report the average percentile that I calculated (which you had to rely on interpolation for anyway).

This one listed on gwern's website for example seems wrong.

It's only accurate up to 1994.

comment by Scott Alexander (Yvain) · 2014-01-20T02:41:20.920Z · LW(p) · GW(p)

One reason SAT1600 and SAT2400 scores may differ is that some of the SAT1600 scores might in fact have come from before the 1994 renorming. Have you tried doing pre-1994 and post-1994 scores separately (guessing when someone took the SAT based on age)?

Replies from: Vaniver
comment by Vaniver · 2014-01-20T04:52:47.181Z · LW(p) · GW(p)

SAT1600 scores by age:

Average SAT for LWers 30 and under (217 total): 1491. (27 1600s.)

Average SAT for LWers 31 to 35 (74 total): 1462.7 (9 1600s.)

Average SAT for LWers 36 and older (81 total): 1437. (One 1600, by someone who's 56.)

I'm pretty sure the 36-and-above group are all the older SAT, suspect the middle group contains both, and am pretty confident the younger group is mostly the newer SAT. The strong majority comes from the post-1995 test, and the scores don't seem to have changed by all that much in nominal terms.
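
If anyone wants to redo this breakdown from the public data, here is a rough sketch; the CSV filename and column names are hypothetical, so substitute whatever the public file actually uses.

```python
# Age-bracket breakdown of SAT-out-of-1600 scores, as reported above.
# "survey2013_public.csv", "Age" and "SAT1600" are placeholder names.
import pandas as pd

df = pd.read_csv("survey2013_public.csv")
sat = df[["Age", "SAT1600"]].dropna()

brackets = pd.cut(sat["Age"], bins=[0, 30, 35, 200],
                  labels=["30 and under", "31-35", "36 and older"])
summary = sat.groupby(brackets)["SAT1600"].agg(
    ["count", "mean", lambda s: (s == 1600).sum()])
summary.columns = ["n", "mean SAT", "perfect 1600s"]
print(summary)
```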

Replies from: private_messaging
comment by private_messaging · 2014-01-20T11:12:33.028Z · LW(p) · GW(p)

Which raises another question: why do the SAT 2400 and SAT 1600 estimates differ so much?

comment by Scott Alexander (Yvain) · 2014-01-21T03:02:46.423Z · LW(p) · GW(p)

According to Vaniver's data downthread, SAT taken only from LWers older than 36 (taking the old SAT) predicts 140 IQ.

I can't calculate the IQ of LWers younger than 36 because I can't find a site I trust to predict IQ from new SAT. The only ones I get give absurd results like average SAT 1491 implies average IQ 151.

comment by jefftk (jkaufman) · 2014-01-19T16:12:22.727Z · LW(p) · GW(p)

The IQ numbers have time and time again answered every challenge raised against them and should be presumed accurate.

What if the people who have taken IQ tests are on average smarter than the people who haven't? My impression is that people mostly take IQ tests when they're somewhat extreme: either low and trying to qualify for assistive services or high and trying to get "gifted" treatment. If we figure lesswrong draws mostly from the high end, then we should expect the IQ among test-takers to be higher than what we would get if we tested random people who had not previously been tested.

The IQ Question read: "Please give the score you got on your most recent PROFESSIONAL, SCIENTIFIC IQ test - no Internet tests, please! All tests should have the standard average of 100 and stdev of 15."

Among the subset of people making their data public (n=1480), 32% (472) put an answer here. Those 472 reports average 138, in line with past numbers. But 32% is low enough that we're pretty vulnerable to selection bias.

(I've never taken an IQ test, and left this question blank.)

Replies from: VincentYu, ArisKatsaris
comment by VincentYu · 2014-01-20T15:01:31.166Z · LW(p) · GW(p)

What if the people who have taken IQ tests are on average smarter than the people who haven't? My impression is that people mostly take IQ tests when they're somewhat extreme: either low and trying to qualify for assistive services or high and trying to get "gifted" treatment. If we figure lesswrong draws mostly from the high end, then we should expect the IQ among test-takers to be higher than what we would get if we tested random people who had not previously been tested.

This sounds plausible, but from looking at the data, I don't think this is happening in our sample. In particular, if this were the case, then we would expect the SAT scores of those who did not submit IQ data to be different from those who did submit IQ data. I ran an Anderson–Darling test on each of the following pairs of distributions:

  • SAT out of 2400 for those who submitted IQ data (n = 89) vs SAT out of 2400 for those who did not submit IQ data (n = 230)
  • SAT out of 1600 for those who submitted IQ data (n = 155) vs SAT out of 1600 for those who did not submit IQ data (n = 217)

The p-values came out as 0.477 and 0.436 respectively, which means that the Anderson–Darling test was unable to distinguish between the two distributions in each pair at any conventional significance level.
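
For anyone who wants to rerun this comparison, scipy has a k-sample Anderson–Darling test; note that scipy floors and caps the significance level it reports, so exact p-values like 0.477 need a different implementation. The arrays below are stand-ins for the actual score lists.

```python
# Compare the SAT distributions of IQ-submitters vs. non-submitters.
import numpy as np
from scipy.stats import anderson_ksamp

rng = np.random.default_rng(0)
sat_with_iq = rng.normal(1470, 90, size=155)     # stand-in for real scores
sat_without_iq = rng.normal(1465, 95, size=217)  # stand-in for real scores

result = anderson_ksamp([sat_with_iq, sat_without_iq])
print("A-D statistic:", result.statistic)
print("approximate significance level:", result.significance_level)
# A large reported significance level means the test cannot distinguish the
# two distributions; scipy caps the reported value at 0.25 and floors it at 0.001.
```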

As I did for my last plot, I've once again computed for each distribution a kernel density estimate with bootstrapped confidence bands from 999 resamples. From visual inspection, I tend to agree that there is no clear difference between the distributions. The plots should be self-explanatory:

(More details about these plots are available in my previous comment.)

Edit: Updated plots. The kernel density estimates are now fixed-bandwidth using the Sheather–Jones method for bandwidth selection. The density near the right edge is bias-corrected using an ad hoc fix described by whuber on stats.SE.
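
As a rough sketch of the bootstrapped-band idea (scipy's gaussian_kde does not offer Sheather–Jones bandwidth selection, so Silverman's rule stands in for it here, and `data` is a placeholder for the actual scores):

```python
# Pointwise bootstrap confidence band for a kernel density estimate.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
data = rng.normal(1470, 90, size=300)        # placeholder for the SAT scores
grid = np.linspace(data.min(), data.max(), 200)

estimate = gaussian_kde(data, bw_method="silverman")(grid)

boot = np.empty((999, grid.size))
for i in range(999):
    resample = rng.choice(data, size=data.size, replace=True)
    boot[i] = gaussian_kde(resample, bw_method="silverman")(grid)

lower, upper = np.percentile(boot, [2.5, 97.5], axis=0)  # pointwise 95% band
```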

Replies from: jkaufman
comment by jefftk (jkaufman) · 2014-01-20T22:53:41.130Z · LW(p) · GW(p)

Thanks for digging into this! Looks like the selection bias isn't significant.

comment by ArisKatsaris · 2014-01-19T17:06:20.310Z · LW(p) · GW(p)

But 32% is low enough that we're pretty vulnerable to selection bias

The large majority of LessWrongers in the USA have however also provided their SAT scores, and those are also very high values (from what little I know of SATs)...

Replies from: Vaniver
comment by Vaniver · 2014-01-19T20:45:42.256Z · LW(p) · GW(p)

The large majority of LessWrongers in the USA have however also provided their SAT scores, and those are also very high values (from what little I know of SATs)...

The reported SAT numbers are very high, but the reported IQ scores are extremely high. The mean reported SAT score, if received on the modern 1600 test, corresponds to an IQ in the upper 120s, not the upper 130s. The mean reported SAT2400 score was 2207, which corresponds to the 99th but not the 99.5th percentile. The 99th percentile is an IQ of 135, which suggests that the IQ self-reports may not be that far off compared to the SAT self-reports.
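
The percentile-to-IQ step here is just the normal quantile on a mean-100, SD-15 scale; as a quick check:

```python
# IQ corresponding to the 99th percentile on a mean-100, SD-15 scale.
from scipy.stats import norm
print(100 + 15 * norm.ppf(0.99))   # about 134.9, i.e. roughly 135
```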

Replies from: michaelsullivan, jaime2000
comment by michaelsullivan · 2014-01-22T17:29:19.800Z · LW(p) · GW(p)

Some of us took the SAT before 1995, so it's hard to disentangle those scores. A pre-1995 1474 would be at 99.9x percentile, in line with an IQ score around 150-155. If you really want to compare, you should probably assume anyone age 38 or older took the old test and use the recentering adjustment for them.

I'm also not sure how well the SAT distinguishes at the high end. It's apparently good enough for some high IQ societies, who are willing to use the tests for certification. I was shown my results and I had about 25 points off perfect per question marked wrong. So the distinction between 1475 and 1600 on my test would probably be about 5 total questions. I don't remember any questions that required reasoning I considered difficult at the time. The difference between my score and one 100 points above or below might say as much about diligence or proofreading as intelligence.

Admittedly, the variance due to non-g factors should mostly cancel in a population the size of this survey, and is likely to be a feature of almost any IQ test.

That said, the 1995 score adjustment would have to be taken into account before using it as a proxy for IQ.

Replies from: private_messaging
comment by private_messaging · 2014-01-22T17:38:01.478Z · LW(p) · GW(p)

Conversion is a very tricky matter, because the correlation is much less than 1 (0.369 in the survey, apparently).

With correlation less than 1, regression towards the mean comes into play, so the predicted IQ from a perfect SAT is actually not that high (someone posted coefficients in a parallel discussion), and the predicted SAT from a very high IQ is likewise not that awesome.

The reason the figures seem rather strange is that they imply some kind of extreme filtering by IQ here. The negative correlation between time here and IQ suggests that the content is not acting as much of a filter, or is acting as a filter in the opposite direction.

Replies from: Vaniver
comment by Vaniver · 2014-01-22T18:38:44.221Z · LW(p) · GW(p)

The negative correlation between time here and IQ suggests that the content is not acting as much of a filter, or is acting as a filter in the opposite direction.

Well, alternatively, old-timers feel it's more important to accurately estimate their IQ, and newcomers feel it's more important to be impressive. There also might not be an effect that needs explaining: I haven't looked at a scatterplot of IQ by time in community or karma yet for this year; last year, there were a handful of low-karma people who reported massive IQs, and once you removed those outliers the correlation mostly vanished.

Replies from: private_messaging
comment by private_messaging · 2014-01-22T20:02:05.337Z · LW(p) · GW(p)

You still need to explain how the population ended up so extremely filtered.

Without the rest of the survey, one might imagine that the various unusual beliefs here are something that only very smart people can see as correct, and so only very smart people agree and join; but the survey has shown that said unusual beliefs weren't correlated with self-reported IQ or SAT score.

comment by jaime2000 · 2014-01-20T19:55:39.143Z · LW(p) · GW(p)

The Wikipedia article states that those are percentiles of test-takers, not the population as a whole. What percentage of seniors take the SAT? I tried googling, but I could not find the figure.

My first thought is that most people who don't take the SAT don't intend to go to college and are likely to be below the mean reported SAT score, but then I realized that a non-negligible subset of those people must have taken only the ACT as their admission exam.

Replies from: Vaniver
comment by Vaniver · 2014-01-20T20:01:39.418Z · LW(p) · GW(p)

I don't have solid numbers myself, but the percentile of test-takers should underestimate the percentile of the population. However, there is regression to the mean to take into account, as well as the fact that many people take the SAT multiple times and report the most favorable score, both of which suggest that the test score should overestimate IQ; I'm fudging it by treating those two as if they cancel out.

Replies from: michaelsullivan
comment by michaelsullivan · 2014-01-22T17:31:59.650Z · LW(p) · GW(p)

Don't most people who report IQ scores do the same thing if they have taken multiple tests?

Replies from: Vaniver, Elund
comment by Vaniver · 2014-01-22T18:35:39.370Z · LW(p) · GW(p)

Possibly. My suspicion is that fewer people have taken multiple professional IQ tests (I've only taken one professional one) than multiple SATs (I think I took it three times, at various ages). I score significantly better on the Raven's subtest than on other subtests, and so my IQ.dk score was significantly higher than my professional IQ test last year -- but this year I only reported the professional one, because that was all that was asked for. (I might not be representative.)

comment by Elund · 2014-10-24T22:13:56.186Z · LW(p) · GW(p)

Not if they followed the survey instructions, which asked for only the scores from the most recent professional IQ test they took.

comment by Zack_M_Davis · 2014-01-19T00:47:42.705Z · LW(p) · GW(p)

The second word in the winning secret phrase is pony (chosen because you can't spell the former without the latter); I'll accept the prize money via PayPal to main att zackmdavis daht net.

(As I recall, I chose to Defect after looking at the output of one call to Python's random.random() and seeing a high number, probably point-eight-something. But I shouldn't get credit for following my proposed procedure (which turned out to be wrong anyway) because I don't remember deciding beforehand that I was definitely using a "result > 0.8 means Defect" convention (when "result < 0.2 means Defect" is just as natural). I think I would have chosen Cooperate if the random number had come up less than 0.8, but I haven't actually observed the nearby possible world where it did, so it's at least possible that I was rationalizing.)

(Also, I'm sorry for being bad at reading; I don't actually think there are seven hundred trillion people in Europe.)

Replies from: simplicio
comment by simplicio · 2014-01-20T14:18:35.347Z · LW(p) · GW(p)

When I heard about Yvain's PD contest, I flipped a coin. I vowed that if it came up heads, I would Paypal the winner $200 (on top of their winnings), and if it came up tails I would ask them for the prize money they won.

It came up tails. YOUR MOVE.

(No, not really. But somebody here SHOULD have made such a commitment.)

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2014-01-21T04:47:53.322Z · LW(p) · GW(p)

Hey, it's not too late: if you should have made such a commitment, then the mere fact that you didn't actually do so shouldn't stop you now. Go ahead, flip a coin; if it comes up heads, you pay me $200; if it comes up tails, I'll ask Yvain to give you the $42.96.

Replies from: Eliezer_Yudkowsky, simplicio
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-01-21T15:52:19.794Z · LW(p) · GW(p)

...I don't think this is a very wise offer to make on the Internet unless the "coin" is somewhere you can both see it.

Replies from: Zack_M_Davis, Fermatastheorem
comment by Zack_M_Davis · 2014-01-22T04:20:25.610Z · LW(p) · GW(p)

Yes, of course I thought of that when considering my reply, but in this particular context (where we're considering counterfactual dealmaking presumably because the idea of pulling such a stunt in real life is amusing), I thought it was more in the spirit of things to be trusting. As you know, Newcomblike arguments still go through when Omega is merely a very good and very honest predictor rather than a perfect one, and my prior beliefs about reasonably-well-known Less Wrongers make me willing to bet that Simplicio probably isn't going to lie in order to scam me out of forty-three dollars. (If it wasn't already obvious, my offer was extended to Simplicio only and for the specified amounts only.)

comment by Fermatastheorem · 2014-01-21T20:33:14.241Z · LW(p) · GW(p)

Never mind -- I thought I'd found a site that would flip a coin and save the result with a timestamp.

Why hasn't anybody made this yet?

Replies from: gwern, MugaSofer
comment by gwern · 2014-01-22T02:52:14.035Z · LW(p) · GW(p)

Precommitment is a solved problem which doesn't need a trusted website. For example, simplicio could've released a hash precommitment (made using a local hash utility like sha512sum) to Yvain after taking the survey and just now unveiled that input, if he was serious about the counterfactual.

(He would also want to replace the 'flip a coin' with eg. 'total number of survey participants was odd'.)

You can even still easily do a verifiable coin flip now. For example, you could pick a commonly observable future event like a property of a Bitcoin block 24 hours from now, or you could both post a hash precommitment of a random bit; then, when both are posted, each releases the chosen bit and verifies the other's hash, and the two bits are XORed to choose the winner.
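
A minimal sketch of that commit-and-reveal flip using only Python's standard library (the salt is there so a single committed bit can't be brute-forced from its hash):

```python
# Two-party commit-reveal coin flip, as described above.
import hashlib
import secrets

def commit(bit: int):
    """Return (commitment, opening): publish the commitment, keep the opening."""
    salt = secrets.token_hex(16)   # prevents brute-forcing the single bit
    opening = f"{bit}:{salt}"
    return hashlib.sha512(opening.encode()).hexdigest(), opening

def verify(commitment: str, opening: str) -> int:
    """Check an opened commitment and return the committed bit."""
    assert hashlib.sha512(opening.encode()).hexdigest() == commitment
    return int(opening.split(":")[0])

# Each party commits to a random bit and publishes only the hash...
a_commit, a_open = commit(secrets.randbelow(2))
b_commit, b_open = commit(secrets.randbelow(2))
# ...then both reveal, each verifies the other's hash, and the XOR of the
# two bits decides the flip.
flip = verify(a_commit, a_open) ^ verify(b_commit, b_open)
print("heads" if flip else "tails")
```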

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2014-01-22T21:18:09.547Z · LW(p) · GW(p)

No need for Bitcoin etc.; one side commits to a bit, then the other side calls heads or tails, and they win if the call was correct.

comment by MugaSofer · 2014-01-30T15:30:16.653Z · LW(p) · GW(p)

They have - they're known as "dice rollers", because they're usually used for rolling dice in play-by-post RPGs.

For example.

comment by simplicio · 2014-01-21T15:02:23.772Z · LW(p) · GW(p)

Em, I don't actually like those odds all that much, thanks!

comment by [deleted] · 2014-01-19T18:32:29.462Z · LW(p) · GW(p)

Yvain - Next year, please include a question asking if the person taking the survey uses PredictionBook. I'd be curious to see if these people are better calibrated.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2014-01-23T06:28:27.827Z · LW(p) · GW(p)

Maybe ask them how many predictions they have made, so we can see if using it more makes people better calibrated.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-01-26T01:36:32.275Z · LW(p) · GW(p)

Probably a good idea-- I use PredictionBook for casual entertainment, not as a serious effort at self-calibration.

comment by Vaniver · 2014-01-19T04:30:14.033Z · LW(p) · GW(p)

Thanks for doing this!

Results from previous years: 2009 2011 2012

comment by shminux · 2014-01-19T04:15:24.199Z · LW(p) · GW(p)

Yvain is not hugely on board with the idea of running correlations between everything and seeing what sticks, but will grudgingly publish the results because of the very high bar for significance (p < .001 on ~800 correlations suggests < 1 spurious result) and because he doesn't want to have to do it himself.

The standard way to fix this is to run them on half the data only and then test their predictive power on the other half. This eliminates almost all spurious correlations.
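
A sketch of that split-half procedure (the data matrix and threshold choices below are illustrative, not what Yvain actually ran):

```python
# Discover correlations on one random half of respondents, then check whether
# they replicate on the held-out half.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
data = rng.normal(size=(1600, 10))           # placeholder survey matrix
idx = rng.permutation(len(data))
half_a, half_b = data[idx[:800]], data[idx[800:]]

candidates = []
for i in range(data.shape[1]):
    for j in range(i + 1, data.shape[1]):
        r, p = pearsonr(half_a[:, i], half_a[:, j])
        if p < 0.001:                        # "discovered" on the first half
            candidates.append((i, j))

for i, j in candidates:                      # test replication on the other half
    r, p = pearsonr(half_b[:, i], half_b[:, j])
    print(f"({i},{j}): r={r:.3f}, p={p:.4f}",
          "replicates" if p < 0.05 else "does not replicate")
```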

Replies from: Nominull, Kawoomba
comment by Nominull · 2014-01-19T04:59:15.868Z · LW(p) · GW(p)

Does that actually work better than just setting a higher bar for significance? My gut says that data is data and chopping it up cleverly can't work magic.

Replies from: Dan_Weinand, ChristianKl
comment by Dan_Weinand · 2014-01-19T05:53:07.811Z · LW(p) · GW(p)

Cross-validation is actually hugely useful for predictive models. For a simple correlation like this, it's less of a big deal. But if you are fitting a locally weighted linear regression line, for instance, chopping the data up is absolutely standard operating procedure.

comment by ChristianKl · 2014-01-19T16:04:10.933Z · LW(p) · GW(p)

Does that actually work better than just setting a higher bar for significance? My gut says that data is data and chopping it up cleverly can't work magic.

How do you decide how high to hang your bar for significance? It's very hard to estimate how high you have to hang it depending on how you go fishing in your data. The advantage of the two-step procedure is that you are completely free to fish how you want in the first step. There are even cases where you might want a three-step procedure.

comment by Kawoomba · 2014-01-19T08:48:10.459Z · LW(p) · GW(p)

Alternatively, Bonferroni correction.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2014-01-19T09:51:25.543Z · LW(p) · GW(p)

That's roughly what Yvain did, by taking into consideration the number of correlations tested when setting the significance level.
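
For concreteness, the arithmetic behind that choice (using the post's figures of roughly 800 correlations at p < .001):

```python
# Expected spurious hits under the null, and the Bonferroni-adjusted
# per-test threshold for a 5% family-wise error rate.
n_tests = 800
per_test_alpha = 0.001
print("expected false positives:", n_tests * per_test_alpha)       # 0.8, i.e. < 1
print("Bonferroni threshold for 5% family-wise:", 0.05 / n_tests)  # 6.25e-05
```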

comment by jefftk (jkaufman) · 2014-01-19T14:13:55.716Z · LW(p) · GW(p)

Hypothesis: the predictions on the population of Europe are bimodal, split between people thinking of geographical Europe (739M) vs people thinking of the EU (508M). I'm going to go check the data and report back.

Replies from: jkaufman, William_Quixote, ArisKatsaris
comment by jefftk (jkaufman) · 2014-01-19T15:30:58.290Z · LW(p) · GW(p)

I've cleaned up the data and put it here.

Here's a "sideways cumulative density function", showing all guesses from lowest to highest:

There were a lot of guesses of "500" but that might just be because 500 is a nice round number. There were more people guessing within 50 of 508M (165) than in the 100-wide regions immediately above or below (126 within 50 of 408, 88 within 50 of 608) and more people guessing within 50 of 739 (107) than in the 100-wide regions immediately above or below (91 within 50 of 639, 85 within 50 of 839).

Here's a histogram that shows this, but in order to actually see a dip between the 508ish numbers and 739ish numbers the bucketing needs to group those into separate categories with another category in between, so I don't trust this very much:

If someone knows how to make an actual probability density function chart that would be better, because it wouldn't be sensitive to these arbitrary divisions on where to place the histogram boundaries.
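
One way to get such a chart is a kernel density estimate, e.g. with scipy (a minimal sketch; the filename is hypothetical, and VincentYu's reply below does this properly with confidence bands):

```python
# Smooth density estimate of the population guesses, with no histogram bins.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

guesses = np.loadtxt("europe_guesses.txt")   # hypothetical cleaned data file
grid = np.linspace(0, 1500, 500)             # millions
plt.plot(grid, gaussian_kde(guesses)(grid))
plt.xlabel("Guessed population of Europe (millions)")
plt.ylabel("Estimated density")
plt.show()
```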

Replies from: VincentYu
comment by VincentYu · 2014-01-19T21:47:05.419Z · LW(p) · GW(p)

Here is a kernel density estimate of the "true" distribution, with bootstrapped pointwise 95% confidence bands from 999 resamples:

It looks plausibly bimodal, though one might want to construct a suitable hypothesis test for unimodality versus multimodality. Unfortunately, as you noted, we cannot distinguish between the hypothesis that the bimodality is due to rounding (at 500 M) versus the hypothesis that the bimodality is due to ambiguity between Europe and the EU. This holds even if a hypothesis test rejects a unimodal model, but if anyone is still interested in testing for unimodality, I suggest considering Efron and Tibshirani's approach using the bootstrap.

Edit: Updated the plot. I switched from adaptive bandwidth to fixed bandwidth (because it seems to achieve higher efficiency), so parts of what I wrote below are no longer relevant—I've put these parts in square brackets.

Plot notes: [The adaptive bandwidth was achieved with Mathematica's built-in "Adaptive" option for SmoothKernelDistribution, which is horribly documented; I think it uses the same algorithm as 'akj' in R's quantreg package.] A Gaussian kernel was used with the bandwidth set according to Silverman's rule-of-thumb [and the sensitivity ('alpha' in akj's documentation) set to 0.5]. The bootstrap confidence intervals are "biased and unaccelerated" because I don't (yet) understand how bias-corrected and accelerated bootstrap confidence intervals work. Tick marks on the x-axis represent the actual data with a slight jitter added to each point.

comment by William_Quixote · 2014-01-19T15:21:07.579Z · LW(p) · GW(p)

As one datapoint I went with Europe as EU so it's plausible others did too

Replies from: XiXiDu, ahbwramc, Nornagest
comment by XiXiDu · 2014-01-19T15:27:12.589Z · LW(p) · GW(p)

As one datapoint I went with Europe as EU so it's plausible others did too

Same here.

comment by ahbwramc · 2014-01-20T03:34:47.942Z · LW(p) · GW(p)

Me too, at least sort of - I just had a number stored in my brain that I associated with "Europe." Turned out it was EU only, although I didn't have any confusion about the question - I thought I was answering for all of Europe.

comment by Nornagest · 2014-01-20T03:48:31.466Z · LW(p) · GW(p)

I also interpreted Europe as EU, although I was about 20% off that as well.

comment by ArisKatsaris · 2014-01-19T16:00:52.393Z · LW(p) · GW(p)

The misinterpretation of the survey's meaning of "Europe" as "EU" is itself a failure as significant as wrongly estimating its population... so it's not as if it excuses people who got it wrong and yet neither sought clarification, nor took the possibility of misinterpretation into account when giving their confidence ratios...

Replies from: Aleksander, William_Quixote
comment by Aleksander · 2014-01-19T16:28:57.234Z · LW(p) · GW(p)

You might as well ask, "Who is the president of America?" and then follow up with, "Ha ha got you! America is a continent, you meant USA."

Replies from: ArisKatsaris, army1987
comment by ArisKatsaris · 2014-01-19T16:35:39.861Z · LW(p) · GW(p)

I don't think you're making the argument that Yvain deliberately wanted to trick people into giving a wrong answer -- so I really don't see your analogy as illuminating anything.

It was a question. People answered it wrongly, whether by making a wrong estimation of the answer or by making a wrong estimation of the meaning of the question. Both are failures -- and why should we consider the latter failure any less significant than the former?

EDIT TO ADD: Mind you, reading the Excel file of the answers, it seems I'm among the people who gave an answer in individuals when the question was asking for the number in millions. So it's not as if I didn't also have a failure in answering -- and yet I do consider that one a less significant failure. Perhaps I'm just being hypocritical in this though.

Replies from: KnaveOfAllTrades
comment by KnaveOfAllTrades · 2014-01-19T20:28:40.387Z · LW(p) · GW(p)

Perhaps I'm just being hypocritical in this though.

Confirm. ;) (Nope, I didn't misinterpret it as EU.)

Even if people recognized the ambiguity, it's not obvious that one should go for an intermediate answer rather than putting all one's eggs in one basket by guessing which was meant. If I were taking the survey and saw that ambiguity, I'd probably be confused for a bit, then realize I was taking longer than I'd semi-committed to taking, make a snap judgement, answer, and move on.

comment by A1987dM (army1987) · 2014-01-20T16:40:13.073Z · LW(p) · GW(p)

The continent is basically never called just “America” in modern English (except in the phrases “North America” and “South America”); it's “the Americas”.

comment by William_Quixote · 2014-01-19T21:13:27.846Z · LW(p) · GW(p)

It's also not obvious that people who went with the EU interpretation were incorrect. Language is contextual: if we were to parse the Times, Guardian, BBC, etc. over the past year and see how the word "Europe" is actually used, it might be the landmass, or it might be the EU. Certainly one usage will have been more common than the other, but it's not obvious to me which one it will have been.

That said, if I had noticed the ambiguity and not auto-parsed it as the EU, I probably would have expected the typical American to use "Europe" to mean the landmass, and since I think Yvain is American, that's what I should have gone with.

On the other other hand, the goal of the question is to gauge numerical calibration, not to gauge language parsing. If someone thought they were answering about the EU and picked a 90% confidence interval that did in fact include the population of the EU, that gives different information about the quantity we are trying to measure than if someone thinks Europe means the continent including Russia and picks a 90% confidence interval that does not include the population of the landmass. Remember, this is not a quiz in school to see if someone gets "the right answer"; this is a tool that's intended to measure something.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-01-20T03:57:43.097Z · LW(p) · GW(p)

Yvain explicitly said "Wikipedia's Europe page".

Replies from: simplicio
comment by simplicio · 2014-01-20T13:56:56.396Z · LW(p) · GW(p)

Which users could not double-check because they might see the population numbers.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-01-21T03:59:37.473Z · LW(p) · GW(p)

But they should expect the Wikipedia page to refer to the continent.

comment by gwern · 2014-01-19T04:36:15.054Z · LW(p) · GW(p)

REFERRAL SOURCE:...Gwern: 9

Hah, my score almost doubled from last year.

comment by Beluga · 2014-01-19T13:22:53.918Z · LW(p) · GW(p)

Not sure how much sense it makes to take the arithmetic mean of probabilities when the odds vary over many orders of magnitude. If the average is, say, 30%, then it hardly matters whether someone answers 1% or .000001%. Also, it hardly matters whether someone answers 99% or 99.99999%.

I guess the natural way to deal with this would be to average (i.e., take the arithmetic mean of) the order of magnitude of the odds (i.e., log[p/(1-p)], where p is someone's answer). Using this method, it would make a difference whether someone is "pretty certain" or "extremely certain" that a given statement is true or false.

Does anyone know what the standard way for dealing with this issue is?
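
A sketch of that log-odds averaging next to the plain arithmetic mean (the probabilities are made up for illustration; note it breaks down at exactly 0 or 1):

```python
# Average probabilities in log-odds space instead of probability space.
import numpy as np

def mean_via_log_odds(probs):
    p = np.asarray(probs, dtype=float)
    log_odds = np.log(p / (1 - p))      # undefined at p = 0 or 1
    return 1 / (1 + np.exp(-log_odds.mean()))

answers = [0.99, 0.9, 0.3, 0.01, 1e-6]
print("arithmetic mean:", np.mean(answers))            # ~0.44
print("log-odds mean:  ", mean_via_log_odds(answers))  # ~0.08
```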

Replies from: Manfred, Eugine_Nier
comment by Manfred · 2014-01-19T23:17:30.551Z · LW(p) · GW(p)

Yeah, log odds sounds like a good way to do it. Aggregating estimates is hard because people's estimates aren't independent, but averaging log odds will at least do better than averaging probabilities.

comment by Eugine_Nier · 2014-01-20T03:42:55.062Z · LW(p) · GW(p)

Use medians and percentiles instead of means and standard deviations.

comment by MondSemmel · 2014-01-19T12:59:07.984Z · LW(p) · GW(p)

Thanks for taking the time to conduct and then analyze this survey!

What surprised me:

  • Average IQ seemed insane to me. Thanks for dealing extensively with that objection.
  • Time online per week seems plausible from personal experience, but I didn't expect the average to be so high.
  • The overconfidence data hurts, but as someone pointed out in the comments, it's hard to ask a question which isn't misunderstood.

What disappointed me:

  • Even I was disappointed by the correlations between P(significant man-made global warming) vs. e.g. taxation/feminism/etc. Most other correlations were between values, but this one was between one's values and an empirical question. Truly Blue/Green. On the topic of politics in general, see below.
  • People, use spaced repetition! It's been studied academically and been shown to work brilliantly; it's really easy to incorporate into your daily life in comparison to most other LW material, etc. ... Well, I'm comparatively disappointed with these numbers, though I assume they are still far higher than in most other communities.

And a comment at the end:

"We are doing terribly at avoiding Blue/Green politics, people."

Given that LW explicitly tries to exclude politics from discussion (and for reasons I find compelling), what makes you expect differently?

Incorporating LW debiasing techniques into daily life will necessarily be significantly harder than just reading the Sequences, and even those have only been read by a relatively small proportion of posters...

Replies from: ArisKatsaris, Sophronius, taryneast, taryneast, JacekLach
comment by ArisKatsaris · 2014-01-19T15:56:32.846Z · LW(p) · GW(p)

Average IQ seemed insane to me.

To me it has always sounded right. I'm MENSA-level (at least according to the test the local MENSA association gave me) and LessWrong is the first forum I ever encountered where I've considered myself below-average -- where I've found not just one or two but several people who can think faster and deeper than me.

Replies from: Viliam_Bur, Luke_A_Somers
comment by Viliam_Bur · 2014-01-20T10:12:59.961Z · LW(p) · GW(p)

Same for me.

comment by Luke_A_Somers · 2014-01-31T17:48:27.972Z · LW(p) · GW(p)

Below average or simply not exceptional? I'm certainly not exceptional here but I don't think I'm particularly below average. I suppose it depends on how you weight the average.

comment by Sophronius · 2014-01-20T10:55:49.230Z · LW(p) · GW(p)

Average IQ seemed insane to me. Thanks for dealing extensively with that objection.

With only 500 people responding to the IQ question, it is entirely possible that this is simply a selection effect, i.e., only people with high IQs test themselves or report their scores, while lower-IQ people keep quiet.

Even I was disappointed by the correlations between P(significant man-made global warming) vs. e.g. taxation/feminism/etc. Most other correlations were between values, but this one was between one's values and an empirical question. Truly Blue/Green.

There's nothing necessarily wrong with this. You are assuming that feminism is purely a matter of personal preference, incorrectly I feel. If you reduce feminism to simply asking "should women have the right to vote" then you should in fact find a correlation between that and "is there such a thing as global warming", because the correct answer in each case is yes.

Not saying I am necessarily in favour of modern day feminism, but it does bother me that people simply assume that social issues are independent of fact. This sounds like "everyone is entitled to their opinion" nonsense to me.

What I find more surprising is that there is no correlation between IQ and political beliefs whatsoever. I suspect that this is simply because the significance level is too strict to find anything.

Given that LW explicitly tries to exclude politics from discussion (and for reasons I find compelling), what makes you expect differently?

With this, on the other hand, I agree completely.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-01-25T23:56:35.117Z · LW(p) · GW(p)

I've heard GMOs described as the left's equivalent of global warming -- maybe there should be a question about GMOs on the next survey.

Replies from: army1987, Jiro, Sophronius, ChristianKl, Eugine_Nier
comment by A1987dM (army1987) · 2014-01-26T10:07:26.371Z · LW(p) · GW(p)

I've heard GMOs described as the left's equivalent of global warming -- maybe there should be a question about GMOs on the next survey.

While we're here, there could also be questions about animal testing, alternative medicine, gun control, euthanasia, and marijuana legalization. (I'm not saying that the left is wrong about all of these.)

comment by Jiro · 2014-01-26T00:09:55.165Z · LW(p) · GW(p)

I object to GMOs, but I object to GMOs not because of fears that they may be unnoticed health hazards, but rather because they are often used to apply DRM and patents to food, and applying DRM and patents to food has the disadvantages of applying DRM and patents to computer software. Except it's much worse since 1) you can do without World of Warcraft, but you can't do without food, and 2) traditional methods of producing food involve copying and organisms used for food normally copy themselves.

Replies from: army1987, Lumifer, NancyLebovitz
comment by A1987dM (army1987) · 2014-01-26T09:56:57.528Z · LW(p) · GW(p)

2) traditional methods of producing food involve copying and organisms used for food normally copy themselves

ISTR I've read farmers have preferred to buy seeds from specialized companies rather than planting their own from the previous harvest since decades before the first commercial GMO was introduced.

Replies from: None
comment by [deleted] · 2014-01-26T20:38:00.523Z · LW(p) · GW(p)

Yes, but they wouldn't be sued out of existence IF they had to keep their own.

Replies from: army1987
comment by A1987dM (army1987) · 2014-01-27T08:52:07.883Z · LW(p) · GW(p)

Good point.

comment by Lumifer · 2014-01-26T03:00:46.157Z · LW(p) · GW(p)

I object to GMOs, but I object to GMOs not because of fears that they may be unnoticed health hazards, but rather because they are often used to apply DRM and patents to food

It seems that should make you object to certain aspects of the Western legal system.

Given your reasoning, I don't understand why you object to GMOs but don't object on the same grounds to, say, music and videos, which gave us the DMCA, etc.

Replies from: Jiro
comment by Jiro · 2014-01-26T04:51:35.319Z · LW(p) · GW(p)

I object to DRM and patents on entertainment as well. (You can't actually patent music and videos, but software is subject to software patents and I do object to those.)

If you're asking why I don't object to entertainment as a class, it's because of practical considerations--there is quite a bit of entertainment without DRM, small scale infringers are much harder to catch for entertainment, much entertainment is not patented, and while entertainment is copyrighted, it does not normally copy itself and copying is not a routine part of how one uses it in the same way that producing and saving seeds is of using seeds. Furthermore, pretty much all GMO organisms are produced by large companies who encourage DRM and patents. There are plenty of producers of entertainment who have no interest in such things, even if they do end up using DVDs with CSS.

comment by NancyLebovitz · 2014-01-26T02:18:54.521Z · LW(p) · GW(p)

What do you think of golden rice?

Replies from: Jiro
comment by Jiro · 2014-01-26T05:01:54.211Z · LW(p) · GW(p)

I don't object to it except insofar as it's used as a loss leader for companies' other GMO products which are subject to DRM and patents.

comment by Sophronius · 2014-01-26T18:46:41.749Z · LW(p) · GW(p)

Is it, though? I did a quick fact check on this, and found this article which seems to say it is more split down the middle (for as much as US politicians are representative, anyway). It also highlights political divides for other topics.

It's a pity that some people here are so anti-politics (not entirely unjustified, but still). I think polling people here on issues which are traditionally right or left wing but which have clear-cut correct answers to them would make for quite a nice test of rationality.

Replies from: Lumifer
comment by Lumifer · 2014-01-26T23:10:32.219Z · LW(p) · GW(p)

which have clear-cut correct answers to them

Are you quite sure about that? Any examples outside of young earth / creationists?

Replies from: Sophronius
comment by Sophronius · 2014-01-27T18:03:26.291Z · LW(p) · GW(p)

Am I sure that some political questions have clear cut answers? Well, yes... of course. Just because someone points at a factual question and says "that's political!" doesn't magically cause that question to fall into a special subcategory of questions that can never be answered. That just seems really obvious to me.

It's much harder to give examples that everyone here will agree on of course, and which won't cause another of those stupid block-downvoting sprees, but I can give it a try:
-My school gym teacher once tried to tell me that there is literally no difference between boys and girls except for what's between their legs. I have heard similar claims from gender studies classes. That counts as obviously false, surely?
-A guy in college tried to convince me that literally any child could be raised to be Mozart. More generally, the whole "blank slate" notion where people claim that genes don't matter at all. Can we all agree that this is false? Regardless of whether you see yourself as left or right or up or down?
-Women should be allowed to apply for the same jobs as men. Surely even people who think that women are less intelligent than men on average should agree with this? Even though in the past it was a hot-button issue?
-People should be allowed to do in their bedroom whatever they want as long as it doesn't harm anyone. Is this contentious? It shouldn't be.

Do you agree that the above list gives some examples of political questions that every rational person should nonetheless agree with?

Replies from: Lumifer, army1987, James_Miller, Vaniver, ChristianKl, Nornagest, army1987, TheAncientGeek
comment by Lumifer · 2014-01-27T18:22:29.685Z · LW(p) · GW(p)

Do you agree that the above list gives some examples of political questions that every rational person should nonetheless agree with?

No, I don't. To explain why, let me point out that your list of four questions neatly divides into two halves.

Your first two questions are empirically testable questions about what reality is. As such they are answerable by the usual scienc-y means and a rational person will have to accept the answers.

Your last two questions are value-based questions about what should be. They are not answerable by science and the answers are culturally determined. It is perfectly possible to be very rational and at the same time believe that, say, homosexuality is a great evil.

Rationality does not determine values.

Replies from: army1987, nshepperd, Sophronius
comment by A1987dM (army1987) · 2014-01-28T16:39:53.140Z · LW(p) · GW(p)

The question “should people be allowed to do in their bedroom whatever they want as long as it doesn't harm [directly] anyone [else]?” (extra words added to address Vaniver's point) can be split into two: “which states of the world would allowing people to do in their bedroom etc. result in?”, and “which states of the world are good?”

Now, it's been claimed that most disagreements about policies are about the former and all neurologically healthy people would agree about the latter if they thought about it clearly enough -- which would make Sophronius's claim below kind-of sort-of correct -- but I'm no longer sure of that.

Replies from: Lumifer
comment by Lumifer · 2014-01-29T16:53:42.283Z · LW(p) · GW(p)

Now, it's been claimed that most disagreements about policies are about the former and all neurologically healthy people would agree about the latter if they thought about it clearly enough

First, I don't think this claim is true. Second, I'm not sure what "neurologically healthy" means. I know a lot of people I would call NOT neurotypical. And, of course, labeling people mentally sick for disagreeing with the society's prevailing mores was not rare in history.

Replies from: nshepperd
comment by nshepperd · 2014-01-30T01:33:14.143Z · LW(p) · GW(p)

all neurologically healthy people would agree about the latter if they thought about it clearly enough

This is what you are missing. The simple fact that someone disagrees does not mean they are mentally sick or have fundamentally different value systems. It could equally well mean that either they or the "prevailing social mores" are simply mistaken. People have been known to claim that 51 is a prime number, and not because they actually disagree about what makes a number prime, but just because they were confused at the time.

It's not reasonable to take people's claims that "by 'should' I mean that X maximises utility for everyone" or "by 'should' I mean that I want X" at face value, because people don't have access to or actually use logical definitions of the everyday words they use; they "know it when they see it" instead.

Replies from: Lumifer
comment by Lumifer · 2014-01-30T01:37:40.973Z · LW(p) · GW(p)

This is what you are missing.

No, I don't think I'm missing this piece. The claim is very general: ALL "neurologically healthy people".

People can certainly be mistaken about matters of fact. So what?

It's not reasonable to take people's claims that "by 'should' I mean that X maximises utility for everyone"

Of course not, the great majority of people are not utilitarians and have no interest in maximizing utility for everyone. In normal speech "should" doesn't mean anything like that.

comment by nshepperd · 2014-01-27T22:58:02.646Z · LW(p) · GW(p)

Your last two questions are value-based questions about what should be. They are not answerable by science and the answers are culturally determined. It is perfectly possible to be very rational and at the same time believe that, say, homosexuality is a great evil.

If "should" has a meaning, then those two questions can be correctly and incorrectly answered with respect to the particular sense of "should" employed by Sophronius in the text. It would be more accurate to say that you can be very rational and still disapprove of homosexuality (as disapproval is an attitude, as opposed to a propositional statement).

Replies from: Lumifer
comment by Lumifer · 2014-01-28T01:26:28.920Z · LW(p) · GW(p)

If "should" has a meaning, then those two questions can be correctly and incorrectly answered with respect to the particular sense of "should" employed by Sophronius

Maybe. But that's a personal "should", specific to a particular individual and not binding on anyone else.

Sophronius asserts that values (and so "should"s) can be right or wrong without specifying a referent, just unconditionally right or wrong the way physics laws work.

Replies from: nshepperd
comment by nshepperd · 2014-01-29T01:19:14.219Z · LW(p) · GW(p)

What does this mean, "not binding"? What is a personal "should"? Is that the same as a personal "blue"?

Replies from: Lumifer
comment by Lumifer · 2014-01-29T17:16:14.705Z · LW(p) · GW(p)

A personal "should" is "I should" -- as opposed to "everyone should". If I think I should, say, drink more, that "should" is not binding on anyone else.

Replies from: nshepperd, None
comment by nshepperd · 2014-01-30T01:42:53.595Z · LW(p) · GW(p)

But the original context was "we should". Sophronius obviously intended the sentence to refer to everyone. I don't see anything relative about his use of words.

Replies from: Lumifer
comment by Lumifer · 2014-01-30T02:19:27.982Z · LW(p) · GW(p)

Sophronius obviously intended the sentence to refer to everyone.

Correct, and that's why I said

Sophronius asserts that values (and so "should"s) can be right or wrong without specifying a referent, just unconditionally right or wrong the way physics laws work.

Replies from: nshepperd
comment by nshepperd · 2014-01-30T05:48:36.212Z · LW(p) · GW(p)

I'm struggling to figure out how to communicate the issue here.

If you agree that what Sophronius intended to say was "everyone should" why would you describe it as a personal "should"? (And what does "binding on someone" even mean, anyway?)

Replies from: Lumifer
comment by Lumifer · 2014-01-30T06:03:26.114Z · LW(p) · GW(p)

Well, perhaps you should just express your point, provided you have one? Going in circles around the word "should" doesn't seem terribly useful.

Replies from: nshepperd
comment by nshepperd · 2014-01-30T06:06:32.209Z · LW(p) · GW(p)

Well, to me it's obvious that "People should be allowed to do in their bedroom whatever they want as long as it doesn't harm anyone." was a logical proposition, either true or false. And whether it's true or false has nothing to do with whether anyone else has the same terminal values as Sophronius. But you seem to disagree?

Replies from: Lumifer
comment by Lumifer · 2014-01-30T06:10:04.882Z · LW(p) · GW(p)

Well, to me it's obvious that "People should be allowed to do in their bedroom whatever they want as long as it doesn't harm anyone." was a logical proposition, either true or false.

Do you mean it would be true or false for everyone? At all times? In all cultures and situations? In the same way "Sky is blue" is true?

Replies from: army1987, nshepperd
comment by A1987dM (army1987) · 2014-01-30T08:04:43.535Z · LW(p) · GW(p)

But the sky isn't blue for everyone at all times in all situations!

comment by nshepperd · 2014-01-30T06:32:19.427Z · LW(p) · GW(p)

Yes. Logical propositions are factually either true or false. It doesn't matter who is asking. In exactly the same way that "everyone p-should put pebbles into prime heaps" doesn't care who's asking, or indeed how "the sky is blue" doesn't care who's asking.

Replies from: Lumifer, army1987
comment by Lumifer · 2014-01-30T16:05:44.139Z · LW(p) · GW(p)

Well then, I disagree. Since I just did a whole circle of the mulberry bush with Sophronius I'm not inclined to do another round. Instead I'll just state my position.

I think that statements which do not describe reality but instead speak of preferences, values, and "should"s are NOT "factually either true or false". They cannot be unconditionally true or false at all. Instead, they can be true or false conditional on the specified value system, and if you specify a different value system, the true/false value may change. To rephrase it in a slightly different manner, value statements can be consistent or inconsistent with some value system, and they can also be instrumentally rational or not in pursuit of some goals (and whether they are rational or not is conditional on the particular goals).

To get specific, "People should be allowed to do in their bedroom whatever they want as long as it doesn't harm anyone" is true within some value system and false within some other value systems. Both kinds of value systems exist. I see no basis for declaring one kind of value systems "factually right" and another kind "factually wrong".

As an example, consider the statement "The sum of the triangle's inner angles is 180 degrees". Is this true? In some geometries, yes; in others, no. This statement is not true unconditionally; to figure out whether it's true in some specific case you have to specify a particular geometry. And in some real-life geometries it is true and in other real-life geometries it is false.

Replies from: nshepperd, blacktrance
comment by nshepperd · 2014-01-31T00:50:27.087Z · LW(p) · GW(p)

Well, I'm not trying to say that some values are factual and others are imaginary. But when someone makes a "should" statement (makes a moral assertion), "should" refers to a particular predicate determined by their actual value system, as your value system determines your language. Thus when people talk of "you should do X" they aren't speaking of preferences or values, rather they are speaking of whatever it is their value system actually unfolds into.

(The fact that we all use the same word, "should" to describe what could be many different concepts is, I think, justified by the notion that we mostly share the same values, so we are in fact talking about the same thing, but that's an empirical issue.)

As an example, consider the statement "The sum of the triangle's inner angles is 180 degrees". Is this true?

Hopefully this will help demonstrate my position. I would say that, when being fully rigorous, it is a type error to ask whether a sentence is true. Logical propositions have a truth value, but sentences are just strings of symbols. To turn "The sum of the triangle's inner angles is 180 degrees" into a logical proposition you need to know what is meant by "sum", "triangle", "inner angles", "180", "degrees" and indeed "is".

As an example, if the sentence was uttered by Bob, and what he meant by "triangle" was a triangle in Euclidean space, and by "is" he meant "is always" (universally quantified), then what he said is factually (unconditionally) true. But if he uttered the same sentence in a language where "triangle" means a triangle in a hyperbolic space, or in a general space, then what he said would be unconditionally false. There's no contradiction here because in each case he said a different thing.

comment by blacktrance · 2014-01-30T16:08:12.936Z · LW(p) · GW(p)

Value systems are themselves part of reality, as people already have values.

Replies from: Lumifer
comment by Lumifer · 2014-01-30T16:23:59.937Z · LW(p) · GW(p)

In this context I define reality as existing outside of people's minds. What exists solely within minds is not real.

comment by A1987dM (army1987) · 2014-01-30T08:08:41.553Z · LW(p) · GW(p)

Logical propositions are factually either true or false.

Yes they are, but the same sentence can state different logical propositions depending on where, when and by whom it is uttered.

Replies from: nshepperd
comment by nshepperd · 2014-01-30T12:18:48.259Z · LW(p) · GW(p)

They can. But when a person utters a sentence, they generally intend to state the derelativized proposition indicated by the sentence in their language. When I say "P", I don't mean ""P" is a true sentence in all languages at all places", I mean P(current context).

Which is why it's useless to say "I have a different definition of 'should'", because the original speaker wasn't talking about definitions, they were talking about whatever it is "should" actually refers to in their actual language.

(I actually thought of mentioning that the sky isn't always blue in all situations, but decided not to.)

comment by [deleted] · 2014-01-29T17:30:18.155Z · LW(p) · GW(p)

Well, if you should drink more because you're dehydrated, then you're right to say that not everyone is bound by that, but people in similar circumstances are (i.e. dehydrated, with no other reason not to drink). Or are you saying that there are ultimately personal shoulds?

Replies from: Lumifer
comment by Lumifer · 2014-01-29T17:45:21.666Z · LW(p) · GW(p)

Or are you saying that there are ultimately personal shoulds?

Yes, of course there are.

Replies from: None
comment by [deleted] · 2014-01-29T19:02:30.574Z · LW(p) · GW(p)

'Of course' nothing, I find that answer totally shocking. Can you think of an example? Or can you explain how such shoulds are supposed to work?

So far as I understand it, for every 'should' there is some list of reasons why. If two people have the same lists of reasons, then whatever binds one binds them both. So there's nothing personal about shoulds, except insofar as we rarely have all the same reasons to do or not do something.

Replies from: Lumifer
comment by Lumifer · 2014-01-29T19:59:25.534Z · LW(p) · GW(p)

I find that answer totally shocking

Doesn't take much to shock you :-)

Can you think of an example?

Sure. Let's say there is a particular physical place (say, a specific big boulder on the shore of a lake) where I, for some reason, feel unusually calm, serene, and happy. It probably triggers some childhood memories and associations. I like this place. I should spend more time there.

If two people have the same lists of reasons, then whatever binds one binds them both.

No two people are the same. Besides, the importance different people attach to the same reasons varies greatly.

And, of course, to bind another person with your "should" requires you to know this other person very, very well. To a degree I would argue is unattainable.

Replies from: None
comment by [deleted] · 2014-01-29T20:11:09.004Z · LW(p) · GW(p)

I like this place. I should spend more time there.

So say this place also makes me feel calm, serene, and happy. It also triggers in me some childhood memories and associations. I like the place. I also have (like you) no reasons not to go there. Let's say (however unlikely it might be) we have all the same reasons, and we weigh these reasons exactly the same. Nevertheless, it's not the case that I should spend more time there. Have I just told you a coherent story?

And, of course, to bind another with your "should" requires you to know this other very very well. To the degree I would argue is unattainable.

So let's say you're very thirsty. Around you, there's plenty of perfectly potable water. And let's say I know you're not trying to be thirsty for some reason, but that you've just come back from a run. I think I'm in a position to say that you should drink the water. I don't need to know you very well to be sure of that. What am I getting wrong here?

Replies from: Lumifer
comment by Lumifer · 2014-01-29T20:25:20.040Z · LW(p) · GW(p)

however unlikely it might be

That's a rather crucial part. I am asserting not only that two people will not have the same reasons and weigh them exactly the same, but also that you can't tell whether a person other than you has the same reasons and weighs them exactly the same.

You're basically saying "let's make an exact copy of you -- would your personal "shoulds" apply to that exact copy?"

The answer is yes, but an exact copy of me does not exist and that's why my personal shoulds don't apply to other people.

I think I'm in a position to say that you should drink the water.

You can say, of course. But when I answer "no, I don't think so", is your "should" stronger than my "no"?

Replies from: None
comment by [deleted] · 2014-01-29T20:53:26.102Z · LW(p) · GW(p)

Ahh, okay, it looks like we are just misunderstanding one another. I originally asked you whether there are ultimately personal shoulds, and by this I meant shoulds that are binding on me but not you for no reason other than that you and I are numerically different people.

But it seems to me your answer to this is in fact 'no', there are no such ultimately personal shoulds. All shoulds bind everyone subject to the reasons backing them up, it's just that those reasons rarely (if ever) coincide.

You can say, of course. But when I answer "no, I don't think so", is your "should" stronger than my "no"?

Yes. You're wrong that you shouldn't drink. The only should on the table is my correct one. Your 'no' has no strength at all.

Replies from: Lumifer
comment by Lumifer · 2014-01-29T21:29:12.605Z · LW(p) · GW(p)

whether there are ultimately personal shoulds, and by this I meant shoulds that are binding on me but not you for no reason other than that you and I are numerically different people.

What's "numerically different"?

And what did you mean by "ultimately", then? In reality all people are sufficiently different for my personal shoulds to apply only to me and not necessarily to anyone else. The set of other-than-me people to which my personal should must apply is empty. Is that insufficiently "ultimately"?

Yes. You're wrong that you shouldn't drink. The only should on the table is my correct one. Your 'no' has no strength at all.

I beg to disagree. Given that you have no idea about reasons that I might have for not drinking, I don't see why your "should" is correct. Speaking of which, how do you define "correct" in this situation, anyway? What makes you think that the end goals you imagine are actually the end goals that I am pursuing?

Replies from: None
comment by [deleted] · 2014-01-29T22:09:46.698Z · LW(p) · GW(p)

What's "numerically different"?

I just mean something like 'there are two of them, rather than one'. So they can have all the same (non-relational) properties, but not be the same thing because there are two of them.

The set of other-than-me people to which my personal should must apply is empty.

Well, that's an empirical claim, for which we'd need some empirical evidence. It's certainly possible that my personal 'should' could bind you too, since it's possible (however unlikely) that we could be subject to exactly the same reasons in exactly the same way.

This is an important point, because it means that shoulds bind all and every person subject to the reasons that back them up. It may be true that people are subject to very different sets of reasons, such that in effect 'shoulds' only generally apply to one person. I think this empirical claim is false, but that's a bit beside the point.

Given that you have no idea about reasons that I might have for not drinking

It's part of the hypothetical that I do know the relevant reasons and your aims: you're thirsty, there's plenty of water, and you're not trying to stay thirsty. Those are all the reasons (maybe the reality is never this simple, though I think it often is...again, that's an empirical question). Knowing those, my 'you should drink' is absolutely binding on you.

I don't need to define 'correct'. You agree, I take it, that the above listed reasons can in principle be sufficient to determine that one should drink. That's all I mean by correct: that it's true to say 'if X, Y, Z, then you should drink'.

Replies from: Lumifer, Jiro
comment by Lumifer · 2014-01-30T01:26:37.568Z · LW(p) · GW(p)

Well, that's an empirical claim, for which we'd need some empirical evidence.

You really want evidence that there are no exact copies of me walking around...?

It's certainly possible that my personal 'should' could bind you too

No, I don't think it is possible. At this point it is fairly clear that we are not exact copies of each other :-D

it means that shoulds bind all and every person subject to the reasons that back them up

Nope, I don't think so. You keep on asserting, basically, that if you find a good set of reasons why I should do X and I cannot refute these reasons, I must do X. That is not true. I can easily tell you to go jump into the lake and not do X.

It's part of the hypothetical that I do know the relevant reasons and your aims

And another crucial part -- no, you cannot know all of my relevant reasons and my aims. We are different people and you don't have magical access to the machinations of my mind.

I don't need to define 'correct'. You agree, I take it, that the above listed reasons can in principle be sufficient to determine that one should drink.

Yes, you do need to define "correct". The reasons may or may not be sufficient -- you don't know.

It does seem we have several very basic disagreements.

Replies from: None
comment by [deleted] · 2014-01-30T01:56:56.276Z · LW(p) · GW(p)

You really want evidence that there are no exact copies of me walking around...?

I deny the premise on which this is necessary: I think most people share the reasons for most of what they do most of the time. For example, when my friend and I come in from a run, we share reasons for drinking water. The 'should' that binds me, binds him equally. I think this is by far the most common state of affairs, the great complexity and variety of human psychology notwithstanding. The empirical question is whether our reasons for acting are in general very complicated or not.

It's certainly possible that my personal 'should' could bind you too

No, I don't think it is possible.

I think you do, since I'm sure you think it's possible that we are (in the relevant ways) identical. Improbable, to be sure. But possible.

Replies from: Lumifer
comment by Lumifer · 2014-01-30T02:14:14.349Z · LW(p) · GW(p)

The 'should' that binds me, binds him equally.

I think I would describe it as the two of you, being in similar situations, each formulating a personal "should" that happens to be pretty similar. But it's his own "should" which binds him, not yours.

Replies from: None
comment by [deleted] · 2014-01-30T15:32:02.835Z · LW(p) · GW(p)

But I don't suppose you would say this about answering a mathematical problem. If I conclude that six times three is eighteen, and you conclude similarly, isn't it the case that we've done 'the same problem' and come to 'the same answer'? Aren't we each subject to the same reasons, in trying to solve the problem?

Or did each of us solve a personal math problem, and come to a personal answer that happens to be the same number?

Replies from: Lumifer
comment by Lumifer · 2014-01-30T16:16:50.994Z · LW(p) · GW(p)

Aren't we each subject to the same reasons, in trying to solve the problem?

In this particular case (math) we share the framework within which the problem is solved. The framework is unambiguous and assigns true or false values to particular answers.

Same thing for testable statements about physical reality -- disagreements (between rational people) can be solved by the usual scientific methods.

But preferences and values exist only inside minds and I'm asserting that each mind is unique. My preferences and values can be the same as yours but they don't have to be. In contrast, the physical reality is the same for everyone.

Moreover, once we start talking about binding shoulds we enter the territory of such concepts as identity, autonomy, and power. Gets really complicated really fast :-/

Replies from: None
comment by [deleted] · 2014-01-31T15:11:18.740Z · LW(p) · GW(p)

In this particular case (math) we share the framework within which the problem is solved. The framework is unambiguous and assigns true or false values to particular answers.

I don't see how that's any different from most value judgements. All human beings have a basically common set of values, owing to our neurological and biological similarities. Granted, you probably can't advise me on whether or not to go to grad school, or run for office, but you can advise me to wear my seat belt or drink water after a run. That doesn't seem so different from math: math is also in our heads; it's also a space of widespread agreement and some limited disagreement in the hard cases.

It may look like the Israelis and the Palestinians just don't see eye to eye on practical matters, but remember how big the practical reasoning space is. Them truly not seeing eye to eye would be like the Palestinians demanding the end of settlements, and the Israelis demanding that Venus be bluer.

Moreover, once we start talking about binding shoulds we enter the territory of such concepts as identity, autonomy, and power. Gets really complicated really fast :-/

I don't see why. There's no reason to infer from the fact that a 'should' binds someone that you can force them to obey it.

Now, as to why it's a problem if your reasons for acting aren't sufficient to determine a 'should'. Suppose you hold that A, and that if A then B. You conclude from this that B. I also hold that A, and that if A then B. But I don't conclude that B. I say "Your conclusion doesn't bind me." B, I say, is 'true for you', but not 'true for me'. I explain that reasoning is personal, and that just because you draw a conclusion doesn't mean anyone else has to.

If I'm right, however, it doesn't look like 'A, if A then B' is sufficient to conclude B for either of us, since B doesn't necessarily follow from these two premises. Some further thing is needed. What could this be? It can't be another premise (like, 'If you believe that A and that if A then B, conclude that B') because that just reproduces the problem. I'm not sure what you'd like to suggest here, but I worry that so long as, in general, reasons aren't sufficient to determine practical conclusions (our 'shoulds'), nothing could be. Acting would be basically irrational, in that you could never have a sufficient reason for what you do.
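
A minimal formal sketch of that regress, in Lean (an illustration only; nothing in it settles the practical question):

    -- Modus ponens as a rule of inference: from A and A → B, derive B.
    -- The step from premises to conclusion is an application of the rule,
    -- not a further premise.
    example (A B : Prop) (hA : A) (hAB : A → B) : B :=
      hAB hA

    -- Restating the rule as an extra premise does not remove the need for
    -- an application step, which is exactly the regress.
    example (A B : Prop) (hA : A) (hAB : A → B) (h : A ∧ (A → B) → B) : B :=
      h ⟨hA, hAB⟩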

Replies from: Lumifer
comment by Lumifer · 2014-02-03T21:47:07.696Z · LW(p) · GW(p)

All human beings have a basically common set of values

Nope. There is a common core and there is a lot of various non-core stuff. The non-core values can be wildly different.

but you can advise me to wear my seat belt or drink water after a run

We're back to the same point: you can advise me, but if I say "no", is your advice stronger than my "no"? You think it is, I think not.

I worry that so long as, in general, reasons aren't sufficient to determine practical conclusions (our 'shoulds'), nothing could be.

The distinction between yourself and others is relevant here. You can easily determine whether a particular set of reasons is sufficient for you to act. However you can only guess whether the same set of reasons is sufficient for another to act. That's why self-shoulds work perfectly fine, but other-shoulds have only a probability of working. Sometimes this probability is low, sometimes it's high, but there's no guarantee.

Replies from: None
comment by [deleted] · 2014-02-04T02:09:31.999Z · LW(p) · GW(p)

We're back to the same point: you can advise me, but if I say "no", is your advice stronger than my "no"? You think it is, I think not.

What do you mean by 'stronger'? I think we all have free will: it's impossible, metaphysically, for me to force you to do anything. You always have a choice. But that doesn't mean I can't point out your obligations or advantage with more persuasive or rational force than you can deny them. It may be that you're so complicated an agent that I couldn't get a grip on what reasons are relevant to you (again, empirical question), but if, hypothetically speaking, I do have as good a grip on your reasons as you do, and if it follows from the reasons to which you are subject that you should do X, and you think you should do ~X, then I'm right and you're wrong and you should do X.

But I cannot, morally speaking, coerce or threaten you into doing X. I cannot, metaphysically speaking, force you to do X. If that is what you mean by 'stronger', then we agree.

My point is, you seem to be making a quantitative claim: the degree of complexity is so great that we cannot be subject to a common 'should'. Maybe! But the evidence seems to me not to support that quantitative claim.

But aside from the quantitative claim, there's a different, orthogonal, qualitative claim: if we are subject to the same reasons, we are subject to the same 'should'. Setting aside the question of how complex our values and preferences are, do you agree with this claim? Remember, you might want to deny the antecedent of this conditional, but that doesn't entail that the conditional is false.

Replies from: Lumifer
comment by Lumifer · 2014-02-04T02:28:49.514Z · LW(p) · GW(p)

What do you mean by 'stronger'?

In the same sense we talked about it in the {grand}parent post. You said:

You're wrong that you shouldn't drink. The only should on the table is my correct one. Your 'no' has no strength at all.

...to continue

the degree of complexity is so great that we cannot be subject to a common 'should'.

We may. But there is no guarantee that we would.

if we are subject to the same reasons, we are subject to the same 'should'. Setting aside the question of how complex our values and preferences are, do you agree with this claim?

We have to be careful here. I understand "reasons" as, more or less, networks of causes and consequences. "Reasons" tell you what you should do to achieve something. But they don't tell you what to achieve -- that's the job of values and preferences -- and how to weight the different sides in a conflicting situation.

Given this, no, the same reasons don't give rise to the same "shoulds", because you need the same values and preferences as well.

Replies from: None
comment by [deleted] · 2014-02-04T15:27:16.573Z · LW(p) · GW(p)

So we have to figure out what a reason is. I took 'reasons' to be everything necessary and sufficient to conclude in a hypothetical or categorical imperative. So, the reasoning behind an action might look something like this:

1) I want an apple.
2) The store sells apples, for a price I'm willing to pay.
3) It's not too much trouble to get there.
4) I have no other reason not to go get some apples.
C) I should get some apples from the store.

My claim is just that (C) follows and is true of everyone for whom (1)-(4) are true. If (1)-(4) are true of you, but you reject (C), then you're wrong to do so. Just as anyone would be wrong to accept 'If P then Q' and 'P' but reject the conclusion 'Q'.
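
A minimal sketch of that claim, in Lean, with hypothetical predicate names standing in for (1)-(4) and (C); the point is only that the conclusion follows for any person of whom all four premises hold:

    -- Illustrative formalization: (1)-(4) as predicates of a person, and the
    -- claim that the conclusion holds for anyone who satisfies all four.
    example (Person : Type)
        (WantsApple StoreSells EasyTrip NoReasonAgainst ShouldGetApples : Person → Prop)
        (rule : ∀ q, WantsApple q → StoreSells q → EasyTrip q →
                     NoReasonAgainst q → ShouldGetApples q)
        (p : Person)
        (h1 : WantsApple p) (h2 : StoreSells p)
        (h3 : EasyTrip p) (h4 : NoReasonAgainst p) :
        ShouldGetApples p :=
      rule p h1 h2 h3 h4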

Replies from: Lumifer
comment by Lumifer · 2014-02-04T16:03:21.227Z · LW(p) · GW(p)

I took 'reasons' to be everything necessary and sufficient to conclude in a hypothetical or categorical imperative.

That's circular reasoning: if you define reasons as "everything necessary and sufficient", well, of course, if they don't conclude in an imperative they are not sufficient and so are not proper reasons :-/

In your example (4) is the weak spot. You're making a remarkably wide and strong claim -- one common in logical exercises but impossible to make in reality. There are always reasons pro and con and it all depends on how you weight them.

Consider any objection to your conclusion (C) (e.g. "Eh, I feel lazy now") -- any objection falls under (4) and so you can say that it doesn't apply. And we're back to the circle...

Replies from: None
comment by [deleted] · 2014-02-04T17:20:01.797Z · LW(p) · GW(p)

That's circular reasoning:...

Not if I have independent reason to think that 'everything necessary and sufficient to conclude an imperative' is a reason, which I think I do.

In your example (4) is the weak spot. You're making a remarkably wide and strong claim -- one common in logical exercises but impossible to make in reality.

To be absolutely clear: the above is an empirical claim. Something for which we need evidence on the table. I'm indifferent to this claim, and it has no bearing on my point.

My point is just this conditional: IF (1)-(4) are true of any individual, that individual cannot rationally reject (C).

You might object to the antecedent (on the grounds that (4) is not a claim we can make in practice), but that's different from objecting to the conditional. If you don't object to the conditional, then I don't think we have any disagreement, except the empirical one. And on that score, I find your view very implausible, and neither of us is prepared to argue about it. So we can drop the empirical point.

comment by Jiro · 2014-01-30T00:44:11.318Z · LW(p) · GW(p)

That fails to include weighing of that against other considerations. If you're thirsty, there's plenty of water, and you're not trying to stay thirsty, you "should drink water" only if the other considerations don't mean that drinking water is a bad idea despite the fact that it would quench your thirst. And in order to know that someone's other considerations don't outweigh the benefit of drinking water, you need to know so much about the other person that that situation is pretty much never going to happen with any nontrivial "should".

Replies from: None
comment by [deleted] · 2014-01-30T15:27:54.839Z · LW(p) · GW(p)

That fails to include weighing of that against other considerations.

By hypothesis, there are no other significant considerations. I think most of the time, people's rational considerations are about as simple as my hypothetical makes them out to be. Lumifer thinks they're generally much more complicated. That's an empirical debate that we probably can't settle.

But there's also the question of whether or not 'shoulds' can be ultimately personal. Suppose two lotteries. The first is won when your name is drawn out of a hat. Only one name is drawn, and so there's only one possible winner. That's a 'personal' lottery. Now take an impersonal lottery, where you win if your chosen 20-digit number matches the one drawn by the lottery moderators. Supposing you win, it's just because your number matched theirs. Anyone whose number matched theirs would win, but it's very unlikely that there will be more than one winner (or even one).

I'm saying that, leaving the empirical question aside, 'shoulds' bind us in the manner of an impersonal lottery. If we have a certain set of reasons, then they bind us, and they equally bind everyone who has that set of reasons (or something equivalent).

Lumifer is saying (I think) that 'shoulds' bind us in the manner of the personal lottery. They apply to each of us personally, though it's possible that by coincidence two different shoulds have the same content and so it might look like one should binds two people.

A consequence of Lumifer's view, it seems to me, is that a given set of reasons (where reasons are things that can apply equally to many individuals) is never sufficient to determine how we should act. This seems to me to be a very serious problem for the view.

Replies from: Lumifer
comment by Lumifer · 2014-01-30T16:29:13.807Z · LW(p) · GW(p)

a given set of reasons (where reasons are things that can apply equally to many individuals) is never sufficient to determine how we should act.

Correct, I would agree to that.

This seems to me to be a very serious problem for the view.

Why so?

comment by Sophronius · 2014-01-27T19:35:03.484Z · LW(p) · GW(p)

We seem to disagree on a fundamental level. I reject your notion of a strict fact-value distinction: I posit to you that all statements are either reducible to factual matters or else they are meaningless as a matter of logical necessity. Rationality indeed does not determine values, in the same way that rationality does not determine cheese, but questions about morality and cheese should both be answered in a rational and factual manner all the same.

If someone tells me that they grew up in a culture where they were taught that eating cheese is a sin, then I'm sorry to be so blunt about it (ok, not really) but their culture is stupid and wrong.

Replies from: Lumifer
comment by Lumifer · 2014-01-27T19:53:43.118Z · LW(p) · GW(p)

I reject your notion of a strict fact-value distinction: I posit to you that all statements are either reducible to factual matters or else they are meaningless as a matter of logical necessity.

Interesting. That's a rather basic and low-level disagreement.

So, let's take a look at Alice and Bob. Alice says "I like the color green! We should paint all the buildings in town green!". Bob says "I like the color blue! We should paint all the buildings in town blue!". Are these statements meaningless? Or are they reducible to factual matters?

By the way, your position was quite popular historically. The Roman Catholic Church was (and still is) a big proponent.

Replies from: Alejandro1, Sophronius
comment by Alejandro1 · 2014-01-27T20:26:29.277Z · LW(p) · GW(p)

I cannot speak for Sophronius of course, but here is one possible answer. It may be that morality is "objective" in the sense that Eliezer tried to defend in the metaethics sequence. Roughly, when someone says X is good they mean that X is part of a loosely defined set of things that make humans flourish, and by virtue of the psychological unity of mankind we can be reasonably confident that this is a more-or-less well-defined set and that if humans were perfectly informed and rational they would end up agreeing about which things are in it, as the CEV proposal assumes.

Then we can confidently say that both Alice and Bob in your example are objectively mistaken (it is completely implausible that CEV is achieved by painting all buildings the color that Alice or Bob happens to like subjectively the most, as opposed to leaving the decision to the free market, or perhaps careful science-based urban planning done by a FAI). We can also confidently say that some real-world expressions of values (e.g. "Heretics should be burned at the stake", which was popular a few hundred years ago) are false. Others are more debatable. In particular, the last two examples in Sophronius' list are cases where I am reasonably confident that his answers are the correct ones, but not as close to 100%-epsilon probability as I am on the examples I gave above.

Replies from: Lumifer
comment by Lumifer · 2014-01-27T20:40:10.635Z · LW(p) · GW(p)

Roughly, when someone says X is good they mean that X is part of a loosely defined set of things that make humans flourish, and by virtue of the psychological unity of mankind we can be reasonably confident that this is a more-or-less well-defined set and that if humans were perfectly informed and rational they would end up agreeing about which things are in it

Well, I can't speak for other people but when I say "X is good" I mean nothing of that sort. I am pretty sure the majority of people on this planet don't think of "good" this way either.

Then we can confidently say

Nope, you can say. If your "we" includes me then no, "we" can't say that.

Replies from: Alejandro1
comment by Alejandro1 · 2014-01-27T21:37:19.332Z · LW(p) · GW(p)

By "Then we can confidently say" I just meant "Assuming we accept the above analysis of morality, then we can confidently say…". I am not sure I accept it myself; I proposed it as a way one could believe that normative questions have objective answers without straying as far form the general LW worldview as being a Roman Catholic.

By the way, the metaethical analysis I outlined does not require that people think consciously of something like CEV whenever they use the word "good". It is a proposed explication in the Carnapian sense of the folk concept of "good" in the same way that, say, VNM utility theory is an explication of "rational".

comment by Sophronius · 2014-01-27T20:16:51.173Z · LW(p) · GW(p)

So, let's take a look at Alice and Bob. Alice says "I like the color green! We should paint all the buildings in town green!". Bob says "I like the color blue! We should paint all the buildings in town blue!". Are these statements meaningless? Or are they reducible to factual matters?

These statements are not meaningless. They are reducible to factual matters. "I like the colour blue" is a factual statement about Bob's preferences which are themselves reducible to the physical locations of atoms in the universe (specifically Bob's brain). Presumably Bob is correct in his assertion, but if I know Bob well enough I might point out that he absolutely detests everything that is the colour blue even though he honestly believes he likes the colour blue. The statement would be false in that case.

Furthermore, the statement "We should paint all the buildings in town blue!" follows logically from his previous statement about his preferences regarding blueness. Certainly, the more people are found to prefer blueness over greenness, the more evidence this provides in favour of the claim "We should paint all the buildings in town blue!" which is itself reducible to "A large number of people including myself prefer for the buildings in this town to be blue, and I therefore favour painting them in this colour!"

Contrast the above with the statement "I like blue, therefore we should all have cheese", which is also a should claim but which can be rejected as illogical. This should make it clear that should statements are not all equally valid, and that they are subject to logical rigour just like any other claim.

Replies from: Lumifer
comment by Lumifer · 2014-01-27T20:30:00.402Z · LW(p) · GW(p)

"I like the colour blue" is a factual statement about Bob's preferences which are themselves reducible to the physical locations of atoms in the universe (specifically Bob's brain).

Let's introduce Charlie.

"I think women should be barefoot and pregnant" is a factual statement about Charlie's preferences which are themselves reducible to the physical locations of atoms in the universe (specifically Charlie's brain).

Furthermore, the statement "We should paint all the buildings in town blue!" follows logically from his previous statement about his preferences regarding blueness.

Furthermore, the statement "We should make sure women remain barefoot and pregnant" follows logically from Charlie's previous statement about his preferences regarding women.

I would expect you to say that Charlie is factually wrong. In which way is he factually wrong and Bob isn't?

Certainly, the more people are found to prefer blueness over greenness, the more evidence this provides in favour of the claim "We should paint all the buildings in town blue!"

The statement "We should paint all the buildings in town blue!" is not a claim in need of evidence. It is a command, an expression of what Bob thinks should happen. It has nothing to do with how many people think the same.

Replies from: nshepperd, Sophronius
comment by nshepperd · 2014-01-29T01:12:10.881Z · LW(p) · GW(p)

Assuming "should" is meant in a moral sense, we can say that "We should paint all the buildings in town blue!" is in fact a claim in need of evidence. Specifically, it says (to 2 decimal places) that we would all be better off / happier / flourish more if the buildings are painted blue. This is certainly true if it turns out the majority of the town really likes blue, so that they would be happier, but it does not entirely follow from Bob's claim that he likes blue—if the rest of the town really hated blue, then it would be reasonable to say that their discomfort outweighed his happiness. In this case he would be factually incorrect to say "We should paint all the buildings in town blue!".

In contrast, you can treat "We should make sure women remain barefoot and pregnant" as a claim in need of evidence, and in this case we can establish it as false. Most obviously because the proposed situation would not be very good for women, and we shouldn't do something that harms half the human race unnecessarily.

Replies from: Lumifer, Eugine_Nier
comment by Lumifer · 2014-01-29T17:19:50.299Z · LW(p) · GW(p)

Assuming "should" is meant in a moral sense

Not at all, and I don't see why you would assume a specific morality.

Bob says "We should paint all the buildings in town blue!" to mean that it would make him happier and he doesn't care at all about what other people around think about the idea.

Bob is not a utilitarian :-)

you can treat "We should make sure women remain barefoot and pregnant" as a claim in need of evidence

Exactly the same thing -- Charlie is not a utilitarian either. He thinks he will be better off in the world where women are barefoot and pregnant.

Replies from: blacktrance, hyporational
comment by blacktrance · 2014-01-29T17:46:15.948Z · LW(p) · GW(p)

But he says "We should" not "I want" because there is the implication that I should also paint the buildings blue. But if the only reason I should do so is because he wants me to, it raises the question of why I should do what he wants. And if he answers "You should do what I want because it's what I want", it's a tautology.

Replies from: Lumifer
comment by Lumifer · 2014-01-29T17:50:59.306Z · LW(p) · GW(p)

Imagine Vladimir Putin visiting a Russian village and declaring "We should paint all the buildings blue!"

Suddenly "You should do what I want because it's what I want" is not a tautology any more but an excellent reason to get out your paint brush :-/

Replies from: blacktrance
comment by blacktrance · 2014-01-29T17:56:38.927Z · LW(p) · GW(p)

Putin has a way of adding his wants to my wants, through fear, bribes, or other incentives. But then the direct cause of my actions would be the fear/bribe/etc, not the simple fact that he wants it.

Replies from: Lumifer
comment by Lumifer · 2014-01-29T18:00:02.869Z · LW(p) · GW(p)

And what difference does that make?

Replies from: blacktrance
comment by blacktrance · 2014-01-29T18:06:26.166Z · LW(p) · GW(p)

Presumably, Bob doesn't have a way of making me care about what he wants (beyond the extent to which I care about what a generic stranger wants). If he were to pay me, that would be different, but he can't make me care simply because that's his preference. When he says "We should paint the buildings blue" he's saying "I want the buildings painted blue" and "You want the buildings painted blue", but if I don't want the buildings painted blue, he's wrong.

Replies from: Lumifer
comment by Lumifer · 2014-01-29T18:13:32.583Z · LW(p) · GW(p)

Presumably, Bob doesn't have a way of making me care about what he wants

Why not? Many of the interactions in a human society are precisely ways of making others care about what someone wants.

In any case, the original issue was whether Bob's preference for blue could be described as "correct" or "wrong". How exactly Bob manages to get what he wants is neither here nor there.

he's saying ... "You want the buildings painted blue"

No, he is not saying that.

Replies from: blacktrance
comment by blacktrance · 2014-01-29T18:21:13.717Z · LW(p) · GW(p)

The original statement was "I like the color blue! We should paint all the buildings in town blue!" His preference for blue can neither be right nor wrong, but the second sentence is something that can be "correct" or "wrong".

Replies from: Lumifer
comment by Lumifer · 2014-01-29T18:36:34.531Z · LW(p) · GW(p)

but the second sentence is something that can be "correct" or "wrong".

Without specifying a particular value system, no, it can not.

Full circle back to the original.

Replies from: blacktrance
comment by blacktrance · 2014-01-29T18:47:17.678Z · LW(p) · GW(p)

There already is an existing value system -- what Bob and I value.

comment by hyporational · 2014-01-29T17:28:04.827Z · LW(p) · GW(p)

I think we're pretty close to someone declaring that egoism isn't a valid moral position, again.

Replies from: Lumifer
comment by Lumifer · 2014-01-29T17:44:05.439Z · LW(p) · GW(p)

I wonder if that someone will make the logical step to insisting that moral egoists should be reeducated to make them change to a "valid" moral position :-/

comment by Eugine_Nier · 2014-01-29T03:20:50.567Z · LW(p) · GW(p)

In contrast, you can treat "We should make sure women remain barefoot and pregnant" as a claim in need of evidence, and in this case we can establish it as false. Most obviously because the proposed situation would not be very good for women

That's just looking at one of the direct consequences, accepting for the sake of argument that most women would prefer not to be "barefoot and pregnant". The problem is that, for these kinds of major social changes, the direct effects tend to be dominated by indirect effects and your argument makes no attempt to analyze the indirect effects.

Replies from: nshepperd, army1987, Lumifer
comment by nshepperd · 2014-01-29T04:14:59.018Z · LW(p) · GW(p)

Technically you are correct, so you can read my above argument as figuratively "accurate to one decimal place". The important thing is that there's nothing mysterious going on here in a linguistic or metaethical sense.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-01-30T02:47:56.039Z · LW(p) · GW(p)

But in a practical sense these things can't be computed from first principles, so it is necessary to rely on tradition at least to some extent.

Replies from: army1987
comment by A1987dM (army1987) · 2014-01-30T07:32:01.518Z · LW(p) · GW(p)

I partly agree, but a tradition that developed under certain conditions isn't necessarily optimal under different conditions (e.g. much better technology and medicine, less need for manual labour, fewer stupid people (at least for now), etc.).

Otherwise, we'd be even better off just executing our evolved adaptations, which had even more time to develop.

comment by A1987dM (army1987) · 2014-01-29T17:08:01.031Z · LW(p) · GW(p)

accepting for the sake of argument that most women would prefer not to be "barefoot and pregnant"

Revealed preferences of women buying shoes and contraception?

comment by Lumifer · 2014-01-29T17:23:51.142Z · LW(p) · GW(p)

accepting for the sake of argument that most women would prefer not to be "barefoot and pregnant".

Depends on the context :-D In China a few centuries ago a woman quite reasonably might prefer to be barefoot (as opposed to have her feet tightly bound to disfigure them) and pregnant (as opposed to barren which made her socially worthless).

comment by Sophronius · 2014-01-27T20:53:15.320Z · LW(p) · GW(p)

"I think women should be barefoot and pregnant" is a factual statement about Charlie's preferences which are themselves reducible to the physical locations of atoms in the universe (specifically Charlie's brain). Futhermore, the statement "We should make sure women remain barefoot and pregnant" follows logically from Charlie's previous statement about his preferences regarding women. I would expect you to say that Charlie is factually wrong. In which way is he factually wrong and Bob isn't?

Charlie is, presumably, factually correct in thinking that he holds that view. However, while preferences regarding colour are well established, I am sceptical regarding the claim that this is an actual terminal preference that Charlie holds. It is possible that he finds pregnant barefoot women attractive, in which case his statement gives valid information regarding his preferences which might be taken into account by others: in this case it is meaningful. Alternatively, if he were raised to think that this is a belief one ought to hold then the statement is merely signalling politics and is therefore of an entirely different nature.

"I like blue and want the town to be painted blue" gives factual info regarding the universe. "Women ought to be pregnant because my church says so!" does not have the primary goal of providing info, it has the goal of pushing politics.

Imagine a person holding a gun to your head and saying "You should give me your money". Regardless of his use of the word "should", he is making an implicit logical argument:
1) Giving me your money reduces your chances of getting shot by me
2) You presumably do not want to get shot
3) Therefore, you should give me your money

If you respond to the man by saying that morality is relative, you are rather missing the point.

The statement "We should paint all the buildings in town blue!" is not a claim in need of evidence. It is a command, an expression of what Bob thinks should happen. It has nothing to do with how many people think the same.

I think you are missing the subtle hidden meanings of everyday discourse. Imagine Bob saying that the town should be painted blue. Then, someone else comes up with arguments for why the town should not be painted blue. Bob eventually agrees. "You are right", he says, "that was a dumb suggestion". The fact that exchanges like this happen all the time shows that Bob's statement is not just a meaningless expression, but rather a proposal relying on implicit arguments and claims. Specifically, it relies on enough people in the village sharing his preference for blue houses that the notion will be taken seriously. If Bob did not think this to be the case, he probably would not have said what he did.

Replies from: Nornagest, Lumifer
comment by Nornagest · 2014-01-27T21:30:36.825Z · LW(p) · GW(p)

I am sceptical regarding the claim that [Charlie's preference re: gender roles] is an actual terminal preference that Charlie holds. It is possible that he finds pregnant barefoot women attractive [...] Alternatively, if he were raised to think that this is a belief one ought to hold then the statement is merely signalling politics and is therefore of an entirely different nature.

Okay, yeah, so belief in belief is a thing. We can profess opinions that we've been taught are virtuous to hold without deeply integrating them into our worldview; and that's probably increasingly common these days as traditional belief systems clank their way into some sort of partial conformity with mainstream secular ethics. But at the same time, we should not automatically assume that anyone professing traditional values -- or for that matter unusual nontraditional ones -- is doing so out of self-interest or a failure to integrate their ethics.

Setting aside the issues with "terminal value" in a human context, it may well be that post-Enlightenment secular ethics are closer in some absolute sense to a human optimum, and that a single optimum exists. I'm even willing to say that there's evidence for that in the form of changing rates of violent crime, etc., although I'm sure the reactionaries in the audience will be quick to remind me of the technological and demographic factors with their fingers on the scale. But I don't think we can claim to have strong evidence for this, in view of the variety of ethical systems that have come before us and the generally poor empirical grounding of ethical philosophy.

Until we do have that sort of evidence, I view the normative component of our ethics as fallible, and certainly not a good litmus test for general rationality.

Replies from: Sophronius
comment by Sophronius · 2014-01-28T09:49:33.720Z · LW(p) · GW(p)

Okay, yeah, so belief in belief is a thing. We can profess opinions that we've been taught are virtuous to hold without deeply integrating them into our worldview; and that's probably increasingly common these days as traditional belief systems clank their way into some sort of partial conformity with mainstream secular ethics. But at the same time, we should not automatically assume that anyone professing traditional values -- or for that matter unusual nontraditional ones -- is doing so out of self-interest or a failure to integrate their ethics.

On the contrary, I think it's quite reasonable to assume that somebody who bases their morality on a religious background has not integrated these preferences and is simply confused. My objection here is mainly in case somebody brings up a more extreme example. In these ethical debates, somebody always (me this time, I guess) brings up the example of Islamic sub-groups who throw acid in the faces of their daughters. Somebody always ends up claiming that "well that's their culture, you know, you can't criticize that. Who are you to say that they are wrong to do so?". In that case, my reply would be that those people do not actually have a preference for disfigured daughters, they merely hold the belief that this is right as a result of their religion. This can be seen from the fact that the only people who do this hold more or less the same set of religious beliefs. And given that the only ones who hold that 'preference' do so as a result of a belief which is factually false, I think it's again reasonable to say: No, I do not respect their beliefs and their culture is wrong and stupid.

Setting aside the issues with "terminal value" in a human context, it may well be that post-Enlightenment secular ethics are closer in some absolute sense to a human optimum, and that a single optimum exists.

The point is not so much whether there is one optimum, but rather that some cultures are better than others and that progress is in fact possible. If you agree with that, we have already closed most of the inferential distance between us. :)

Replies from: NancyLebovitz, Nornagest
comment by NancyLebovitz · 2014-01-28T20:37:41.299Z · LW(p) · GW(p)

Even if people don't have fully integrated beliefs in destructive policies, their beliefs can be integrated enough to lead to destructive behavior.

The Muslims who throw acid in their daughters' faces may not have an absolute preference for disfigured daughters, but they may prefer disfigured daughters over being attacked by their neighbors for permitting their daughters more freedom than is locally acceptable-- or prefer to not be attacked by the imagined opinions (of other Muslims and/or of Allah) which they're carrying in their minds.

Also, even though it may not be a terminal value, I'd say there are plenty of people who take pleasure in hurting people, and more who take pleasure in seeing other people hurt.

Replies from: Sophronius
comment by Sophronius · 2014-01-28T20:51:45.743Z · LW(p) · GW(p)

Agreed on each count.

comment by Nornagest · 2014-01-28T18:08:42.901Z · LW(p) · GW(p)

Somebody always ends up claiming that "well that's their culture, you know, you can't criticize that. Who are you to say that they are wrong to do so?" [...] The point is not so much whether there is one optimum, but rather that some cultures are better than others and that progress is in fact possible.

There's some subtlety here. I believe that ethical propositions are ultimately reducible to physical facts (involving idealized preference satisfaction, although I don't think it'd be productive to dive into the metaethical rabbit hole here), and that cultures' moral systems can in principle be evaluated in those terms. So no, culture isn't a get-out-of-jail-free card. But that works both ways, and I think it's very likely that many of the products of modern secular ethics are as firmly tied to the culture they come from as would be, say, an injunction to stone people who wear robes woven from two fibers. We don't magically divorce ourselves from cultural influence when we stop paying attention to the alleged pronouncements of the big beardy dude in the sky. For these reasons I try to be cautious about -- though I wouldn't go so far as to say "skeptical of" -- claims of ethical progress in any particular domain.

The other fork of this is stability of preference across individuals. I know I've been beating this drum pretty hard, but preference is complicated; among other things, preferences are nodes in a deeply nested system that includes a number of cultural feedback loops. We don't have any general way of looking at a preference and saying whether or not it's "true". We do have some good heuristics -- if a particular preference appears only in adherents of a certain religion, and their justification for it is "the Triple Goddess revealed it to us", it's probably fairly shallow -- but they're nowhere near good enough to evaluate every ethical proposition, especially if it's close to something generally thought of as a cultural universal.

Islamic sub-groups who throw acid in the faces of their daughters [...] the only people who do this hold more or less the same set of religious beliefs.

The Wikipedia page on acid throwing describes it as endemic to a number of African and Central and South Asian countries, along with a few outside those regions, with religious cultures ranging from Islam through Hinduism and Buddhism. You may be referring to some subset of acid attacks (the word "daughter" doesn't appear in the article), but if there is one, I can't see it from here.

Replies from: Sophronius
comment by Sophronius · 2014-01-28T19:08:47.923Z · LW(p) · GW(p)

Fair enough. I largely agree with your analysis: I agree that preferences are complicated, and I would even go as far as to say that they change a little every time we think about them. That does make things tricky for those who want to build a utopia for all mankind! However, in everyday life I think objections on such an abstract level aren't so important. The important thing is that we can agree on the object level, e.g. sex is not actually sinful, regardless of how many people believe it is. Saying that sex is sinful is perhaps not factually wrong, but rather it betrays a kind of fundamental confusion regarding the way reality works that puts it in the 'not even wrong' category. The fact that it's so hard for people to be logical about their moral beliefs is actually precisely why I think it's a good litmus test of rationality/clear thinking: If it were easy to get it right, it wouldn't be much of a test.

The Wikipedia page on acid throwing describes it as endemic to a number of African and Central and South Asian countries, along with a few outside those regions, with religious cultures ranging from Islam through Hinduism and Buddhism.

Looking at that page I am still getting the impression that it's primarily Islamic cultures that do this, but I'll agree that calling it exclusively Islamic was wrong. Thanks for the correction :)

comment by Lumifer · 2014-01-27T21:07:33.869Z · LW(p) · GW(p)

I am sceptical regarding the claim that this is an actual terminal preference that Charlie holds

Given that you know absolutely nothing about Charlie, a player in a hypothetical scenario, I find your scepticism entirely unwarranted. Fighting the hypothetical won't get you very far.

So, is Charlie factually wrong? On the basis of what would you determine that Charlie's belief is wrong and Bob's isn't?

Imagine a person holding a gun to your head and saying "You should give me your money". ... If you respond to the man by saying that morality is relative, you are rather missing the point.

Why would I respond like that? What does the claim that morality is relative have to do with threats of bodily harm?

I think you are missing the subtle hidden meanings of everyday discourse.

In this context I don't care about the subtle hidden meanings. People who believe they know the Truth and have access to the Sole Factually Correct Set of Values tend to just kill others who disagree. Or at the very least marginalize them and make them third-class citizens. All in the name of the Glorious Future, of course.

Replies from: Sophronius
comment by Sophronius · 2014-01-28T09:37:03.025Z · LW(p) · GW(p)

Well, given that Charlie indeed genuinely holds that preference, then no, he is not wrong to hold that preference. I don't even know what it would mean for a preference to be wrong. Rather, his preferences might conflict with the preferences of others, who might object to this state of reality by calling it "wrong", which seems like the mind-projection fallacy to me. There is nothing mysterious about this.

Similarly, the person in the original example of mine is not wrong to think men kissing each other is icky, but he IS wrong to conclude that there is therefore some universal moral rule that men kissing each other is bad. Again, just because rationality does not determine preferences, does not mean that logic and reason do not apply to morality!

In this context I don't care about the subtle hidden meanings. People who believe they know the Truth and have access to the Sole Factually Correct Set of Values tend to just kill others who disagree. Or at the very least marginalize them and make them third-class citizens. All in the name of the Glorious Future, of course.

I believe you have pegged me quite wrongly, sir! I only care about truth, not Truth. And yes, I do have access to some truths, as of course do you. Saying that logic and reason apply to morality and that therefore not all moral claims are equally valid (they can be factually wrong or entirely nonsensical) is quite a far cry from ushering in the Third Reich. The article on Less Wrong regarding the proper use of doubt seems pertinent here.

Replies from: Lumifer
comment by Lumifer · 2014-01-29T17:13:16.650Z · LW(p) · GW(p)

Well, given that Charlie indeed genuinely holds that preference, then no, he is not wrong to hold that preference.

I am confused. Did I misunderstand you or did you change your mind?

Earlier you said that "should" kind of questions have single correct answers (which means that other answers are wrong). A "preference" is more or less the same thing as a "value" in this context, and you staked out a strong position:

I reject your notion of a strict fact-value distinction: I posit to you that all statements are either reducible to factual matters or else they are meaningless as a matter of logical necessity. ... but questions about morality ... should ... be answered in a rational and factual manner all the same.

Since statements of fact can be correct or wrong and you said there is no "fact-value distinction", then values (and preferences) can be correct or wrong as well. However, in the parent post you say

I don't even know what it would mean for a preference to be wrong.

If you have a coherent position in all this, I don't see it.

Replies from: Sophronius
comment by Sophronius · 2014-01-30T18:03:49.067Z · LW(p) · GW(p)

I think you misunderstood me. Of course I don't mean that the terms "facts" and "values" represent the same thing. Saying that a preference itself is wrong is nonsense in the same way that claiming that a piece of cheese is wrong is nonsensical. It's a category error. When I say I reject a strict fact-value dichotomy I mean that I reject the notion that statements regarding values should somehow be treated differently from statements regarding facts, in the same way that I reject the notion of faith inhabiting a separate magisterium from science (i.e. special pleading). So my position is that when someone makes a moral claim such as "don't murder", they better be able to reduce that to factual statements about reality or else they are talking nonsense.

For example, "sex is sinful!" usually reduces to "I think my god doesn't like sex", which is nonsense because there is no such thing. On the other hand, if someone says "Stealing is bad!", that can be reduced to the claim that allowing theft is harmful to society (in a number of observable ways), which I would agree with. As such I am perfectly comfortable labelling some moral claims as valid and some as nonsense.

Replies from: Lumifer
comment by Lumifer · 2014-01-30T18:37:29.336Z · LW(p) · GW(p)

I don't see how this sentence

Saying that a preference itself is wrong is nonsense in the same way that claiming that a piece of cheese is wrong is nonsensical. It's a category error.

is compatible with this sentence

I reject the notion that statements regarding values should somehow be treated differently from statements regarding facts

Replies from: Sophronius
comment by Sophronius · 2014-01-31T11:13:18.400Z · LW(p) · GW(p)

I am distinguishing between X and statements regarding X. The statement "Cheese is wrong" is nonsensical. The statement "it's nonsensical to say cheese is wrong" is not nonsensical. Values and facts are not the same, but statements regarding values and facts should be treated the same way.

Similarly: Faith and Science are not the same thing. Nonetheless, I reject the notion that claims based on faith should be treated any differently from scientific claims.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-02-01T03:44:52.505Z · LW(p) · GW(p)

Similarly: Faith and Science are not the same thing. Nonetheless, I reject the notion that claims based on faith should be treated any differently from scientific claims.

Do you also reject the notion that claims about mathematics and science should be treated differently?

Replies from: Sophronius
comment by Sophronius · 2014-02-01T10:03:20.682Z · LW(p) · GW(p)

In the general sense that all claims must abide by the usual requirements of validity and soundness of logic, sure.

In fact, you might say that mathematics is really just a very pure form of logic, while science deals with more murky, more complicated matters. But the essential principle is the same: You better make sure that the output follows logically from the input, or else you're not doing it right.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-02-01T22:57:56.147Z · LW(p) · GW(p)

In the general sense that all claims must abide by the usual requirements of validity and soundness of logic, sure.

My point is that what constitutes "validity" and "soundness of logic" differs between the two domains.

comment by A1987dM (army1987) · 2014-01-30T20:04:34.599Z · LW(p) · GW(p)

My school gym teacher once tried to tell me that there is literally no difference between boys and girls except for what's between their legs.

I think it's more likely he was misusing the word “literally”/wearing belief as attire (in technical terms, bullshitting) than that he actually believed it. After all, I guess he could tell boys and girls apart without looking between their legs, couldn't he?

comment by James_Miller · 2014-01-30T20:46:35.015Z · LW(p) · GW(p)

People should be allowed to do in their bedroom whatever they want as long as it doesn't harm anyone. Is this contentious? It shouldn't be.

But you can always find harm if you allow for feelings of disgust, or take into account competition in sexual markets (i.e. if having sex with X is a substitute for having sex with Y then Y might be harmed if someone is allowed to have sex with X.)

Replies from: Sophronius
comment by Sophronius · 2014-01-31T11:31:40.608Z · LW(p) · GW(p)

Ok, that's a fair enough point. Sure, feelings do matter. However, I generally distinguish between genuine terminal preferences and mere surface emotions. The reason for this is that often it is easier/better to change your feelings than for other people to change their behaviour. For example, if I strongly dislike the name James Miller, you probably won't change your name to take my feelings into account.

(At the risk of saying something political: This is the same reason I don't like political correctness very much. I feel that it allows people to frame political discourse purely by being offended.)

comment by Vaniver · 2014-01-28T00:07:12.797Z · LW(p) · GW(p)

-People should be allowed to do in their bedroom whatever they want as long as it doesn't harm anyone. Is this contentious? It shouldn't be.

The standard reply to this is that many people hurt themselves by their choices, and that justifies intervention. (Even if we hastily add an "else" after "anyone," note that hurting yourself hurts anyone who cares about you, and thus the set of acts which harm no one is potentially empty.)

comment by ChristianKl · 2014-01-29T20:24:18.738Z · LW(p) · GW(p)

-My school gym teacher once tried to tell me that there is literally no difference between boys and girls except for what's between their legs. I have heard similar claims from gender studies classes. That counts as obviously false, surely?

It's wrong on a biological level. From my physiology lecture: women blink twice as often as men, and they have less water in their bodies.

-People should be allowed to do in their bedroom whatever they want as long as it doesn't harm anyone. Is this contentious? It shouldn't be.

So you are claiming either "Children are not people" or "Pedophilia should be legal". I don't think either of those claims has societal approval, let alone is a clear-cut issue.

But even if you switch the statement to the standard "Consenting adults should be allowed to do in their bedroom whatever they want as long as it doesn't harm anyone", the terms "consenting" (can someone with >1.0 promille alcohol consent?) and "harm" (emotional harm exists, and not getting tested for STDs before having unprotected sex has the potential to harm) are open to debate.

-A guy in college tried to convince me that literally any child could be raised to be Mozart. More generally, the whole "blank slate" notion where people claim that genes don't matter at all.

The maximal effect of a strong cognitive intervention might very well bring the average person to Mozart levels. We know relatively little about doing strong interventions to improve human mental performance.

But genes do matter.

-Women should be allowed to apply for the same jobs as men. Surely even people who think that women are less intelligent than men on average should agree with this?

It depends on the role. If a movie producer casts actors for a specific role, gender usually matters a great deal.

A bit more controversially: I think there are cases where it's useful for men to come together in an environment where they don't have to signal things to women.

Replies from: nshepperd, Eugine_Nier
comment by nshepperd · 2014-01-30T06:34:56.094Z · LW(p) · GW(p)

So you are claiming either "Children are not people" or "Pedophilia should be legal". I don't think either of those claims has societal approval, let alone is a clear-cut issue.

I'd expect them to assert that paedophilia does harm. That's the obvious resolution.

Replies from: ChristianKl, army1987
comment by ChristianKl · 2014-01-30T08:46:15.786Z · LW(p) · GW(p)

I'd expect them to assert that paedophilia does harm. That's the obvious resolution.

Courts are not supposed to investigate whether the child is emotionally harmed by the experience, but whether he or she is under a certain age threshold. You could certainly imagine a legal system where psychologists are always asked whether a given child was harmed by having sex, instead of a legal system that makes the decision through an age criterion.

I think a more reasonable argument for the age boundary isn't that every child gets harmed, but that most get harmed, and that having a law that forbids the behavior prevents a lot of children from getting harmed.

I don't think you are a bad person for arguing that we should have a system that focuses on the amount of harm done instead of on an arbitrary age boundary, but that's not the system we have that's backed by societal consensus.

We also don't put anybody in prison for having sex with a 19-year-old, breaking her heart, and watching as she commits suicide. We would judge a case like that as a tragedy, but we wouldn't legally charge the responsible person with anything.

The concept of consent is pretty important for our present system. Even in cases where no harm is done we take a breach of consent seriously.

comment by A1987dM (army1987) · 2014-01-30T19:59:39.477Z · LW(p) · GW(p)

Actually I'm under the impression that the ‘standard’ resolution is not about the “harm” part but about the “want” part: it's assumed that people below a certain age can't want sex, to the point that said age is called the age of consent and sex with people younger than that is called a term which suggests it's considered a subset of sex with people who don't want it.

(I'm neither endorsing nor mocking this, just describing it.)

Replies from: Lumifer
comment by Lumifer · 2014-01-30T20:08:09.028Z · LW(p) · GW(p)

Actually I'm under the impression that the ‘standard’ resolution is not about the “harm” part but about the “want” part

I think your impression is mistaken.

it's assumed that people below a certain age can't want sex, to the point that said age is called the age of consent

Nope. It is assumed that people below a certain age cannot give informed consent. In other words, they are assumed to be not capable of good decisions and to be not responsible for the consequences. What they want is irrelevant. If you're below the appropriate age of consent, you cannot sign a valid contract, for example.

Below the age of consent you basically lack the legal capacity to agree to something.

Replies from: army1987
comment by A1987dM (army1987) · 2014-01-30T20:09:48.136Z · LW(p) · GW(p)

I assumed “want” to mean ‘consent’ in that sentence.

Replies from: Lumifer
comment by Lumifer · 2014-01-30T20:18:19.434Z · LW(p) · GW(p)

That's not what these words mean, not even close.

comment by Eugine_Nier · 2014-01-30T06:11:51.331Z · LW(p) · GW(p)

So you are claiming either "Children are not people" or "Pedophilia should be legal". I don't think either of those claims has societal approval, let alone is a clear-cut issue.

Well, I suppose Sophronius could argue that pedophilia should be legal; after all, many things (especially related to sex) that were once socially unacceptable are now considered normal.

Replies from: ChristianKl
comment by ChristianKl · 2014-01-30T12:53:03.582Z · LW(p) · GW(p)

I suppose Sophronius could argue that pedophilia should be legal

Even if he thinks that it should be legal, it's not a position that everyone is likely to agree on. Sophronius wanted to find examples where everyone can agree.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-01-31T02:45:49.424Z · LW(p) · GW(p)

No, he was listing political, i.e., controversial, questions with clear cut answers. I don't know what Sophronius considers clear cut.

Replies from: ChristianKl
comment by ChristianKl · 2014-01-31T10:38:18.427Z · LW(p) · GW(p)

I don't know

Really? Given his history, I think the answer is pretty clear that he's not the kind of person who's out to argue that legalizing pedophilia is a clear-cut issue.

He also said something about wanting to avoid the kind of controversy that causes downvoting.

comment by Nornagest · 2014-01-27T18:22:37.364Z · LW(p) · GW(p)

In all of these cases, the people breaking with the conclusion you presumably believe to be obvious often do so because they believe the existing research to be hopelessly corrupt. This is of course a rather extraordinary statement, and I'm pretty sure they'd be wrong about it (that is, as sure as I can be with a casual knowledge of each field and a decent grasp of statistics), but bad science isn't exactly unheard of. Given the right set of priors, I can see a rational person holding each of these opinions at least for a time.

In the latter two, they might additionally have different standards for "should" than you're used to.

Replies from: Sophronius
comment by Sophronius · 2014-01-27T18:37:08.739Z · LW(p) · GW(p)

I'm not sure what you are trying to convince me of here. That people who disagree have reasons for disagreeing? Well of course they do, it's not like they disagree out of spite. The fact that they are right in their minds does not mean that they are in fact right.

And yes, they might have a different definition for should. Doesn't matter. If you talk to someone who believes that men kissing each other is "just plain wrong", you'll inevitably find that they are confused, illogical and inconsistent about their beliefs and are irrational in general. Do you think that just because a statement involves the word "should", you can't say that they are wrong?

Replies from: Nornagest
comment by Nornagest · 2014-01-27T18:43:54.629Z · LW(p) · GW(p)

The question I was trying to answer wasn't whether they were right, it was whether a rational actor could hold those opinions. That has a lot less to do with factual accuracy and a lot more to do with internal consistency.

As to the correctness of normative claims -- well, that's a fairly subtle question. Deontological claims are often entangled with factual ones (e.g. the existence-of-God thing), so that's at least one point of grounding, but even from a consequential perspective you need an optimization objective. Rational actors may disagree on exactly what that objective is, and reasonable-sounding objectives often lead to seriously counterintuitive prescriptions in some cases.

Replies from: Sophronius
comment by Sophronius · 2014-01-27T19:24:23.783Z · LW(p) · GW(p)

The question I was trying to answer wasn't whether they were right, it was whether a rational actor could hold those opinions. That has a lot less to do with factual accuracy and a lot more to do with internal consistency.

Oh, right, I see what you mean. Sure, people can disagree with each other without either being irrational: All that takes is for them to have different information. For example, one can rationally believe the earth is flat, depending on which time and place one grew up in.

That does not change the fact that these questions have a correct answer though, and it should be pretty clear what the correct answers are in the above examples, even though you can never be 100% certain of course. The point remains that just because a question is political does not mean that all answers are equally valid. False equivalence and all that.

comment by A1987dM (army1987) · 2014-01-28T06:47:57.167Z · LW(p) · GW(p)

Women should be allowed to apply for the same jobs as men.

Including as basso singers? ;-)

(As you worded your sentence, I would agree with it, but I would also add "But employers should be allowed to not hire them.")

comment by TheAncientGeek · 2014-01-29T18:24:38.530Z · LW(p) · GW(p)

I would have gone for "slavery is bad"

comment by ChristianKl · 2014-01-26T23:45:20.826Z · LW(p) · GW(p)

There is a question about it. It's the existential threat that's most feared among Less Wrongers. Bioengineered pandemics are a threat due to genetically manipulated organisms.

If that's not what you want to know, how would you word your question?

Replies from: army1987
comment by A1987dM (army1987) · 2014-01-28T06:31:48.529Z · LW(p) · GW(p)

I took "bioengineered" to imply 'deliberately' and "pandemic" to imply 'contagious', and in any event fear of > 90% of humans dying by 2100 is far from the only possible reason to oppose GMOs.

Replies from: ChristianKl
comment by ChristianKl · 2014-01-28T13:38:42.715Z · LW(p) · GW(p)

any event fear of > 90% of humans dying by 2100 is far from the only possible reason to oppose GMOs.

I didn't claim that it's the only reason. That's why I asked for a more precise question.

I took "bioengineered" to imply 'deliberately' and "pandemic" to imply 'contagious',

If the tools that you need to genetically manipulate organisms are widely available, it's much easier to deliberately produce a pandemic.

It's possible to make bacteria immune to antibiotics by just giving them antibiotics rather than manipulating their genes directly. On the other hand, I think that people fear bioengineered pandemics because they expect stronger capabilities for manipulating organisms in the future.

comment by Eugine_Nier · 2014-01-26T21:15:24.179Z · LW(p) · GW(p)

My issue with GMOs is basically the same one Taleb describes in this quote.

comment by taryneast · 2014-02-09T05:14:13.910Z · LW(p) · GW(p)

"Time online per week seems plausible from personal experience, but I didn't expect the average to be so high."

I personally spend an average of 50 hours a week online.

That's because, by profession, I am a web-developer.

The percentage of LessWrong members in IT is clearly higher than that of the average population.

I postulate that the higher number of other IT geeks (who, like me, are also likely spending high numbers of hours online per week) is pushing up the average to a level that seems, to you, to be surprisingly high.

comment by taryneast · 2014-02-09T05:16:18.021Z · LW(p) · GW(p)

"The overconfidence data hurts, but as someone pointed out in the comments, it's hard to ask a question which isn't misunderstood."

I attributed this poor level of calibration more to the fact that it's easier to read about what you should be doing than to actually go and practice the skill and get better at it.

comment by JacekLach · 2014-01-23T19:12:00.598Z · LW(p) · GW(p)

People, use spaced repetition! It's been studied academically and been shown to work brilliantly; it's really easy to incorporate in your daily life in comparison to most other LW material etc... Well, I'm comparatively disappointed with these numbers, though I assume they are still far higher than in most other communities

I'm one of the people who have never used spaced repetition, though I've heard of it. I don't doubt it works, but what do you actually need to remember nowadays? I'd probably use it if I was learning a new language (which I don't really plan to do anytime soon)... What other skills work nicely with spaced repetition?

I just don't feel the need to remember things when I have google / wikipedia on my phone.

Replies from: memoridem, Nornagest
comment by memoridem · 2014-01-23T19:43:02.013Z · LW(p) · GW(p)

Isn't there anything you already know but wouldn't like to forget? SRS is for preserving your precious stored memories, not necessarily for learning new stuff. There are probably a lot of things that wouldn't even cross your mind to google if they were erased by time. Googling could also waste time compared to storing memories if you have to do it often enough (roughly 5 minutes in your lifetime per fact).

What other skills work nicely with spaced repetition?

In my experience anything you can write into brief flashcards. Some simple facts can work as handles for broader concepts once you've learned them. You could even record triggers for episodic memories that are important to you.

Replies from: JacekLach
comment by JacekLach · 2014-01-23T20:47:33.889Z · LW(p) · GW(p)

Isn't there anything you already know but wouldn't like to forget?

Yeah, that's pretty much the problem. Not really. I.e. there is stuff I know that would be inconvenient to forget, because I use this knowledge every day. But since I already use it every day, SR seems unnecessary.

Things I don't use every day are not essential - the cost of looking them up is minuscule since it happens rarely.

I suppose a plausible use case would be birth dates of family members, if I didn't have google calendar to remind me when needed.

Edit: another use case that comes to mind would be names. I'm pretty bad with names (though I've recently begun to suspect that probably I'm as bad with remembering names as anyone else, I just fail to pay attention when people introduce themselves). But asking to take someone's picture 'so that I can put it on a flashcard' seems awkward. Facebook to the rescue, I guess?

(though I don't really meet that many people, so again - possibly not worth the effort in maintaining such a system)

comment by Nornagest · 2014-01-23T20:09:53.067Z · LW(p) · GW(p)

I don't know what you work on, but many fields include bodies of loosely connected facts that you could in principle look up, but which you'd handle much more efficiently if you just memorized them. In programming this might mean functions in a particular library that you're working with (the C++ STL, for example). In chemistry, it might be organic reactions. The signs of medical conditions might be another example, or identities related to a particular branch of mathematics.

SRS would be well suited to maintaining any of these bodies of knowledge.

Replies from: JacekLach
comment by JacekLach · 2014-01-23T20:45:09.524Z · LW(p) · GW(p)

I'm a software dev.

In programming this might mean functions in a particular library that you're working with (the C++ STL, for example)

Right. I guess I somewhat do 'spaced repetition' here, just by the fact that every time I interact with a particular library I'm reminded of its function. But that is incidental - I don't really care about remembering libraries that I don't use, and those that I use regularly I don't need SR to maintain.

I suppose medical conditions looks more plausible as a use case - you really need to remember a large set of facts, any of which is actually used very rarely. But that still doesn't seem useful to me personally - I can think of no dataset that'd be worth the effort.

I guess I should just assume I'm an outlier there, and simply keep SR in mind in case I ever find myself needing it.

Replies from: Antiochus
comment by Antiochus · 2014-01-24T18:45:13.021Z · LW(p) · GW(p)

I've used SRS to learn programming theory that I otherwise had trouble keeping straight in my head. I've made cards for design patterns, levels of database normalization, fiddly elements of C++ referencing syntax, etc.

Replies from: ChristianKl
comment by ChristianKl · 2014-01-24T19:16:41.780Z · LW(p) · GW(p)

Do you have your design pattern cards formatted in a way that is likely to be useful for other people?

Replies from: Antiochus
comment by Antiochus · 2014-01-24T20:14:29.988Z · LW(p) · GW(p)

They're mostly copy-and-pasted descriptions from wikipedia, tweaked with added info from Design Patterns. I'm not sure they'd be very useful to other people. I used them to help prepare for an interview, so when I was doing my cards I'd describe them out loud, then check the description, then pop open the book to clarify anything I wasn't sure on.

edit: And I'd do the reverse, naming the pattern based on the description.
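
For illustration, here is a minimal sketch of what such a two-sided deck might look like, assuming a plain question/answer format with hypothetical, paraphrased pattern descriptions rather than Antiochus's actual cards:

```python
# Hypothetical design-pattern flashcards: forward cards ask for a description,
# reverse cards ask for the pattern name. Descriptions are placeholder paraphrases.
patterns = {
    "Singleton": "Ensure a class has only one instance and provide a global point of access to it.",
    "Observer": "Let observers subscribe to a subject and be notified when its state changes.",
}

cards = []
for name, description in patterns.items():
    cards.append((f"Describe the {name} pattern.", description))  # forward card
    cards.append((f"Name the pattern: {description}", name))      # reverse card

for front, back in cards:
    print(f"Q: {front}\nA: {back}\n")
```

Most SRS programs can import simple question/answer pairs like these from a plain text file, so a deck in this style is easy to share.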

comment by Kaj_Sotala · 2014-01-19T05:56:30.722Z · LW(p) · GW(p)

Other answers which made Ozy giggle [...] "pirate,"

Not necessarily a joke.

Replies from: Creutzer
comment by Creutzer · 2014-01-19T09:43:51.198Z · LW(p) · GW(p)

The link contains a typo, it links to a non-existing article on the/a Pirate part instead of the Pirate Party.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-01-19T12:18:38.697Z · LW(p) · GW(p)

Fixed, thanks.

comment by AlexMennen · 2014-01-19T18:51:55.552Z · LW(p) · GW(p)

On average, effective altruists (n = 412) donated $2503 to charity, and other people (n = 853) donated $523 - obviously a significant result.

There could be some measurement bias here. I was on the fence about whether I should identify myself as an effective altruist, but I had just been reminded of the fact that I hadn't donated any money to charity in the last year, and decided that I probably shouldn't be identifying as an effective altruist myself despite having philosophical agreements with the movement.

1265 people told us how much they give to charity; of those, 450 gave nothing. ... In order to calculate percent donated I divided charity donations by income in the 947 people helpful enough to give me both numbers. Of those 947, 602 donated nothing to charity, and so had a percent donated of 0.

This is blasphemy against Saint Boole.

Replies from: wuncidunci
comment by wuncidunci · 2014-01-22T22:20:52.329Z · LW(p) · GW(p)

Did you mean Saint Boole?

And whence the blasphemy?

Replies from: Vaniver, AlexMennen
comment by Vaniver · 2014-01-22T22:24:51.757Z · LW(p) · GW(p)

And whence the blasphemy?

1265 people are in group A. 947 are in group B, which is completely contained in A. Of all the people in group A, 450 satisfy property C, whereas this is true for 602 people in group B, all of whom are also in group A. 602 is larger than 450, so something has gone wrong.

Replies from: wuncidunci
comment by wuncidunci · 2014-01-23T07:59:43.728Z · LW(p) · GW(p)

Ahh, thank you.

comment by AlexMennen · 2014-01-22T23:34:38.500Z · LW(p) · GW(p)

Yes, thanks. Fixed. I endorse Vaniver's explanation of the blasphemy.

comment by RRand · 2014-01-19T06:30:48.821Z · LW(p) · GW(p)

There's something strange about the analysis posted.

How is it that 100% of the general population with high (>96%) confidence got the correct answer, but only 66% of a subset of that population? Looking at the provided data, it looks like 3 out of 4 people (none with high Karma scores) who gave the highest confidence were right.

(Predictably, the remaining person with high confidence answered 500 million, which is almost the exact population of the European Union (or, in the popular parlance "Europe"). I almost made the same mistake, before realizing that a) "Europe" might be intended to include Russia, or part of Russia, plus other non-EU states and b) I don't know the population of those countries, and can't cover both bases. So in response, I kept the number and decreased my confidence value. Regrettably, 500 million can signify both tremendous confidence and very little confidence, which makes it hard to do an analysis of this effect.)

Replies from: jkaufman
comment by jefftk (jkaufman) · 2014-01-19T15:52:10.376Z · LW(p) · GW(p)

How is it that 100% of the general population with high (>96%) confidence got the correct answer, but only 66% of a subset of that population?

What if it was divided into (typical-lw) (elite-lw) not (typical-lw (elite-lw))? That is, disjoint sets not subsets.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2014-01-19T16:08:09.620Z · LW(p) · GW(p)

I think it's more likely that I accidentally did 95-100 inclusive for one and 95-100 exclusive for the other.
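
For illustration, a minimal sketch of how an inclusive versus exclusive cutoff changes which answers land in the top bucket, using made-up confidence numbers rather than the actual survey data:

```python
# Hypothetical (confidence %, answered correctly) pairs for the top calibration bucket.
answers = [(95, True), (96, True), (98, False), (99, True), (100, True)]

def accuracy(rows):
    return sum(correct for _, correct in rows) / len(rows)

inclusive = [r for r in answers if 95 <= r[0] <= 100]  # keeps the 95s and 100s
exclusive = [r for r in answers if 95 < r[0] < 100]    # drops them

print(accuracy(inclusive))  # 0.8 on these made-up numbers
print(accuracy(exclusive))  # ~0.67, so the two analyses disagree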

comment by redlizard · 2014-01-19T04:38:44.990Z · LW(p) · GW(p)

Passphrase: eponymous haha_nice_try_CHEATER

Well played :)

Replies from: RRand
comment by RRand · 2014-01-19T06:08:08.057Z · LW(p) · GW(p)

True, though they forgot to change the "You may make my anonymous survey data public (recommended)" to "You may make my ultimately highly unanonymous survey data public (not as highly recommended)".

Replies from: lmm
comment by lmm · 2014-01-20T21:01:13.712Z · LW(p) · GW(p)

It'd be easy enough to claim the prize anonymously, no?

comment by Oscar_Cunningham · 2014-01-19T02:06:08.971Z · LW(p) · GW(p)

Nice work Yvain and Ozy, and well done to Zack for winning the MONETARY REWARD.

I continue to be bad at estimating but well calibrated.

(Also, I'm sure that this doesn't harm the data to any significant degree but I appear to appear twice in the data, both rows 548 and 552 in the xls file, with row 548 being more complete.)

comment by A1987dM (army1987) · 2014-01-21T17:22:04.137Z · LW(p) · GW(p)

I expected that the second word in my passphrase would stay secret no matter what and the first word would only be revealed if I won the game.

Well, thank goodness I didn't pick anything too embarrassing.

comment by Gunnar_Zarncke · 2014-01-19T10:37:09.871Z · LW(p) · GW(p)

Some thoughts on the correlations:

At first I saw that IQ seems to correlate with fewer children (a not uncommon observation):

Number of children/ACT score: -.279 (269)

Number of children/SAT score (2400): -.223 (345)

But then I noticed that number of children obviously correlates with age, and age with IQ (somewhat):

Number of children/age: .507 (1607)

SAT score out of 1600/age: -.194 (422)

So it may be that older people just have lower IQ (Flynn effect).


Something to think about:

Time on Less Wrong/IQ: -.164 (492)

This can be read as smarter people stay shorter on LW. It seems to imply that over time LW will degrade in smarts. But it could also just mean that smarter people just turn over faster (thus also entering faster).

On the other hand most human endeavors tend toward the mean over time.


Time on Less Wrong/age: -.108 (1491)

Older people (like me ahem) either take longer to notice LW or the community is spreading from younger to older people slowly.


This made me laugh:

Number of current partners/karma score: .137 (1470)

Guess who does the voting :-)

Replies from: ChristianKl, taryneast, Omegaile, Vaniver
comment by ChristianKl · 2014-01-19T15:35:26.888Z · LW(p) · GW(p)

So it may be that older people just have lower IQ (Flynn effect).

In the data set older people have a significantly higher IQ than younger people. The effect, however, disappears if you control for whether someone lives in the US.

US LW users are on average more intelligent and older.
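
For illustration, a minimal sketch of that kind of check in pandas, using invented numbers (the `us`, `age`, and `iq` columns and their distributions are placeholders, not the survey data):

```python
import numpy as np
import pandas as pd

# Simulated respondents: the US group is drawn older and higher-IQ on average,
# but within each group age and IQ are generated independently.
rng = np.random.default_rng(0)
us = rng.integers(0, 2, size=1000)
df = pd.DataFrame({
    "us": us,
    "age": rng.normal(25 + 10 * us, 6),
    "iq": rng.normal(130 + 10 * us, 10),
})

# Pooled correlation picks up the between-group difference...
print(df["age"].corr(df["iq"]))

# ...while the within-group correlations are roughly zero.
print(df.groupby("us").apply(lambda g: g["age"].corr(g["iq"])))
```

A regression of IQ on age with a dummy for US residence would be the more standard way to control for it, but the grouped correlations make the confound visible at a glance.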

comment by taryneast · 2014-02-09T05:23:59.693Z · LW(p) · GW(p)

"Time on Less Wrong/IQ: -.164 (492)

This can be read as smarter people stay shorter on LW. It seems to imply that over time LW will degrade in smarts. But it could also just mean that smarter people just turn over faster (thus also entering faster)."

Alternatively: higher IQ people can get the same amount of impact out of less reading-time on the site, and therefore do not need to spend as much time on the site

comment by Omegaile · 2014-01-19T18:31:47.021Z · LW(p) · GW(p)

Time on Less Wrong/IQ: -.164 (492)

Wait, this means that reading less wrong makes you dumber!

Hmmm, there was something about correlation and causation... but I don't remember it well. I must be spending too much time on less wrong.

comment by Vaniver · 2014-01-19T20:53:00.937Z · LW(p) · GW(p)

So it may be that older people just have lower IQ (Flynn effect).

The 1600 SAT was renormed in 1994, and scores afterwards are much higher than (and not directly comparable to) scores before. As well, depending on how the 'null' is interpreted, the youngest are unlikely to have a SAT score out of 1600, because it switched to 2400 in 2005. The line between having a score out of 1600 or not is probably at about 22 years old.

comment by Bayeslisk · 2014-01-19T04:06:21.285Z · LW(p) · GW(p)

I don't know if this is the LW hug or something but I'm having trouble downloading the xls. Also, will update with what the crap my passphrase actually means, because it's in Lojban and mildly entertaining IIRC.

EDIT: Felt like looking at some other entertaining passphrases. Included with comment.

sruta'ulor maftitnab {mine! scarf-fox magic-cakes!(probably that kind)}

Afgani-san Azerbai-chan {there... are no words}

DEFECTORS RULE

do mlatu {a fellow lojbanist!}

lalxu daplu {and another?}

telephone fonxa {and another! please get in contact with me. please.}

xagfu'a rodo {indeed! but where are all you people coming from, and why don't I know you?}

zifre dunda {OH COME ON WHERE ARE YOU PEOPLE}

eponymous haha_nice_try_CHEATER {clever.}

fart butt {I am twelve...}

FROGPENIS SPOOBOMB {... and so is a lot of LW.}

goat felching {good heavens}

I don't want the prize! Pick someone else please!

I dont care about the MONETARY REWARD but you shoudl know that

Irefuse myprize

No thanks

not interested

{a lot of refusers!}

I'm gay

john lampkin (note: this is not my name)

lookatme iwonmoney {nice try guy}

mencius suckedmoziwasbetter

mimsy borogoves {repeated!}

TWO WORD {repeated, and try harder next time}

octothorpe interrobang

SOYUZ NERUSHIMIY {ONWARD, COMRADE(note: person is apparently a social democrat.)}

TERRORISTS WIN

thisissuspiciouslylike askingforourpasswordmethodologies {I should think not.}

zoodlybop zimzamzoom {OH MY GODS BILL COSBY IS A LESSWRONGER.}

AND THAT'S ALL, FOLKS.

Replies from: SaidAchmiz, philh, sanxiyn
comment by Said Achmiz (SaidAchmiz) · 2014-01-19T06:30:48.375Z · LW(p) · GW(p)

SOYUZ NERUSHIMIY

Actual translation: INDESTRUCTIBLE UNION

(It's from the national anthem of the U.S.S.R.)

Replies from: Bayeslisk
comment by Bayeslisk · 2014-01-19T17:03:32.920Z · LW(p) · GW(p)

I know that. I was commenting that the LWer was apparently not a Communist as one might expect, which I found slightly funny.

comment by philh · 2014-01-19T11:50:13.768Z · LW(p) · GW(p)

The following passphrases were repeated (two occurrences each; the only entry that occurred more than twice was the blank one):

Bagel bites

EFFulgent shackles

Kissing bobbies

mimsy borogoves

SQUEAMISH OSSIFRAGE

If we go case-insensitive, there was also 'No thanks' and 'no thanks'; and 'TWO WORD' and 'Two Word'.

(The first three of those came next to each other, so they were probably just multiple entries.)

Replies from: FourFire, Bayeslisk
comment by FourFire · 2014-01-19T13:30:11.019Z · LW(p) · GW(p)

It is a datapoint that only one person apparently took up the offer of SQUEAMISH OSSIFRAGE

Replies from: Error, Bayeslisk
comment by Error · 2014-01-21T15:28:06.507Z · LW(p) · GW(p)

Possibly two; there's no guarantee the person who originally suggested it actually used it.

I, on the other hand, am one of those two. The humor appealed to me.

comment by Bayeslisk · 2014-01-19T17:02:27.583Z · LW(p) · GW(p)

I agree. This was clearly the object of furious guessing and second-guessing. :V

comment by Bayeslisk · 2014-01-19T17:02:42.661Z · LW(p) · GW(p)

Yes, and this was why I did not include them.

comment by sanxiyn · 2014-01-20T14:19:28.647Z · LW(p) · GW(p)

You missed lalxu daplu.

Replies from: Bayeslisk
comment by Bayeslisk · 2014-01-20T19:40:07.714Z · LW(p) · GW(p)

So I did! Edited.

comment by MTGandP · 2014-01-19T00:49:56.740Z · LW(p) · GW(p)

The links to the public data given at the end appear to be broken. They give internal links to Less Wrong instead of redirecting to Slate Star Codex. These links should work:

sav xls csv

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2014-01-19T01:52:36.889Z · LW(p) · GW(p)

Fixed.

comment by Scott Garrabrant · 2014-01-20T18:35:36.180Z · LW(p) · GW(p)

It looks like lots of people put themselves as atheist, but still answered the religion question as Unitarian Universalist, in spite of the fact that the question said to answer your religion only if you are theist.

I was looking forward to data on how many LW people are UU, but I have no way of predicting how many people followed the rules as written for the question, and how many people followed the rules as (I think they were) intended.

We should make sure to word that question differently next year, so that people who identify as atheist and religious know to answer the question.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-01-20T18:47:13.453Z · LW(p) · GW(p)

It looks like Judaism and Buddhism might have had a similar problem.

Replies from: hairyfigment
comment by hairyfigment · 2014-01-21T03:40:12.679Z · LW(p) · GW(p)

This is why (ISTR) I treated 'some religion is more or less right' as a broader category than theism.

comment by Kawoomba · 2014-01-20T10:56:21.116Z · LW(p) · GW(p)

The IQ numbers have time and time again answered every challenge raised against them and should be presumed accurate.

N.B.: Average IQ drops to 135 when only considering tests administered at an adult age -- those "IQ 172 at age 7" entries shouldn't be taken as authoritative for adult IQ.

comment by XiXiDu · 2014-01-19T13:33:48.161Z · LW(p) · GW(p)

Unfriendly AI: 233, 14.2%

Nanotech/grey goo: 57, 3.5%

Could someone who voted for unfriendly AI explain how nanotech or biotech isn't much more of a risk than unfriendly AI (I'll assume MIRI's definition here)?

I ask this question because it seems to me that even given a technological singularity there should be enough time for "unfriendly humans" to use precursors to fully fledged artificial general intelligence (e.g. advanced tool AI) in order to solve nanotechnology or advanced biotech. Technologies which themselves will enable unfriendly humans to cause a number of catastrophic risks (e.g. pandemics, nanotech wars, perfect global surveillance (an eternal tyranny) etc.).

Unfriendly AI, as imagined by MIRI, seems to be the end product of a developmental process that provides humans ample opportunity to wreak havoc.

I just don't see any good reason to believe that the tools and precursors to artificial general intelligence are not themselves disruptive technologies.

And in case you believe advanced nanotechnology to be infeasible, but unfriendly AI to be an existential risk, what concrete scenarios do you imagine on how such an AI could cause human extinction without nanotech?

Replies from: gjm, RobbBB, dspeyer, KnaveOfAllTrades, MugaSofer, Eugine_Nier
comment by gjm · 2014-01-19T14:07:29.342Z · LW(p) · GW(p)

Presumably many people fear a very rapid "hard takeoff" where the time from "interesting slightly-smarter-than-human AI experiment" to "full-blown technological singularity underway" is measured in days (or less) rather than months or years.

Replies from: XiXiDu
comment by XiXiDu · 2014-01-19T15:45:45.503Z · LW(p) · GW(p)

The AI risk scenario that Eliezer Yudkowsky relatively often uses is that of the AI solving the protein folding problem.

If you believe a "hard takeoff" to be probable, what reason is there to believe that the distance between a.) an AI capable of cracking that specific problem and b.) an AI triggering an intelligence explosion is too short for humans to do something similarly catastrophic as what the AI would have done with the resulting technological breakthrough?

In other words, does the protein folding problem require AI to reach a level of sophistication that would allow humans, or the AI itself, within days or months, to reach the stages where it undergoes an intelligence explosion? How so?

Replies from: NancyLebovitz, TheOtherDave, gjm
comment by NancyLebovitz · 2014-01-26T01:03:13.795Z · LW(p) · GW(p)

My assumption is that the protein-folding problem is unimaginably easier than an AI doing recursive self-improvement without breaking itself.

Admittedly, Eliezer is describing something harder than the usual interpretation of the protein-folding problem, but it still seems a lot less general than a program making itself more intelligent.

comment by TheOtherDave · 2014-01-19T16:55:43.530Z · LW(p) · GW(p)

Is this question equivalent to "Is the protein-folding problem equivalently hard to the build-a-smarter-intelligence-than-I-am problem?" ? It seems like it ought to be, but I'm genuinely unsure, as the wording of your question kind of confuses me.

If so, my answer would be that it depends on how intelligent I am, since I expect the second problem to get more difficult as I get more intelligent. If we're talking about the actual me... yeah, I don't have higher confidence either way.

Replies from: XiXiDu
comment by XiXiDu · 2014-01-19T18:17:46.283Z · LW(p) · GW(p)

Is this question equivalent to "Is the protein-folding problem equivalently hard to the build-a-smarter-intelligence-than-I-am problem?" ?

It is mostly equivalent. Is it easier to design an AI that can solve one specific hard problem than an AI that can solve all hard problems?

Expecting that only a fully-fledged artificial general intelligence is able to solve the protein-folding problem seems to be equivalent to believing the conjunction "a universal problem solver can solve the protein-folding problem" AND "a universal problem solver is easier to create than the protein-folding problem is to solve". Are there good reasons to believe this?

ETA: My perception is that people who believe unfriendly AI to come sooner than nanotechnology believe that it is easier to devise a computer algorithm to devise a computer algorithm to predict protein structures from their sequences rather than to directly devise a computer algorithm to predict protein structures from their sequences. This seems counter-intuitive.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-01-19T20:23:40.235Z · LW(p) · GW(p)

it is easier to devise a computer algorithm to devise a computer algorithm to predict protein structures from their sequences rather than to directly devise a computer algorithm to predict protein structures from their sequences. This seems counter-intuitive.

Ah, this helps, thanks.

For my own part, the idea that we might build tools better at algorithm-development than our own brains are doesn't seem counterintuitive at all... we build a lot of tools that are better than our own brains at a lot of things. Neither does it seem implausible that there exist problems that are solvable by algorithm-development, but whose solution requires algorithms that our brains aren't good enough algorithm-developers to develop algorithms to solve.

So it seems reasonable enough that there are problems which we'll solve faster by developing algorithm-developers to solve them for us, than by trying to solve the problem itself.

Whether protein-folding is one of those problems, I have absolutely no idea. But it sounds like your position isn't unique to protein-folding.

Replies from: XiXiDu
comment by XiXiDu · 2014-01-20T10:18:53.050Z · LW(p) · GW(p)

For my own part, the idea that we might build tools better at algorithm-development than our own brains are doesn't seem counterintuitive at all...

So you believe that many mathematical problems are too hard for humans to solve but that humans can solve all of mathematics?

I already asked Timothy Gowers a similar question and I really don't understand how people can believe this.

In order to create an artificial mathematician it is first necessary to discover, prove and encode the mathematics of discovering and proving non-arbitrary mathematics (i.e. to encode a formalization of the natural language goal “be as good as humans at mathematics”). This seems much more difficult than solving any single problem. And that's just mathematics...

Neither does it seem implausible that there exist problems that are solvable by algorithm-development, but whose solution requires algorithms that our brains aren't good enough algorithm-developers to develop algorithms to solve.

I do not disagree with this in theory. After all, evolution is an example of this. But it was not computationally simple for evolution to do so, and it did so by a bottom-up approach, piece by piece.

So it seems reasonable enough that there are problems which we'll solve faster by developing algorithm-developers to solve them for us, than by trying to solve the problem itself.

To paraphrase your sentence: It seems reasonable that we can design an algorithm that can design algorithms that we are unable to design.

This can only be true in the sense that this algorithm-design-algorithm would run faster on other computational substrates than human brains. I agree that this is possible. But are relevant algorithms in a class for which a speed advantage would be substantial?

Again, in theory, all of this is fine. But how do you know that general algorithm design can be captured by an algorithm that a.) is simpler than most specific algorithms b.) whose execution is faster than that of evolution c.) which can locate useful algorithms within the infinite space of programs and d.) that humans will discover this algorithm?

Some people here seem to be highly confident about this. How?

ETA: Maybe this post better highlights the problems I see.

Replies from: None, TheOtherDave
comment by [deleted] · 2014-01-21T18:41:41.735Z · LW(p) · GW(p)

I already asked Timothy Gowers a similar question and I really don't understand how people can believe this.

Why did you interview Gowers anyway? It's not like he has any domain knowledge in artificial intelligence.

Replies from: XiXiDu
comment by XiXiDu · 2014-01-21T19:35:27.386Z · LW(p) · GW(p)

Why did you interview Gowers anyway?

He works on automatic theorem proving. In addition, I was simply curious what a top-notch mathematician thinks about the whole subject.

comment by TheOtherDave · 2014-01-20T14:50:30.308Z · LW(p) · GW(p)

So you believe that many mathematical problems are too hard for humans to solve but that humans can solve all of mathematics?

All of mathematics? Dunno. I'm not even sure what that phrase refers to. But sure, there exist mathematical problems that humans can't solve unaided, but which can be solved by tools we create.

I really don't understand how people can believe this. In order to create an artificial mathematician it is first necessary to discover, prove and encode the mathematics of discovering and proving non-arbitrary mathematics (i.e. to encode a formalization of the natural language goal “be as good as humans at mathematics”). This seems much more difficult than solving any single problem.

In other words: you believe that if we take all possible mathematical problems and sort them by difficulty-to-humans, that one will turn out to be the most difficult?

I don't mean to put words in your mouth here, I just want to make sure I understood you.

If so... why do you believe that?

To paraphrase your sentence: It seems reasonable that we can design an algorithm that can design algorithms that we are unable to design.

Yes, that's a fair paraphrase.

This can only be true in the sense that this algorithm-design-algorithm would run faster on other computational substrates than human brains. I agree that this is possible. But are relevant algorithms in a class for which a speed advantage would be substantial?

Nah, I'm not talking about speed.

But how do you know that general algorithm design can be captured by an algorithm that a.) is simpler than most specific algorithms

Can you clarify what you mean by "simpler" here? If you mean in some objective sense, like how many bits would be required to specify it in a maximally compressed form or some such thing, I don't claim that. If you mean easier for humans to develop... well, of course I don't know that, but it seems more plausible to me than the idea that human brains happen to be the optimal machine for developing algorithms.

b.) whose execution is faster than that of evolution

We have thus far done pretty good at this; evolution is slow. I don't expect that to change.

c.) which can locate useful algorithms within the infinite space of programs

Well, this is part of the problem specification. A tool for generating useless algorithms would be much easier to build.

d.) that humans will discover this algorithm?

(shrug) Perhaps we won't. Perhaps we won't solve protein-folding, either.

Some people here seem to be highly confident about this. How?

Can you quantify "highly confident" here?

For example, what confidence do you consider appropriate for the idea that there exists at least one useful algorithm A, and at least one artificial algorithm-developer AD, such that it's easier for humans to develop AD than to develop A, and it's easier for AD to develop A than it is for humans to develop A?

Replies from: XiXiDu
comment by XiXiDu · 2014-01-20T16:35:38.575Z · LW(p) · GW(p)

In other words: you believe that if we take all possible mathematical problems and sort them by difficulty-to-humans, that one will turn out to be the most difficult?

If you want an artificial agent to solve problems for you then you need to somehow constrain it, since there are an infinite number of problems. In this sense it is easier to specify an AI to solve a single problem, such as the protein-folding problem, rather than all problems (whatever that means, supposedly "general intelligence").

The problem here is that goals and capabilities are not orthogonal. It is more difficult to design an AI that can play all possible games, and then tell it to play a certain game, than designing an AI to play a certain game in the first place.

Can you clarify what you mean by "simpler" here?

The information-theoretic complexity of the code of a general problem solver constrained to solve a specific problem should be larger than that of the constraint itself. I assume here that the constraint is most of the work in getting an algorithm to do useful work. Which I like to exemplify by the difference between playing chess and doing mathematics. Both are rigorously defined activities, one of which has a clear and simple terminal goal, the other being infinite and thus hard to constrain.

For example, what confidence do you consider appropriate for the idea that there exists at least one useful algorithm A, and at least one artificial algorithm-developer AD, such that it's easier for humans to develop AD than to develop A, and it's easier for AD to develop A than it is for humans to develop A?

The more general the artificial algorithm-developer is, the less confident I am that it is easier to create than the specific algorithm itself.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-01-20T20:48:02.769Z · LW(p) · GW(p)

I agree that specialized tools to perform particular tasks are easier to design than general-purpose tools. It follows that if I understand a problem well enough to know what tasks must be performed in order to solve that problem, it should be easier to solve that problem by designing specialized tools to perform those tasks, than by designing a general-purpose problem solver.

I agree that the complexity of a general problem solver should be larger than that of whatever constrains it to work on a specific task.

I agree that for a randomly selected algorithm A2, and a randomly selected artificial algorithm-developer AD2, the more general AD2 is the more likely it is that A2 is easier to develop than AD2.

Replies from: XiXiDu
comment by XiXiDu · 2014-01-21T10:02:38.881Z · LW(p) · GW(p)

I agree that the complexity of a general problem solver should be larger than that of whatever constrains it to work on a specific task.

What I meant is that if you have a very general and information theoretically simple problem solver, like evolution or AIXI, then in order to make it solve a specific problem you need a complex fitness function, or, in the case of AIXI, a substantial head start (the large multiplicative constant mentioned in Hutter's paper).

When producing e.g. a chair, an AI will have to either know the specifications of the chair (such as its size or the material it is supposed to be made of) or else know how to choose a specification from an otherwise infinite set of possible specifications. Given a poorly designed fitness function, or the inability to refine its fitness function, an AI will either (a) not know what to do or (b) not be able to converge on a good solution, if at all, given limited computational resources.

In a sense it is therefore true that a universal problem solver is easier to design than any specialized expert system. But only if you ignore the constraint it takes to "focus" the universal problem solver sufficiently in order to make it solve the right problem efficiently. Which means that the time to develop the universal problem solver plus the time it takes to constrain it might be longer than the time to develop the specialized solver, since constraining it means already knowing a lot about the problem in question. ETA: Or take science as another example. Once you have generated a hypothesis, and an experiment to test it, you have already done most of the work. What reason do I have to believe that this is not true for the protein folding problem?

Replies from: TheOtherDave, nshepperd
comment by TheOtherDave · 2014-01-21T15:27:33.379Z · LW(p) · GW(p)

if you have a very general and information theoretically simple problem solver, like evolution or AIXI, then in order to make it solve a specific problem you need a complex fitness function

I agree with this as well. That said, sometimes that fitness function is implicit in the real world, and need not be explicitly formalized by me.

Once you have generated a hypothesis, and an experiment to test it, you have already done most of the work. What reason do I have to believe that this is not true for the protein folding problem?

As I've said a couple of times now, I don't have a dog in the race wrt the protein folding problem, but your argument seems to apply equally well to all conceivable problems. That's why I asked a while back whether you think algorithm design is the single hardest problem for humans to solve. As I suggested then, I have no particular reason to think the protein-folding problem is harder (or easier) than the algorithm-design problem, but it seems really unlikely that no problem has this property.

Replies from: XiXiDu
comment by XiXiDu · 2014-01-21T18:19:20.160Z · LW(p) · GW(p)

That's why I asked a while back whether you think algorithm design is the single hardest problem for humans to solve.

The problem is that I don't know what you mean by "algorithm design". Once you solved "algorithm design", what do you expect to be able to do with it, and how?

Once you compute this "algorithm design"-algorithm, what will its behavior look like? Will it output all possible algorithms, or just the algorithms that you care about? If the latter, how does it know what algorithms you care about?

There is no brain area for "algorithm design". There is just this computational substrate that can learn, recognize patterns etc. and whose behavior is defined and constrained by its environmental circumstances.

Say you cloned Donald E. Knuth and made him grow up under completely different circumstances, e.g. as a member of some Amazonian tribe. Now this clone has the same algorithm-design potential, but he lacks the right input and constraints to output "The Art of Computer Programming".

What I want to highlight is that "algorithm design", or even "general intelligence", is not a sufficient feature in order to get "algorithm that predicts protein structures from their sequences".

Solving "algorithm design" or "general intelligence" does not give you some sort of oracle. In the same sense as an universal Turing machine does not give you "algorithm design" or "general intelligence". You have to program the Turing machine in order to compute "algorithm design" or "general intelligence". In the same sense you have to define what algorithm you want, respectively what problem you want to be solved, in order for your "algorithm design" or "general intelligence" to do what you want.

Just imagine having a human baby, the clone of a 250 IQ eugenics experiment, and ask it to solve protein folding for you. Well, it doesn't even speak English yet. Even though you have this superior general intelligence, it won't do what you want it to do without a lot of additional work. And even then it is not clear that it will have the motivation to do so.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-01-21T18:26:40.773Z · LW(p) · GW(p)

Tapping out now.

comment by nshepperd · 2014-01-21T13:12:32.030Z · LW(p) · GW(p)

You have a good point, in the case of trying to make a narrow intelligence to solve the protein folding problem. Yes, to make it spit out solutions to protein folding (even if given a "general" intelligence), you first must give it a detailed specification of the problem, which may take much work to derive in the first place.

But a solution to the protein folding problem is a means to an end. Generally, through the subgoal of being able to manipulate matter. To put it simply, the information complexity of the "practical facet" of the protein folding problem is actually not that high, because other, much more general problems ("the manipulating matter problem") point to it. An unfriendly AGI with general intelligence above a human's doesn't need us to do any work specifying the protein folding problem for them; they'll find it themselves in their search for solutions to "take over the world".

Conversely, while an AGI with a goal like rearranging all the matter in the world a particular way might happen to solve the protein folding problem in the process of its planning, such a machine does not qualify as a useful protein-folding-solver-bot for us humans. Firstly because there's no guarantee it will actually end up solving protein folding (maybe some other method of rearranging matter turns out to be more useful). Secondly because it doesn't necessarily care to solve the entire protein folding problem, just the special cases relevant to its goals. Thirdly because it has no interest in giving us the solutions.

That's why writing an AGI doesn't violate information theory by giving us a detailed specification of the protein folding problem for free.

Replies from: XiXiDu
comment by XiXiDu · 2014-01-21T15:38:11.684Z · LW(p) · GW(p)

An unfriendly AGI with general intelligence above a human's doesn't need us to do any work specifying the protein folding problem for them; they'll find it themselves in their search for solutions to "take over the world".

First of all, we have narrow AIs that do not exhibit Omohundro's “Basic AI Drives”. Secondly, everyone seems to agree that it should be possible to create general AI that does (a) not exhibit those drives or (b) only exhibit AI drives to a limited extent or (c) which focuses AI drives in a manner that agrees with human volition.

The question then - regarding whether a protein-folding solver will be invented before a general AI that solves the same problem for instrumental reasons - is about the algorithmic complexity of an AI whose terminal goal is protein-folding versus an AI that does exhibit the necessary drives in order to solve an equivalent problem for instrumental reasons.

The first sub-question here is whether the aforementioned drives are a feature or a side-effect of general AI. Whether those drives have to be an explicit feature of a general AI or if they are an implicit consequence. The belief around here seems to be the latter.

Given that the necessary drives are implicit, the second sub-question is then about the point at which mostly well-behaved (bounded) AI systems become motivated to act in unbounded and catastrophic ways.

My objections to Omohundro's “Basic AI Drives” are basically twofold: (a) I do not believe that AIs designed by humans will ever exhibit Omohundro's “Basic AI Drives” in an unbounded fashion and (b) I believe that AIs that do exhibit Omohundro's “Basic AI Drives” are either infeasible or require a huge number of constraints to work at all.

(a) The point of transition (step 4 below) between systems that do not exhibit Omohundro's “Basic AI Drives” and those that do is too vague to count as a non-negligible hypothesis:

(1) Present-day software is better than previous software generations at understanding and doing what humans mean.

(2) There will be future generations of software which will be better than the current generation at understanding and doing what humans mean.

(3) If there is better software, there will be even better software afterwards.

(4) Magic happens.

(5) Software will be superhuman good at understanding what humans mean but catastrophically worse than all previous generations at doing what humans mean.

(b) An AI that does exhibit Omohundro's “Basic AI Drives” would be paralyzed by infinite choice and low-probability hypotheses that imply vast amounts of expected utility.

There is an infinite choice of paperclip designs to choose from, and choosing a wrong design could have negative consequences that are in the range of -3^^^^3 utils.

Such an AI will not even be able to decide if trying to acquire unlimited computational resources was instrumentally rational, because without more resources it will be unable to decide if the actions that are required to acquire those resources might be instrumentally irrational from the perspective of what it is meant to do (the fact that any terminal goal can be realized in an infinite number of ways implies an infinite number of instrumental goals to choose from).

Another example is self-protection, which requires a definition of "self", or otherwise the AI risks destroying itself.

Replies from: nshepperd
comment by nshepperd · 2014-01-22T01:57:10.086Z · LW(p) · GW(p)

Well, I've argued with you about (a) in the past, and it didn't seem to go anywhere, so I won't repeat that.

With regards to (b), that sounds like a good list of problems we need to solve in order to obtain AGI. I'm sure someone somewhere is already working on them.

comment by gjm · 2014-01-19T17:13:47.731Z · LW(p) · GW(p)

I have no strong opinion on whether a "hard takeoff" is probable. (Because I haven't thought about it a lot, not because I think the evidence is exquisitely balanced.) I don't see any particular reason to think that protein folding is the only possible route to a "hard takeoff".

What is alleged to make for an intelligence explosion is having a somewhat-superhuman AI that's able to modify itself or make new AIs reasonably quickly. A solution to the protein folding problem might offer one way to make new AIs much more capable than oneself, I suppose, but it's hardly the only way one can envisage.

comment by Rob Bensinger (RobbBB) · 2014-01-20T11:24:42.087Z · LW(p) · GW(p)

If I understand Eliezer's view, it's that we can't be extremely confident of whether artificial superintelligence or perilously advanced nanotechnology will come first, but (a) there aren't many obvious research projects likely to improve our chances against grey goo, whereas (b) there are numerous obvious research projects likely to improve our chances against unFriendly AI, and (c) inventing Friendly AI would solve both the grey goo problem and the uFAI problem.

Cheer up, the main threat from nanotech may be from brute-forced AI going FOOM and killing everyone long before nanotech is sophisticated enough to reproduce in open-air environments.

The question is what to do about nanotech disaster. As near as I can figure out, the main path into [safety] would be a sufficiently fast upload of humans followed by running them at a high enough speed to solve FAI before everything goes blooey.

But that's already assuming pretty sophisticated nanotech. I'm not sure what to do about moderately strong nanotech. I've never really heard of anything good to do about nanotech. It's one reason I'm not sending attention there.

Replies from: Kawoomba
comment by Kawoomba · 2014-01-20T12:05:36.583Z · LW(p) · GW(p)

Considering ... please wait ... tttrrrrrr ... prima facie, Grey Goo scenarios may seem more likely simply because they make better "Great Filter" candidates; whereas a near-arbitrary Foomy would spread out in all directions at relativistic speeds, with self-replicators there is no overarching agenty will that would accelerate them out across space (the insulation layer with the sparse materials).

So if we approached x-risks through the prism of their consequences (extinction, hence no discernible aliens) and then reasoned our way back to our present predicament, we would note that within AI-power-hierarchies (AGI and up) there are few distinct long-term dan-ranks (most such ranks would only be intermediary steps while the AI falls "upwards"), whereas it is much more conceivable that there are self-replicators which can e.g. transform enough carbon into carbon copies (of themselves) to render a planet uninhabitable, but which lack the oomph (and the agency) to do the same to their light cone.

Then I thought that Grey Goo may yet be more of a setback, a restart, not the ultimate planetary tombstone. Once everything got transformed into resident von Neumann machines, evolution amongst those copies would probably occur at some point, until eventually there may be new macroorganisms organized from self-replicating building blocks, which may again show significant agency and turn their gaze towards the stars.

Then again (round and round it goes), Grey Goo would still remain the better transient Great Filter candidate (and thus more likely than uFAI when viewed through the Great Filter spectroscope), simply because of the time scales involved. Assuming the Great Filter is in fact an actual absence of highly evolved civilizations in our neighborhood (as opposed to just hiding or other shenanigans), Grey Goo biosphere-resets may stall the Kardashev climb sufficiently to explain us not having witnessed other civs yet. Also, Grey Goo transformations may burn up all the local negentropy (nanobots don't work for free), precluding future evolution.

Anyways, I agree that FAI would be the most realistic long-term guardian against accidental nanogoo (ironically, also uFAI).

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2014-01-20T23:47:13.266Z · LW(p) · GW(p)

My own suspicion is that the bulk of the Great Filter is behind us. We've awoken into a fairly old universe. (Young in terms of total lifespan, but old in terms of maximally life-sustaining years.) If intelligent agents evolve easily but die out fast, we should expect to see a young universe.

We can also consider the possibility of stronger anthropic effects. Suppose intelligent species always succeed in building AGIs that propagate outward at approximately the speed of light, converting all life-sustaining energy into objects or agents outside our anthropic reference class. Then any particular intelligent species Z will observe a Fermi paradox no matter how common or rare intelligent species are, because if any other high-technology species had arisen first in Z's past light cone it would have prevented the existence of anything Z-like. (However, species in this scenario will observe much younger universes the smaller a Past Filter there is.)

So grey goo creates an actual Future Filter by killing their creators, but hyper-efficient hungry AGI creates an anthropic illusion of a Future Filter by devouring everything in their observable universe except the creator species. (And possibly devouring the creator species too; that's unclear. Evolved alien values are less likely to eat the universe than artificial unFriendly-relative-to-alien-values values are, but perhaps not dramatically less likely; and unFriendly-relative-to-creator AI is almost certainly more common than Friendly-relative-to-creator AI.)

Once everything got transformed into resident von Neumann machines, evolution amongst those copies would probably occur at some point, until eventually there may be new macroorganisms organized from self-replicating building blocks, which may again show significant agency and turn their gaze towards the stars.

Probably won't happen before the heat death of the universe. The scariest thing about nanodevices is that they don't evolve. A universe ruled by nanodevices is plausibly even worse (relative to human values) than one ruled by uFAI like Clippy, because it's vastly less interesting.

(Not because paperclips are better than nanites, but because there's at least one sophisticated mind to be found.)

comment by dspeyer · 2014-01-20T05:03:30.239Z · LW(p) · GW(p)

Two reasons: uFAI is deadlier than nano/biotech and easier to cause by accident.

If you build an AGI and botch friendliness, the world is in big trouble. If you build a nanite and botch friendliness, you have a worthless nanite. If you botch growth-control, it's still probably not going to eat more than your lab before it runs into micronutrient deficiencies. And if you somehow do build grey goo, people have a chance to call ahead of it and somehow block its spread. What makes uFAI so dangerous is that it can outthink any responders. Grey goo doesn't do that.

Replies from: XiXiDu
comment by XiXiDu · 2014-01-20T09:37:30.878Z · LW(p) · GW(p)

This seems like a consistent answer to my original question. Thank you.

If you botch growth-control, it's still probably not going to eat more than your lab before it runs into micronutrient deficiencies.

You on the one hand believe that grey goo is not going to eat more than your lab before running out of steam and on the other hand believe that AI in conjunction with nanotechnology will not run out of steam, or only after humanity's demise.

And if you somehow do build grey goo, people have a chance to call ahead of it and somehow block its spread.

You further believe that AI can't be stopped but grey goo can.

Replies from: dspeyer
comment by dspeyer · 2014-01-23T01:05:02.115Z · LW(p) · GW(p)

Accidental grey goo is unlikely to get out of the lab. If I design a nanite to self-replicate and spread through a living brain to report useful data to me, and I have an integer overflow bug in the "stop reproducing" code so that it never stops, I will probably kill the patient but that's it. Because the nanites are probably using glucose+O2 as their energy source. I never bothered to design them for anything else. Similarly if I sent solar-powered nanites to clean up Chernobyl I probably never gave them copper-refining capability -- plenty of copper wiring to eat there -- but if I botch the growth code they'll still stop when there's no more pre-refined copper to eat. Designing truly dangerous grey goo is hard and would have to be a deliberate effort.

As for stopping grey goo, why not? There'll be something that destroys it. Extreme heat, maybe. And however fast it spreads, radio goes faster. So someone about to get eaten radios a far-off military base saying "help! grey goo!" and the bomber planes full of incendiaries come forth to meet it.

Contrast uFAI, which has thought of this before it surfaces, and has already radioed forged orders to take all the bomber planes apart for maintenance or something.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-01-23T02:20:04.313Z · LW(p) · GW(p)

Also, the larger the difference between the metabolisms of the nanites and the biosphere, the easier it is to find something toxic to one but not the other.

comment by KnaveOfAllTrades · 2014-01-19T14:15:31.007Z · LW(p) · GW(p)

I think a large part of that may simply be LW'ers being more familiar with UFAI and therefore knowing more details that make it seem like a credible threat / availability heuristic. So for example I would expect e.g. Eliezer's estimate of the gap between the two to be less than the LW average. (Edit: Actually, I don't mean that his estimate of the gap would be lower, but something more like it would seem like less of a non-question to him and he would take nanotech a lot more seriously, even if he did still come down firmly on the side of UFAI being a bigger concern.)

comment by MugaSofer · 2014-01-21T12:58:24.864Z · LW(p) · GW(p)

perfect global surveillance (an eternal tyranny)

Oooh, that would nicely solve the problem of the other impending apocalypses, wouldn't it?

comment by Eugine_Nier · 2014-01-20T03:48:15.296Z · LW(p) · GW(p)

How is grey goo realistically a threat, especially without a uFAI guiding it? Remember: grey goo has to out-compete the existing biosphere. This seems hard.

Replies from: Risto_Saarelma, Kawoomba, XiXiDu
comment by Risto_Saarelma · 2014-01-20T07:08:10.748Z · LW(p) · GW(p)

Gray goo designs don't need to be built up with miniscule steps, each of which makes evolutionary sense, like the evolved biosphere was. This might open up designs that are feasible to invent, very difficult to evolve naturally, and sufficiently different from anything in the natural biosphere to do serious damage even without a billion years of evolutionary optimization.

Replies from: None
comment by [deleted] · 2014-01-20T08:32:54.540Z · LW(p) · GW(p)

So far in the history of technology, deliberate design over a period of years has proven consistently less clever (in the sense of "efficiently capturing available mass-energy as living bodies") than evolution operating over aeons.

Replies from: Risto_Saarelma, Locaha, MugaSofer
comment by Risto_Saarelma · 2014-01-20T18:07:04.415Z · LW(p) · GW(p)

And so far the more clever biosphere design is getting its thermodynamical shit handed to it everywhere the hairless apes go and decide to start building and burning stuff.

If a wish to a genie went really wrong and switched the terminal goals of every human on earth into destroying the earth's biosphere in the most thorough and efficient way possible, the biosphere would be toast, much cleverer than the humans or not. If the wish gave you a billion AGI robots with that terminal goal, any humans getting in their way would be dead and the biosphere would be toast again. But if the robots were really small and maybe not that smart, then we'd be entirely okay, right?

Replies from: None
comment by [deleted] · 2014-01-20T22:02:10.799Z · LW(p) · GW(p)

Think about it: it's the intelligence that makes things dangerous. Try and engineer a nanoscale robot that's going to be able to unintelligently disassemble all living matter without getting eaten by a bacterium. Unintelligently, mind you: no invoking superintelligence as your fallback explanation.

Replies from: CCC, Risto_Saarelma
comment by CCC · 2014-01-21T09:52:30.351Z · LW(p) · GW(p)

Make it out of antimatter? Say, a nanoscale amount of anticarbon - just an unintelligent lump?

Dump enough of those on any (matter) biosphere and all the living matter will be very thoroughly disassembled.

Replies from: None
comment by [deleted] · 2014-01-21T12:30:14.030Z · LW(p) · GW(p)

That's not a nanoscale robot, is it? It's antimatter: it annihilates matter, because that's what physics says it does. You're walking around the problem I handed you and just solving the "destroy lots of stuff" problem. Yes, it's easy to destroy lots of stuff: we knew that already. And yet if I ask you to invent grey goo in specific, you don't seem able to come up with a feasible design.

Replies from: CCC
comment by CCC · 2014-01-21T18:09:52.844Z · LW(p) · GW(p)

How is it not a nanoscale robot? It is a nanoscale device that performs the assigned task. What does a robot have that the nanoscale anticarbon lump doesn't?

I admit that it's not the sort of thing one thinks of when one thinks of the word 'robot' (to be fair, though, what I think of when I think of the word 'robot' is not nanoscale either). But I have found that, often, a simple solution to a problem can be found by, as you put it, 'walking around' it to get to the desired outcome.

comment by Risto_Saarelma · 2014-01-21T03:32:14.490Z · LW(p) · GW(p)

Humans aren't superintelligent, and are still able to design macroscale technology that can wipe out biospheres and that can be deployed and propagated with less intelligence than it took to design. I'm not taking the bet that you can't shrink down the scale of the technology and the amount of intelligence needed to deploy it while keeping around the at least human level designer. That sounds too much like the "I can't think of a way to do this right now, so it's obviously impossible" play.

Replies from: michaelsullivan, None
comment by michaelsullivan · 2014-01-22T18:14:23.000Z · LW(p) · GW(p)

It seems that very few people considered the bad nanotech scenario obviously impossible, merely less likely to cause a near extinction event than uFAI.

comment by [deleted] · 2014-01-21T07:42:43.492Z · LW(p) · GW(p)

In addition, to my best knowledge, trained scientists believe it impossible to turn the sky green and have all humans sprout spider legs. Mostly, they believe these things are impossible because they're impossible, not because scientists merely lack the leap of superintelligence or superdetermination necessary to kick logic out and do the impossible.

Replies from: CCC
comment by CCC · 2014-01-21T09:49:54.240Z · LW(p) · GW(p)

If I wanted to turn the sky green for some reason (and had an infinite budget to work with), then one way to do it would be to release a fine, translucent green powder in the upper atmosphere in large quantities. (This might cause problems when it began to drift down far enough that it can be breathed in, of course). Alternatively, I could encase the planet Earth in a solid shell of green glass.

Replies from: None
comment by [deleted] · 2014-01-21T12:27:54.809Z · LW(p) · GW(p)

In which case you have merely placed reflective green material in the atmosphere. You have not actually turned the sky green.

Replies from: CCC
comment by CCC · 2014-01-21T18:05:38.017Z · LW(p) · GW(p)

Please explain, then, without using the word 'sky', what exactly you mean by "turning the sky green".

I had parsed that as "ensuring that a person, looking upwards during the daytime and not seeing an intervening obstacle (such as a ceiling, an aeroplane, or a cloud) would honestly identify the colour that he sees as 'green'." It is now evident that this is not what you had meant by the phrase.

Replies from: Jiro
comment by Jiro · 2014-01-21T18:37:37.933Z · LW(p) · GW(p)

That would depend on whether a green shell of glass or a green particle counts as an intervening obstacle.

Replies from: CCC
comment by CCC · 2014-01-21T18:55:27.768Z · LW(p) · GW(p)

Do you know, I hadn't even thought of that?

You are perfectly correct, and I thank you for raising the question.

Replies from: Fermatastheorem
comment by Fermatastheorem · 2014-01-22T06:33:27.535Z · LW(p) · GW(p)

The only reason I see blue when I look up during the daytime at something higher than a ceiling, an airplane, or a cloud, is because the atmosphere is composed of reflective blue material (air) intervening between me and the darkness of space. I would still like an explanation from the great-great-grandparent as to what constitutes 'turning the sky green'.

comment by Locaha · 2014-01-20T09:06:59.950Z · LW(p) · GW(p)

I'll have to disagree here. Evolution operating over aeons never got to jet engines and nuclear weapons. Maybe it needs more time?

Replies from: None, CCC
comment by [deleted] · 2014-01-20T16:20:35.045Z · LW(p) · GW(p)

Category error: neither jet engines nor nuclear weapons capture available/free mass-energy as living (ie: self-reproducing) bodies. Evolution never got to those because it simply doesn't care about them: nuclear bombs can't have grandchildren.

Replies from: Locaha
comment by Locaha · 2014-01-20T17:07:20.093Z · LW(p) · GW(p)

You can use both jet engines and nuclear weapons to increase your relative fitness.

There are no living nuclear reactors, either, despite the vast potential of energy.

Replies from: Nornagest, None
comment by Nornagest · 2014-01-20T22:44:26.674Z · LW(p) · GW(p)

There are organisms that use gamma radiation as an energy source. If we lived in an environment richer in naturally occurring radioisotopes, I think I'd expect to see more of this sort of thing -- maybe not up to the point of criticality, but maybe so.

Not much point in speculating, really; living on a planet that's better than four billion years old and of middling metallicity puts something of a damper on the basic biological potential of that pathway.

Replies from: Locaha
comment by Locaha · 2014-01-21T07:14:52.607Z · LW(p) · GW(p)

Not much point in speculating, really; living on a planet that's better than four billion years old and of middling metallicity puts something of a damper on the basic biological potential of that pathway.

And yet humanity did it, on a much smaller time scale. This is what I'm saying, we are better than evolution at some stuff.

comment by [deleted] · 2014-01-20T22:03:09.230Z · LW(p) · GW(p)

You can use both jet engines and nuclear weapons to increase your relative fitness.

Which living beings created by evolution have done -- also known as us!

Replies from: Locaha
comment by Locaha · 2014-01-21T07:16:39.033Z · LW(p) · GW(p)

This would be stretching the definition of evolution beyond its breaking point.

comment by CCC · 2014-01-21T08:33:23.055Z · LW(p) · GW(p)

Evolution has got as far as basic jet engines; see the octopus for an example.

Interestingly, this page provides some relevant data; it seems that a squid's jet is significantly less energy-efficient than a fish's tail for propulsion. That is perhaps why we see so little jet propulsion in the oceans...

comment by MugaSofer · 2014-01-21T12:51:22.350Z · LW(p) · GW(p)

So far in the history of technology, deliberate design over a period of years has proven consistently less clever (in the sense of "efficiently capturing available mass-energy as living bodies")

... because we don't know how to build "living bodies". That's a rather unfair comparison, regardless of whether your point is valid.

Although, of course, we built factory farms for that exact purpose, which are indeed more efficient at that task.

And there's genetic engineering, which can leapfrog over millions of years of evolution by nicking (simple, at our current tech level) adaptations from other organisms - whereas evolution would have to recreate them from scratch. I reflexively avoid anti-GM stuff due to overexposure when I was younger, but I wouldn't be surprised if a GM organism could outcompete a wild one, were a mad scientist to choose that as a goal rather than a disaster to be elaborately defended against. (Herbicide-resistant plants, for a start.)

So I suppose it isn't even very good at biasing the results, since it can still fail - depending, of course, on how true of a Scotsman you are, because those do take advantage of preexisting adaptations - and artificially induced ones, in the case of farm animals.

(Should this matter? Discuss.)

comment by Kawoomba · 2014-01-20T09:57:52.605Z · LW(p) · GW(p)

grey goo has to out-compete the existing biosphere. This seems hard.

Really? Von Neumann machines (the universal assembler self-replicating variety, not the computer architecture) versus regular ol' mitosis, and you think mitosis would win out?

I've only ever heard "building self-replicating machinery on a nano-scale is really hard" as the main argument against the immediacy of that particular x-risk, never "even if there were self-replicators on a nano-scale, they would have a hard time out-competing the existing biosphere". Can you elaborate?

Replies from: Vaniver
comment by Vaniver · 2014-01-20T18:16:01.060Z · LW(p) · GW(p)

As one of my physics professors put it, "We already have grey goo. They're called bacteria."

The intuition behind the grey goo risk appears to be "as soon as someone makes a machine that can make itself, the world is a huge lump of matter and energy just waiting to be converted into copies of that machine." That is, of course, not true - matter and energy are prized and fought over, and any new contender is going to have to join the fight.

That's not to say it's impossible for an artificial self-replicating nanobot to beat the self-replicating nanobots which have evolved naturally, just that it's hard. For example, it's not clear to me what part of "regular ol' mitosis" you think is regular, and easy to improve upon. Is it that the second copy is built internally, protecting it from attack and corruption?

Replies from: Kawoomba
comment by Kawoomba · 2014-01-20T18:50:39.471Z · LW(p) · GW(p)

Bacteria et al. are only the locally optimal solution after a long series of selection steps, each of which generally needed to be an improvement upon the previous step, i.e. the result of a greedy algorithm. There are few problems in which you'd expect a greedy algorithm to end up anywhere but in a very local optimum:

DNA is a hilariously inefficient way of storing partly superfluous data (all of which must undergo each mitosis), informational density could be an order/orders of magnitude higher with minor modifications, and the safety redundancies are precarious at best, compared to e.g. Hamming code. A few researchers in a poorly funded government lab can come up with deadlier viruses in a few years (remember the recent controversy) than what nature engineered in millennia. That's not to say that compared to our current macroscopic technology the informational feats of biological data transmission, duplication etc. aren't impressive, but that's only because we've not yet achieved molecular manufacturing (a necessity for a Grey Goo scenario). (We could go into more details on gross biological inefficiencies if you'd like.)

Would you expect some antibodies and phagocytosis to defeat an intelligently engineered self-replicating nanobot the size of a virus (but which doesn't depend on live cells, and without the telltale flaws and tradeoffs of the Pandemic-reminiscent "can't kill the host cell too quickly" variety, etc.)?

To me it seems like saying "if you drowned the world in acid, the biosphere could well win the fight in a semi-recognizable form and claim the negentropy for itself" (yes, cells can survive in extremely adverse environments and persist in some sort of niche, but I wouldn't exactly call such a pseudo-equilibrium winning, and self-replicators wouldn't exactly wait for their carbon food source to evolutionarily adapt).

Replies from: Vaniver, Eugine_Nier
comment by Vaniver · 2014-01-20T19:36:13.184Z · LW(p) · GW(p)

A few researchers in a poorly funded government lab can come up with deadlier viruses in a few years (remember the recent controversy) than what nature engineered in millennia.

Killing one human is easier than converting the entire biosphere.

Would you expect some antibodies and phagocytosis to defeat an intelligently engineered self-replicating nanobot the size of a virus (but which doesn't depend on live cells and without the telltale flaws and tradeoffs of Pandemic-reminiscent"can't kill the host cell too quickly" etc.)?

Well, that depends on what I think the engineering constraints are. It could be that in order to be the size of a virus, self-assembly has to be outsourced. It could be that in order to be resistant to phagocytosis, it needs exotic materials which limit its growth rate and maximal growth.

To me it seems like saying "if you drowned the world in acid, the biosphere could well win the fight in a semi-recognizable form and claim the negentropy for themselves"

It's more "in order to drown the world in acid, you need to generate a lot of acid, and that's actually pretty hard."

comment by Eugine_Nier · 2014-01-21T03:52:25.776Z · LW(p) · GW(p)

A few researchers in a poorly funded government lab can come up with deadlier viruses in a few years (remember the recent controversy) than what nature engineered in millennia.

Yes, and you may have noticed that bioengineered pandemic was voted top threat.

comment by XiXiDu · 2014-01-20T09:33:02.450Z · LW(p) · GW(p)

How is grey goo realistically a threat, especially without a uFAI guiding it?

Is grey goo the only extinction type scenario possible if humans solve advanced nanotechnology? And do you really need an AI whose distance from an intelligence explosion is under 5 years in order to guide something like grey goo?

But yes, this is an answer to my original question. Thanks.

comment by Vaniver · 2014-01-19T05:07:37.389Z · LW(p) · GW(p)

So, I was going through the xls, and saw the "passphrase" column. "Wait, what? Won't the winner's passphrase be in here?"


Not sure if this is typos or hitting the wrong entry field, but two talented individuals managed to get 1750 and 2190 out of 1600 on the SAT.


I was curious about the breakdown of romance (whether or not you met your partner through LW) and sexuality. For "men" and "women," I just used sex- any blanks or others are excluded. Numbers are Yes/No/I didn't meet them through community but they're part of the community now:

Gay men: 2/36/3

Lesbian women: 0/2/0

Bi men: 4/111/9

Bi women: 12/32/7

Straight men: 29/1031/26

Straight women: 1/55/10

I'm not quite sure how seriously to take these numbers, though. If 29 straight guys found a partner through the LW community, and a total of 14 straight and bi women found partners through the community, men would need to be about twice as likely to take the survey as women. (Possible, especially if women are more likely to go to meetups and less likely to post, but I don't feel like looking that up for the group as a whole.)

But the results are clear: the yes/no ratio was way higher for bi women than anyone else. Bi women still win the yes+didn't/no ratio with .6, but straight women are next with .2, followed by gay men at .14 and bi men at .12.

So, uh, advertise LW to all the bi women you know?
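
(If you want to check the arithmetic above: a minimal Python sketch that reproduces the ratios from the Yes/No/"didn't meet through community" counts listed in this comment.)

# (yes, no, met-elsewhere-but-now-in-community) counts from the breakdown above
counts = {
    "gay men": (2, 36, 3),
    "lesbian women": (0, 2, 0),
    "bi men": (4, 111, 9),
    "bi women": (12, 32, 7),
    "straight men": (29, 1031, 26),
    "straight women": (1, 55, 10),
}

for group, (yes, no, partial) in counts.items():
    # "yes + didn't meet through community" over "no", as in the comparison above
    ratio = (yes + partial) / no if no else float("nan")
    print(f"{group}: {ratio:.2f}")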

Replies from: Nornagest, somervta
comment by Nornagest · 2014-01-19T05:44:22.543Z · LW(p) · GW(p)

I'm not quite sure how seriously to take these numbers, though. If 29 straight guys found a partner through the LW community, and a total of 14 straight and bi women found partners through the community, men would need to be about twice as likely to take the survey as women.

That seems fairly plausible to me, actually. My impression of the community is that the physical side of it is less gender-skewed than the online side, although both are mostly male.

There's also polyamory to take into account.

Replies from: Vaniver
comment by Vaniver · 2014-01-19T06:20:48.166Z · LW(p) · GW(p)

There's also polyamory to take into account.

True; didn't think to check that. Probably explains some of the effect.

comment by somervta · 2014-01-19T05:14:01.787Z · LW(p) · GW(p)

So, I was going through the xls, and saw the "passphrase" column. "Wait, what? Won't the winner's passphrase be in here?"

In a manner of speaking: eponymous hahanicetry_CHEATER

Replies from: Vaniver
comment by Vaniver · 2014-01-19T06:17:28.034Z · LW(p) · GW(p)

I know, that's why I mentioned it- I decided not to quote it to leave it as a surprise for people who decided to then go check. But I had missed that someone else posted it.

Replies from: Omegaile
comment by Omegaile · 2014-01-20T03:17:03.346Z · LW(p) · GW(p)

You know, it would be interesting if Yvain had put something else there just to see how many people would try to cheat.

comment by [deleted] · 2014-01-22T07:03:05.084Z · LW(p) · GW(p)

I've just noticed there was no Myers-Briggs question this year. Why?

comment by Xodarap · 2014-01-19T12:58:23.980Z · LW(p) · GW(p)

I found that 51% of effective altruists had given blood compared to 47% of others - a difference which did not reach statistical significance.

I gave blood before I was an EA but stopped because I didn't think it was effective. Does being veg*n correlate with calling oneself an EA? That seems like a more effective intervention.

Replies from: owencb, David_Gerard
comment by owencb · 2014-01-20T09:50:59.584Z · LW(p) · GW(p)

The question does ask whether people have ever given blood, though. You could consider people only among a sufficiently old cohort (so that they would have had a chance to give blood before they would likely have identified as EA), and see if there's any correlation.

comment by David_Gerard · 2014-01-19T13:06:13.171Z · LW(p) · GW(p)

The term refers to a specific subculture that calls itself "Effective Altruism".

Replies from: Xodarap
comment by Xodarap · 2014-01-19T23:10:42.994Z · LW(p) · GW(p)

I'm sorry, I'm not sure what you're saying? I'm aware of what "EA" stands for, if that's the confusion.

comment by Username · 2014-02-04T20:00:41.153Z · LW(p) · GW(p)

Some unique passphrases that weren't so unique (I removed the duplicates from people who took the survey twice). You won't want to reuse your passphrase for next year's survey!

  • animatronic animorphs / animatronic tapeworm
  • Apple pie / Apple Sauce
  • bunny nani / Bunny Mathematics
  • Dog Puck / dog vacuum
  • duck soup / Duck pie
  • Falling Anecdote / falling waffles
  • flightless shadow / Flightless Hitchhiker
  • fraggle frelling / fraggle poyoyonrock Huh?
  • google calico / google translate
  • green a@a@a2A3 / green prevybozbycbex
  • hello hello / hello lamppost
  • hen hao / hen piaoliang There were a couple pinyin passphrases.
  • infinite jest / infinite cohomology
  • john lampkin / John Brown
  • less wrong / less right
  • Meine Güte / Meine Kindergeburtstagsfeier
  • misty may / misty moop
  • Modest Mouse / Modest Brand
  • not rhinocerous / not interested
  • point conception / Point Break
  • rose Hulman / Rose Lac
  • SQUEAMISH OSSIFRAGE / SQUEAMISH OSSIFRAGE This is the one passphrase that was exactly the same. You did a good job of being memorable but poor job of being random.
  • Swedish Spitfire / Swedish Berries
  • Toad Man / Toad Hall
  • TWO WORD / Two Word
  • Unicorn Flask / UNICORN STARTUP
  • wingardium leviathan / wingardium avocado
  • yellow jacket / Yellow dart
Replies from: Pfft
comment by Pfft · 2014-02-23T02:32:16.824Z · LW(p) · GW(p)

I guess this is not a problem though: when the first word is announced two people will reply, but only one of them has the right answer. So the prize still goes to the right person.

comment by linkhyrule5 · 2014-01-23T08:52:51.668Z · LW(p) · GW(p)

Were there enough CFAR workshoppers to check CFAR attendance against calibration?

Replies from: chkno
comment by Mati_Roy (MathieuRoy) · 2014-01-22T18:22:11.397Z · LW(p) · GW(p)

P(Aliens in observable universe): 74.3 + 32.7 (60, 90, 99) [n = 1496]
P(Aliens in Milky Way): 44.9 + 38.2 (5, 40, 85) [n = 1482]

There are (very probably around) 1.7x10^11 galaxies in the observable universe. So I don't understand how P(Aliens in Milky Way) can be so close to P(Aliens in observable universe). If P(Aliens in an average galaxy) = 0.0000000001, P(Aliens in observable universe) should be around 1-(1-0.0000000001)^(1.7x10^11)=0.9999999586. I know there are other factors that influence these numbers, but still: even if there's only a very slight chance of aliens in the Milky Way, then P(Aliens in observable universe) should be almost certain. There are possible rational justifications for the results of this survey, but I think (0.95) most people were victims of a cognitive bias. Scope insensitivity, maybe, because 1.7*10^11 galaxies is too big to imagine? What do you think?
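
(A quick numerical check of the exponentiation above, as a minimal Python sketch; the per-galaxy probability and the galaxy count are just the illustrative figures from this comment, and independence across galaxies is assumed.)

import math

p_per_galaxy = 1e-10   # illustrative P(Aliens in an average galaxy)
n_galaxies = 1.7e11    # approximate number of galaxies in the observable universe

# 1 - (1 - p)^N, computed via log1p/expm1 to avoid rounding problems with tiny p
p_universe = -math.expm1(n_galaxies * math.log1p(-p_per_galaxy))
print(p_universe)      # ~0.9999999586, the figure quoted above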

Tendency to cooperate on the prisoner's dilemma was most highly correlated with items in the general leftist political cluster.

I wonder how many people cooperated only (or in part) because they knew the results would be correlated with their (political) views, and they wanted their "tribe"/community/group/etc. to look good. Maybe next year we could say that this result won't be correlated with the others? Then if fewer people cooperate, it will indicate that maybe some people cooperate to make their 'group' look good. But if these people know that I/we want to compare next year's results with this year's in order to verify this hypothesis, they will continue to cooperate. To avoid most of this, we should compare only the people who will have filled out the survey for the first time next year. What do you think?

I ended up deleting 40 answers that suggested there were less than ten million or more than eight billion Europeans, on the grounds that people probably weren't really that far off so it was probably some kind of data entry error, and correcting everyone who entered a reasonable answer in individuals to answer in millions as the question asked.

I think you shouldn't have corrected anything. When I assign a probability to the correctness of my answer, I included a percentage for having misread the question or made a data entry error.

This year's results suggest that was no fluke and that we haven't even learned to overcome the one bias that we can measure super-well and which is most easily trained away. Disappointment!

Would some people be interested in answering 10 such questions and giving their confidence about their answers every month? That would provide better statistics and a way to see if we're improving.

Replies from: cousin_it, Vaniver, gwern, michaelsullivan, Lblack, ikajaste
comment by cousin_it · 2014-01-22T19:24:25.786Z · LW(p) · GW(p)

If P(Aliens in an average galaxy) = 0.0000000001, P(Aliens in observable universe) should be around 1-(1-0.0000000001)^(1.7x10^11)=0.9999999586.

Only if our uncertainties about the different galaxies are independent, and don't depend on a common uncertainty about the laws of nature or something. It's true that P2>P1, but they can be made arbitrarily close, I think.

Replies from: MathieuRoy
comment by Mati_Roy (MathieuRoy) · 2014-01-22T19:53:53.532Z · LW(p) · GW(p)

I agree. But I don't think they can be that strongly dependent (not even close). How could they be?

Replies from: Wes_W, private_messaging
comment by Wes_W · 2014-01-22T20:26:00.822Z · LW(p) · GW(p)

One way would be for most of the expectation of aliens to come from expectation that the Fermi Paradox is somehow illusionary. There are probably other ways, but I can't think of any at the moment.

Toy example:
Suppose that your credence in "aliens in an average galaxy" is split across 2 distinct hypotheses:
A. Life is very common across the universe, but for some reason we can't detect it. (with confidence 10^-4)
B. Life is not common, but any given galaxy has a 10^-16 chance to develop life.
Total confidence that alien life exists in any given galaxy: ~10^-4.

So your confidence in "aliens exist in the observable universe" is likewise split:
A. Life is very common across the universe, but for some reason we can't detect it. (with confidence 10^-4)
B. Life is not common, but 1.7*10^11 galaxies means a chance of 1-(1-10^-16)^(1.7*10^11) = ~10^-5
Total confidence that life exists in the observable universe: ~10^-4.
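
(A minimal Python sketch of this toy mixture, using the made-up numbers above and assuming independence across galaxies within hypothesis B.)

import math

p_A = 1e-4           # credence in A: life is common but somehow undetectable
p_galaxy_B = 1e-16   # per-galaxy chance of life under hypothesis B
n_galaxies = 1.7e11

# Credence that aliens exist in any given galaxy: A makes it ~certain, B adds ~1e-16
p_single_galaxy = p_A * 1.0 + (1 - p_A) * p_galaxy_B

# Credence that aliens exist somewhere in the observable universe
p_universe_given_B = -math.expm1(n_galaxies * math.log1p(-p_galaxy_B))  # ~1.7e-5
p_universe = p_A * 1.0 + (1 - p_A) * p_universe_given_B

print(p_single_galaxy, p_universe)  # both come out around 1e-4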

Replies from: MathieuRoy
comment by Mati_Roy (MathieuRoy) · 2014-01-26T16:27:40.862Z · LW(p) · GW(p)

EDIT 3: I retract the following paragraph because I now understand what Wes_W wrote.

I know, that's why I said "There are possible rational justifications". I mean, your reasoning makes sense mathematically. But why would your distribution be two deltas at 10^-4 and 10^-16 and not more continuous? It's not a rhetorical question, I want to know the answer (if there is one), but I don't see how it could be that way. Do you think you are rationalizing your answer? (Again, it's not a rhetorical question.)

EDIT: After reading other comments, I think another way a discontinuity might be justified is like this: going faster than light speed is either possible or not.

A. if it is, then if there's a sufficiently advanced civilisation (anywhere in the Observable Universe) it would probably be able to colonize most of the(ir) observable universe (so the probability that there are aliens in the Milky Way is similar to that for the Observable Universe).

B. if it isn't, then the probability that there are aliens in the Milky Way is a lot lower than in the Observable Universe.

EDIT 2: Can you think of other reasons for the discontinuity? With what probability do you think the speed of light is the maximum speed one can transfer information/energy?

Replies from: Wes_W, Eugine_Nier
comment by Wes_W · 2014-01-26T17:11:28.512Z · LW(p) · GW(p)

I don't think I'm rationalizing an answer; I'm not even presenting an answer. I meant only to present a (very simplified) example of how such a conclusion might arise.

I'm totally willing to chalk the survey results up to scale insensitivity, but such results aren't necessarily nonsensical. It could just mean somebody started with "what credence do I assign that aliens exist and the Fermi Paradox is/isn't an illusion" and worked backwards from there, rather than pulling a number out of thin air for "chance of life developing in a single galaxy" and then exponentiating.

Since the latter method gives sharply differing results depending on whether you make up a probability a few orders of magnitude above or below 10^-11, I'm not sure working backwards is even a worse idea. At least working backwards won't give one 99.99999% credence in something merely because their brain is bad at intuitively telling apart 10^-8 and 10^-14.

Edit: I think some degree of dichotomy is plausible here. A lot of intermediate estimates are ruled out by us not seeing aliens everywhere.

Replies from: MathieuRoy
comment by Mati_Roy (MathieuRoy) · 2014-01-26T21:09:23.977Z · LW(p) · GW(p)

Sorry, I misunderstood. (Oops.) I agree (see my edits in the previous comment). A justified dichotomy is more probable than I initially thought, and probably fewer people fell prey to scale insensitivity than I initially thought.

comment by Eugine_Nier · 2014-01-26T21:08:02.747Z · LW(p) · GW(p)

But why would your distribution be two deltas at 10^-4 and 10^-16 and not more continuous?

Because it's a toy example and it's easier to work out the math this way. You can get similar results with more continuous distributions, the math is simply more complicated.

Replies from: MathieuRoy
comment by Mati_Roy (MathieuRoy) · 2014-01-26T21:13:25.180Z · LW(p) · GW(p)

Ok right. I agree.

comment by private_messaging · 2014-01-22T20:08:11.119Z · LW(p) · GW(p)

There are two sorts of uncertainty here. The more physical kind: the probability that life arises, intelligence evolves, etc.

And there's the "our uncertainty" kind of probability - we don't know what it takes for life to evolve - and this is common to all galaxies.

comment by Vaniver · 2014-01-22T18:43:21.297Z · LW(p) · GW(p)

Would some people be interested in answering 10 such questions and giving their confidence about their answers every month? That would provide better statistics and a way to see if we're improving.

There's both PredictionBook and the Good Judgment Project as venues for this sort of thing.

Replies from: MathieuRoy
comment by Mati_Roy (MathieuRoy) · 2014-01-26T17:06:45.857Z · LW(p) · GW(p)

Thank you.

EDIT: I just made my first (meta)prediction which is that I'm 50% sure that "I will make good predictions in 2014. (ie. 40 to 60% of my predictions with an estimate between 40 and 60% will be true.)"

comment by gwern · 2014-01-22T18:49:18.412Z · LW(p) · GW(p)

There are (very probably around) 1.7x10^11 galaxies in the observable universe. So I don't understand how P(Aliens in Milky Way) can be so close to P(Aliens in observable universe). If P(Aliens in an average galaxy) = 0.0000000001, P(Aliens in observable universe) should be around 1-(1-0.0000000001)^(1.7x10^11)=0.9999999586.

Perhaps this is explainable with reference to why the Great Silence / Fermi paradox is so compelling? That even with very low rates of expansion, the universe should be colonized by now if an advanced alien civilization had arisen at any point in the past billion years or so. Hence, if there's aliens anywhere, then they should well have a presence here too.

Replies from: elharo
comment by elharo · 2014-01-22T23:02:21.676Z · LW(p) · GW(p)

Intergalactic travel is much harder than intragalactic. It's conceivable that even civilizations that colonize their galaxy might not make it further.

Replies from: Lumifer
comment by Lumifer · 2014-01-23T00:50:43.729Z · LW(p) · GW(p)

Intergalactic travel is much harder than intragalactic.

Why would you think so?

If the speed of light is the limit, both are impractical. If it is not, I don't see why you assume that physical distance matters at all.

Replies from: Wes_W
comment by Wes_W · 2014-01-23T05:23:02.242Z · LW(p) · GW(p)

Both are wildly impractical (at least, by modern-human-technology standards), but intergalactic is several orders of magnitude more so. The speed of light really isn't much of an obstacle within a single galaxy; travel at .01c or less is plenty to populate every solar system in "only" a few million years.

Replies from: elharo
comment by elharo · 2014-01-23T11:35:40.947Z · LW(p) · GW(p)

It's believable that a technologically advanced society can cross a galaxy by star hopping and colonization of successive planets, maybe even without generation ships or cryopreservation. E.g. after taking into account relativistic effects, constant acceleration/deceleration at 1g gets us from Earth to Alpha Centauri and back well within a human lifetime. But you can't star hop between galaxies. There's nowhere to pick up supplies aside from maybe hydrogen and helium. Even at full lightspeed you need ships that are capable of running for 100,000 years to reach even the nearest galaxy. Is it feasible to build a fire-and-forget colony ship that could survive 10E5 years in space and arrive in working shape? Maybe if you did it with some really robust panspermia or something, and were willing to lose 99% of the ships you sent out. I.e. just maybe you could transmit biology, but I very much doubt intergalactic civilization is feasible.
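
(For anyone who wants to check the 1g claim: a minimal Python sketch using the standard relativistic-rocket formulas; the 4.37 light-year distance to Alpha Centauri is an assumed figure.)

import math

C = 1.0     # speed of light, in light-years per year
A = 1.032   # 1g expressed in light-years per year^2 (about 9.81 m/s^2)
d = 4.37    # Earth to Alpha Centauri, in light-years (assumed)

# Accelerate at 1g to the midpoint, then decelerate at 1g to arrive at rest
ship_years_one_way = 2 * (C / A) * math.acosh(1 + A * d / (2 * C**2))
earth_years_one_way = 2 * (C / A) * math.sqrt((1 + A * d / (2 * C**2))**2 - 1)

print(ship_years_one_way)   # ~3.6 years of shipboard time each way
print(earth_years_one_way)  # ~6.0 years of Earth-frame time each way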

Replies from: Locaha, Eugine_Nier
comment by Locaha · 2014-01-23T13:16:47.231Z · LW(p) · GW(p)

Your assumption holds if constant acceleration/deceleration at 1g is vastly easier to achieve than generation ships or cryopreservation. If you assume the opposite, then you suddenly can colonize the entire universe, only very-very slowly. :-)

Replies from: elharo
comment by elharo · 2014-01-23T23:44:58.155Z · LW(p) · GW(p)

No, not really. Even if generation ships or cryopreservation are easier to achieve than 1g over intragalactic distances, it still doesn't seem likely that it's possible to make them work over the 100,000 lightyears minimum between galaxies. To plausibly ship living beings between galaxies you either have to invent science fictional fantasies like Niven's stasis fields or figure out how to send a lot of seeds very cheaply and accept that you'll lose pretty much all of them. I'm not sure even that's possible.

Replies from: Locaha, VAuroch
comment by Locaha · 2014-01-24T08:15:30.556Z · LW(p) · GW(p)

Even if generation ships or cryopreservation are easier to achieve than 1g over intragalactic distances, it still doesn't seem likely that it's possible to make them work over the 100,000 lightyears minimum between galaxies.

To me it seems likely that if you can cryopreserve someone for 1000 years, you can cryopreserve someone more or less indefinitely.

This discussion is pointless. What seems likely to me or you now has no connection to actual likelihood of the technology.

Replies from: elharo
comment by elharo · 2014-01-25T15:40:45.179Z · LW(p) · GW(p)

Entropy is a thing. Keeping a machine running for 10 years without regular maintenance is challenging. 100 years is very hard but within the realm of feasibility. 1000 years might be doable with advanced enough self-repairing technology and access to sufficient fuel. 100,000 years? There's no way any moving part of any kind is going to keep going for that long. Maybe if you can figure out a way to eliminate all moving parts of any kind; but even then I suspect random radiation and micrometeorites might erode any ship beyond hope of recovery. Perhaps there's little enough of that in the intergalactic void that intergalactic travel is possible, but I wouldn't rate it as likely.

comment by VAuroch · 2014-01-24T01:07:50.135Z · LW(p) · GW(p)

To grab another idea from Niven (specifically the Puppeteers), gravity manipulation to get a small traveling solar system would probably work, though it would take an enormous amount of time. I'm not an astrophysicist, but you could get solar wind to keep protecting you from small stray objects and presumably could watch the path ahead to protect yourself from other collisions.

comment by Eugine_Nier · 2014-01-25T03:25:51.065Z · LW(p) · GW(p)

Even at full lightspeed you need ships that are capable of running for 100,000 years to reach even the nearest galaxy.

100,000 years from the perspective of outside observers, the amount of subjective time can be made arbitrarily small.

Replies from: elharo
comment by elharo · 2014-01-25T15:42:37.881Z · LW(p) · GW(p)

Yes, but the closer you get to lightspeed the bigger problem you have with any collision with any small particle.

comment by michaelsullivan · 2014-01-25T03:38:32.223Z · LW(p) · GW(p)

On Milky Way vs. observable universe, I would expect a very high correlation between the results for different galaxies. So simple multiplication is misleading.

That said, even with a very high correlation, anything over 1% for the Milky Way should get you to 99%+ for the universe.

I admit that I did not seriously consider the number of galaxies in the universe, or realize off the cuff that it was that high and give that enough consideration. I estimated a fairly high number for the Milky Way but gave only 95% to the universe, which was clearly a mistake.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-01-25T20:20:36.781Z · LW(p) · GW(p)

That said, even with a very high correlation, anything over 1% for the Milky Way should get you to 99%+ for the universe.

Not necessarily; that depends on the nature of your uncertainty, as Wes_W pointed out elsewhere in the thread.

comment by Lucius Bushnaq (Lblack) · 2014-01-28T20:41:45.538Z · LW(p) · GW(p)

I remember my thought process going something like this:
P(Aliens in Milky Way) ~ 0.75
P(Aliens) ~ 1.00
P(Answer pulled from anus on basis of half-remembered internet facts is remotely correct) ~ 0.8

So:
P(Aliens) × P(Anus) ~ 0.8
P(Milky aliens) × P(Anus) ~ 0.6

Replies from: army1987
comment by A1987dM (army1987) · 2014-01-29T17:12:16.003Z · LW(p) · GW(p)

It should have been P(Milky aliens) × P(Anus) + P(!Milky aliens) × P(!Anus) = 0.6 + 0.05.
comment by ikajaste · 2014-01-27T08:44:58.055Z · LW(p) · GW(p)

I wonder how many people cooperated only (or in part) because they knew the results would be correlated with their (political) views, and they wanted their "tribe"/community/group/etc. to look good.

I don't think the responses of people here would be so much affected by directly wanting to present their own social group as good. However, a (false) correlation between those two could happen just because of framing by other questions.

E.g. the answer to the prisoner's dilemma question might be affected by whether you've just answered "I'm associated with the political left" or whether you've just answered "I consider rational calculations to be the best way to solve issues".

If that is the effect causing a false correlation, then adding the statement "these won't be correlated" wouldn't do any good - in fact, it would only serve as a further activation for the person to enter the political-association frame.

This is a common problem with surveys that isn't very easy to mitigate. Individually randomizing question order and analyzing differences in correlations based on presented question order helps a bit, but the problem still remains, and the sample size for any such difference-in-correlation analysis becomes increasingly small.

comment by Brillyant · 2014-01-20T22:17:10.432Z · LW(p) · GW(p)

Things that stuck out to me:

HPMOR: Yes, all of it: 912, 55.7%
REFERRAL TYPE: Referred by HPMOR: 400, 24.4%

EY's Harry Potter fanfic is more popular around here than I'd thought.

PHYSICAL INTERACTION WITH LW COMMUNITY:
Yes, all the time: 94, 5.7%
Yes, sometimes: 179, 10.9%

CFAR WORKSHOP ATTENDANCE:
Yes, a full workshop: 105, 6.4%
A class but not a full-day workshop: 40, 2.4%

LESS WRONG USE:
Poster (Discussion, not Main): 221, 12.9%
Poster (Main): 103, 6.3%

~6% at the maximum "buy-in" levels on these 3 items. My guess is they are all made up of a similar group of people?

I'd be curious to know, of the 6.3% who have published articles in Main (and, to a lesser extent, of the 12.9% who have published in Discussion), how many unique users there are.

Replies from: None, dspeyer
comment by [deleted] · 2014-01-21T23:23:54.266Z · LW(p) · GW(p)

EY's Harry Potter fanfic is more popular around here than I'd thought.

Haven't you seen all those sprawling HPMOR discussion threads, usually with >500 comments?

Replies from: Brillyant
comment by Brillyant · 2014-01-22T15:16:29.583Z · LW(p) · GW(p)

I hadn't paid attention, no.

It was the ~25% referral rate that was pretty shocking to me. And 55% of LWers have read all of it?! Wow.

Replies from: taryneast
comment by taryneast · 2014-02-09T05:34:32.753Z · LW(p) · GW(p)

I use it as a tool to encourage others to join. It's very good for that.

I tell people that if they get to the end of HP:MOR and want more MOR, then they should come try out LW.

comment by dspeyer · 2014-01-23T01:39:21.709Z · LW(p) · GW(p)
$ cat Desktop/lwpublic2013.csv |wc -l
1481
$ cat Desktop/lwpublic2013.csv | grep "Yes all the time" | wc -l
85
$ cat Desktop/lwpublic2013.csv | grep "Yes I have been to a full (3+ day) workshop" | wc -l
91
$ cat Desktop/lwpublic2013.csv | grep "Yes I have been to a full (3+ day) workshop" | grep "Yes all the time" |wc -l
37

The statistically expected number would be 5, so that's a strong correlation (p<10^-15), but I wouldn't call it "one group of people".

I couldn't find LessWrong Use in the csv data.
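
(A rough sanity check of the expected-overlap figure and the significance claim; a Python sketch assuming the counts above (the 1481 line count may include a header row) and using Fisher's exact test as one reasonable choice of test.)

from scipy.stats import fisher_exact

n_total = 1481    # lines in the public csv (possibly including a header row)
n_meetup = 85     # "Yes all the time"
n_workshop = 91   # full (3+ day) CFAR workshop
n_both = 37

# Expected overlap if the two were independent: ~5, as stated above
print(n_meetup * n_workshop / n_total)

# 2x2 table: rows = full workshop yes/no, columns = meetups all the time yes/no
table = [
    [n_both, n_workshop - n_both],
    [n_meetup - n_both, n_total - n_workshop - (n_meetup - n_both)],
]
print(fisher_exact(table))  # the p-value is vanishingly small, consistent with p < 10^-15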

comment by William_Quixote · 2014-01-19T12:47:34.340Z · LW(p) · GW(p)

GLOBAL CATASTROPHIC RISK:
Pandemic (bioengineered): 374, 22.8%
Environmental collapse including global warming: 251, 15.3%
Unfriendly AI: 233, 14.2%
Nuclear war: 210, 12.8%
Pandemic (natural): 145, 8.8%
Economic/political collapse: 175, 10.7%
Asteroid strike: 65, 3.9%
Nanotech/grey goo: 57, 3.5%
Didn't answer: 99, 6.0%

For the second year in a row Pandemic is the leading cat risk. If you include natural and designed it has twice the support of the next highest cat risk.

Replies from: fubarobfusco, FiftyTwo
comment by fubarobfusco · 2014-01-19T18:00:36.758Z · LW(p) · GW(p)

For the second year in a row Pandemic is the leading cat risk.

That's because cats never build research stations.

comment by FiftyTwo · 2014-01-24T15:19:23.784Z · LW(p) · GW(p)

That surprised me slightly, more because I'm not particularly aware of discussion of bioengineered pandemics as an existential risk than because I don't think it's plausible. I suppose this means a lot of people are worried about it but not discussing it?

comment by gjm · 2014-01-19T10:57:03.175Z · LW(p) · GW(p)

The correlations with number of partners seem like they confound two very different questions: "in a relationship or not?" and "poly or not, and if so how poly?". This makes correlations with things like IQ and age less interesting. It seems like it would be more informative to look at the variables "n >= 1" and "value of n, conditional on n >= 1".

(Too lazy to redo those analyses myself right now, and probably ever. Sorry. If someone else does I'll be interested in the results, though.)

comment by mgin · 2014-01-22T14:07:18.207Z · LW(p) · GW(p)

I find it odd that 66.2% of LWers are "liberal" or "socialist" but only 13.8% of LWers consider themselves affiliated with the Democrat party. Can anybody explain this?

Replies from: nshepperd, army1987, drethelin, None, taryneast
comment by nshepperd · 2014-01-22T14:12:31.460Z · LW(p) · GW(p)

First reason: by European standards, I imagine the Democrat party is still quite conservative. Median voter theorem and all that. Second reason: "affiliated" probably implies more endorsement than "it's not quite as bad as the other party". It could also be both of these together.

comment by A1987dM (army1987) · 2014-01-23T10:27:56.882Z · LW(p) · GW(p)

I'd interpret “affiliated” as ‘card-carrying’. If anything, it surprises me as high, but ISTR that in the US you need to be a registered member of a party to vote for their primaries, which would explain that.

Replies from: Nornagest
comment by Nornagest · 2014-02-09T05:54:35.424Z · LW(p) · GW(p)

I'd interpret “affiliated” as ‘card-carrying’.

It's probably meant to be interpreted as "registered". In the US, registering for a political party has significance beyond signaling affiliation, so it's fairly common: it allows you, in most states, to vote in your party's primary election (which determines the candidates sent by that party to the general election, which everyone can vote in). A few states choose their candidates with party caucuses, though, and California at one point allowed open primaries, though there were some questions about the constitutionality of that move and I don't remember how they were resolved.

Roughly two-thirds of Americans are registered with one of the two major parties.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-02-17T04:57:20.220Z · LW(p) · GW(p)

Roughly two-thirds of Americans are registered with one of the two major parties.

Do you have a source for that, or is this the same statistic you quoted from wikipedia about "identification"?

I think only half of eligible voters are even registered to vote, but I'd expect almost all registered voters to register in a party. Young people, like LW users, are less likely to be registered.

Replies from: Nornagest
comment by Nornagest · 2014-02-17T05:15:30.253Z · LW(p) · GW(p)

I honestly don't remember, but I was probably trying to point toward the Wikipedia stats, in which case I shouldn't have used "registered". A quick search for registration percentages turns up this, which cites slightly under 60% registration in the most recent election (it's been going slowly down over time; was apparently just over 70% in the late Sixties). I haven't been able to turn up party-specific registration figures; I suspect but cannot prove that you're underestimating the number of Americans registered as independent.

comment by drethelin · 2014-01-22T18:39:03.983Z · LW(p) · GW(p)

The democrat party is only socialist in the republican party's eyes.

comment by [deleted] · 2014-03-06T22:10:59.555Z · LW(p) · GW(p)

I was wondering about this word "liberal" -- when Will Wilkinson says he's a liberal, that means something entirely different from what you're describing. So, is it possible we have many right liberals here?

comment by taryneast · 2014-02-09T05:31:07.370Z · LW(p) · GW(p)

As somebody who most definitely identified as liberal, but did not affiliate with the Democrats:

Your question reveals a hidden assumption:

There is no "Democrat party" in (almost) every other country in the world apart from yours* ;)

*(I am assuming you come from the USA based on this underlying assumption)

Replies from: Nornagest
comment by Nornagest · 2014-02-09T05:52:16.447Z · LW(p) · GW(p)

This is easily tested by comparing against the country of origin question. As it turns out, a bit over half of LW comes from the US. Wikipedia claims that about 33% of Americans identify as Democrats (vs. 28% Republican and 38% other or independent), so we'd expect about 17.5% of LW to identify as Democratic if the base rate applied, up to 35% if every American LWer identifying as liberal or socialist also identified as Democratic.

Bearing this in mind, it seems that party members identified as such really are underrepresented here.
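
(A back-of-the-envelope version of that comparison, as a Python sketch; it assumes the liberal/socialist share is the same among US respondents as in the survey overall, which is why the upper bound comes out a little above the "up to 35%" quoted above.)

lw_us_share = 0.547        # fraction of survey respondents from the United States
us_democrat_share = 0.33   # Americans identifying as Democrats (the Wikipedia figure cited above)
lw_left_share = 0.662      # respondents identifying as liberal or socialist
lw_democrat_share = 0.138  # respondents reporting affiliation with the Democratic Party

expected_if_base_rate = lw_us_share * us_democrat_share  # ~0.18
upper_bound = lw_us_share * lw_left_share                # ~0.36, assuming the left share is the same among US respondents

print(expected_if_base_rate, upper_bound, lw_democrat_share)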

Replies from: taryneast
comment by taryneast · 2014-02-10T08:53:21.083Z · LW(p) · GW(p)

Cool stuff. Thanks for going and checking against the numbers :)

comment by Pablo (Pablo_Stafforini) · 2014-01-19T09:58:42.281Z · LW(p) · GW(p)

I would like to see how percent of positive karma, rather than total karma, correlates with the other survey responses. I find the former a more informative measure than the latter.

Replies from: gjm
comment by gjm · 2014-01-19T10:52:03.496Z · LW(p) · GW(p)

I agree that it would be interesting but I suspect that just as "total karma" is a combination of "comment quality" and "time on LW" (where for most purposes the former is more interesting, but the latter makes a big difference), so "percent positive karma" is a combination of "comment quality" and "what sort of discussions one frequents", where again the former is more interesting but the latter makes a big difference.

comment by Zvi · 2019-06-23T18:44:56.762Z · LW(p) · GW(p)

Noting that this was suggested to me by the algorithm, and presumably shouldn't be eligible for that.

Replies from: habryka4
comment by habryka (habryka4) · 2019-06-24T07:27:23.506Z · LW(p) · GW(p)

Indeed, moved to meta (which prevents things from showing up in recommendations).

comment by ChristianKl · 2014-01-19T22:40:02.319Z · LW(p) · GW(p)

What's the best way to import the data into R without having to run as.numeric(as.character(...)) on all the numeric variables like the probabilities?

comment by shokwave · 2014-01-19T10:18:05.453Z · LW(p) · GW(p)

P(Supernatural): 7.7 + 22 (0E-9, .000055, 1) [n = 1484]

P(God): 9.1 + 22.9 (0E-11, .01, 3) [n = 1490]

P(Religion): 5.6 + 19.6 (0E-11, 0E-11, .5) [n = 1497]

I'm extremely surprised and confused. Is there an explanation for how these probabilities are so high?

Replies from: gjm, christopherj
comment by gjm · 2014-01-19T10:50:15.193Z · LW(p) · GW(p)

Well, we apparently have 3.9% of "committed theists", 3.2% of "lukewarm theists", and 2.2% of "deists, pantheists, etc.". If these groups put Pr(God) at 90%, 60%, 40% respectively (these numbers are derived from a sophisticated scientific process of rectal extraction) then they contribute 6.3% of the overall Pr(God) requiring an average Pr(God) of about 3.1% from the rest of the LW population. If enough respondents defined "God" broadly enough, that doesn't seem altogether crazy.

If those groups put Pr(religion) at 90%, 30%, 10% then they contribute about 4.7% to the overall Pr(religion) suggesting ~1% for the rest of the population. Again, that doesn't seem crazy.
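Spelling the arithmetic out (group shares from the survey, per-group probabilities being the guesses above):

```r
shares  <- c(committed = 0.039, lukewarm = 0.032, deist = 0.022)  # survey percentages
pr_god  <- c(0.90, 0.60, 0.40)                                    # guessed Pr(God) per group
contrib <- sum(shares * pr_god)            # ~0.063, i.e. 6.3 points of the 9.1% mean
(0.091 - contrib) / (1 - sum(shares))      # ~0.031 implied average for everyone else
```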

So the real question is more or less equivalent to: How come there are so many committed theists on LW? Which we can frame two ways: (1) How come LW isn't more effective in helping people recognize that their religion is wrong? or (2) How come LW isn't more effective in driving religious people away? To which I would say (1) recognizing that your religion is wrong is really hard and (2) I hope LW is very ineffective in driving religious people away.

(For those who expect meta-level opinions on these topics to be perturbed by object-level opinions and wish to discount or adjust: I am an atheist; I don't remember what probabilities I gave but they would be smaller than any I have mentioned above.)

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2014-01-20T01:06:09.041Z · LW(p) · GW(p)

When it comes to a hypothesis as extreme as 'an irreducible/magical mind like the one described in various religions created our universe', I'd say that if 3% credence isn't crazy, 9% isn't either. I took shokwave to be implying that a reasonable probability would be orders of magnitude smaller, not 2/3 smaller.

Replies from: gjm
comment by gjm · 2014-01-20T01:36:45.062Z · LW(p) · GW(p)

The reason why I think ~3% for some kind of God and ~1% for some kind of religion aren't crazy numbers (although, I repeat, my own estimates of the probabilities are much lower) is that there is a credible argument to be made that if something is seriously believed by a large number of very clever and well informed people then you shouldn't assign it a very low probability. I don't think this argument is actually correct, but it's got some plausibility to it and I've seen versions of it taken very seriously by big-name LW participants. Accordingly, I think it would be unsurprising and not-crazy if, say, 10% of LW allowed a 10% probability for God's existence on the basis that maybe something like 10% of (e.g.) first-rate scientists or philosophers believe in God.

Replies from: Jiro
comment by Jiro · 2014-01-20T15:35:35.238Z · LW(p) · GW(p)

Personally I would discount "believed by a large number of clever people" if there are memetic effects. There are traits of beliefs that are well-known to increase the number of believers for reasons unrelated to their truth. For any belief that has such traits, whether it's shooting unbelievers, teaching them to your children before they reach the age when they are likely to think rationally, or sending out missionaries, the large number of people who believe it is not much use in assessing its truth.

I would also discount anything which fits into certain patterns known to take advantage of flaws in human thought processes, particularly conspiracy theories.

Replies from: simplicio, gjm
comment by simplicio · 2014-01-20T16:04:57.729Z · LW(p) · GW(p)

There are just too many ways to fool oneself here. I could talk for quite a while about "memetic effects" that make e.g. atheism appeal to (a certain group of) people independent of its truth. Typically one only notices these "memetic effects" in ideas one already disagrees with.

I think for standard outside view reasons, it's better to have an exceptionless norm that anything believed by billions of people is worth taking seriously for at least 5 minutes.

Replies from: Jiro
comment by Jiro · 2014-01-20T20:00:04.475Z · LW(p) · GW(p)

I think that it's fairly obvious that there wouldn't be even the relatively small percentage of seriously Christian scientists there are today if it had not been for centuries of proselytization, conversion by the sword, teaching Christianity to children from when they could talk, crusades, etc. I think it's also fairly obvious that this is not true of the percentage of scientists who are atheists. I also think it's obvious that it's not true for the percentage of scientists who think that, for instance, there are an infinite number of twin primes.

Typically one only notices these "memetic effects" in ideas one already disagrees with.

Really? I haven't heard anyone say "nobody would think there are infinitely many twin primes if they hadn't been taught that as a 4 year old and forced to verbally affirm the infinity of twin primes every Sunday for the next few decades". It just is not something that is said, or can sensibly be said, for any idea that one disagrees with.

Replies from: simplicio
comment by simplicio · 2014-01-20T23:29:14.492Z · LW(p) · GW(p)

Your choice of twin primes as an example is kind of odd; implicitly, we are discussing the cluster of ideas that are controversial in some ideological sense.

To be clear, I agree that ideas often spread for reasons other than their truth. I agree that because of this, if you are careful, you can use the history of religion as ancillary evidence against theism.

But in general, you have to be really, really careful not to use "memetic effects" as just another excuse to stop listening to people (LessWrong's main danger is that it is full of such excuses). Sometimes true ideas spread for bad reasons. Sometimes what looks like a bad reason appears so because of your own ideology.

I'm not saying become a theist, or read huge treatises on theology. I'm saying give theism the 5 minutes of serious consideration (e.g., listening to a smart proponent) owed a belief held by a very large fraction of the planet.

comment by gjm · 2014-01-20T16:36:58.683Z · LW(p) · GW(p)

This sort of thing is exactly why I don't think the argument in question is correct and why I'm comfortable with my own Pr(God) being orders of magnitude smaller than the fraction of theists in the educated population.

However, simplicio is right that by taking this sort of view one becomes more vulnerable to closed-mindedness. The price of being more confident when right is being less persuadable when wrong. I think simplicio's second paragraph has it pretty much exactly right: in cases where you're disagreeing starkly with a lot of smart people, don't adjust your probabilities, adjust your behaviour and give the improbable hypothesis more consideration and more time than your current estimate of its probability would justify on its own.

comment by christopherj · 2014-01-28T07:38:37.946Z · LW(p) · GW(p)

I'm extremely surprised and confused. Is there an explanation for how these probabilities [P(Supernatural), P(God), P(Religion)] are so high?

Our universe came from somewhere. Can you be 100% sure that no intelligence was involved? If there was an intelligence involved, it would probably qualify as supernatural and god, even if it was something technically mundane (such as the author of the simulation we call reality, or an intelligent race that created our universe or tweaked the result, possibly as an attempt to escape the heat death of their universe). E.g. if you ask our community, "What are the odds that in the next million years humans will be able to create whole world simulations?" I suspect they'll answer "very high".

For extra fun, you can wonder if the total number of simulated humans is expected to outnumber the total number of real humans.

comment by ChrisHallquist · 2014-01-19T05:24:32.164Z · LW(p) · GW(p)

I wonder if I can claim credit for either of the Freethought Blogs referrals.

(I'm an ex-FTBer. I think Zinnia Jones is the only other current or former FTBer involved in LessWrong.)

Replies from: palladias, RobbBB
comment by palladias · 2014-01-19T20:31:33.535Z · LW(p) · GW(p)

Totally caper every year when I see what my referral numbers are.

comment by Rob Bensinger (RobbBB) · 2014-01-19T06:03:04.091Z · LW(p) · GW(p)

Could be. Looking at the data, a person who's been here for 3 years wrote in 'Freethought Blogs', one who's been here for 1.5 years wrote in 'either FreethoughtBlogs or Atheist Ethicist', and one who's been here 0 years wrote in 'Brute Reason on Freethought Blogs'.

There have been more recent LW-relevant FtB posts by Richard Carrier, Kate Donovan, and Miri Mogilevsky of the aforementioned Brute Reason. Miri and Kate also have permanent blogroll links to LW.

comment by ancientcampus · 2014-10-31T02:54:57.299Z · LW(p) · GW(p)

Some things that took me by surprise:

People here are more favorable toward abortion than feminism. I always thought of the former as secondary to the latter, though I suppose the "favorable" phrasing makes the survey sensitive to opinion of the term itself.

Mean SAT (out of 1600) is 1474? Really, people? 1410 is the 96th percentile, and that only marks our bottom quartile. I guess the only people who remembered their scores were those who were proud of them. (And I know this is right along with the IQ discussion)

Replies from: Vaniver
comment by Vaniver · 2014-10-31T20:49:17.353Z · LW(p) · GW(p)

Mean SAT (out of 1600) is 1474? Really, people? 1410 is the 96th percentile, and that only marks our bottom quartile. I guess the only people who remembered their scores were those who were proud of them.

This would imply that LW is about as selective as a top university (like Harvey Mudd). That doesn't seem that implausible to me - but I definitely agree that we should expect the true mean to be lower than the self-reported mean (both because of inflated memories and selective memories).

comment by Elund · 2014-10-25T04:18:11.440Z · LW(p) · GW(p)

It looks like you created the 2014 survey before I got around to posting my comment for this one. Oh well. Hopefully you will still find my comment useful. :)


Some answer choices from the survey weren't included in the results, without any explanation as to why. Does that mean no one selected them? If so, I suggest editing the post to make that clear.

I noticed that 13.6% of respondents chose not to answer the "vegetarian" question. I think it would have helped if you provided additional choices for "vegan" and "pescatarian".

Finally, at the end of the survey I had a question offering respondents a chance to cooperate (raising the value of a potential monetary prize to be given out by raffle to a random respondent) or defect (decreasing the value of the prize, but increasing their own chance of winning the raffle). 73% of effective altruists cooperated compared to 70% of others - an insignificant difference.

I have some doubts as to how good of a gauge this question is for altruism. People may choose to defect if they have immediate pressing needs for money, if they think their charity is superior to what most other people would have chosen, or if they don't see a net altruistic benefit in taking more money away from the prize-giver just to give it to a randomly selected survey-taker. I suppose if they bothered to think through it carefully they might have reasoned that all else being equal you'd prefer them to cooperate, which is why you're willing to give them more money for it. However, it could have also been that you saw the promise of extra money as a necessary sacrifice in order to set up the dilemma properly, but secretly wished for most people to defect. (Which one was it, by the way, if you don't mind me asking? :P)

I don't know for sure that Mensa is on the level, so I tried again deleting everyone who took a Mensa test - leaving just the people who could name-drop a well-known test or who knew it was administered by a psychologist in an official setting. This caused a precipitous drop all the way down to 138.

I think I know why removing the Mensa tests from the IQ results brought down the average. It's not because the Mensa test is unreliable, but because the people who bothered to take it are likely to have relatively higher IQs, in which case it would make sense to remove them from the sample to remove the bias.

People who spend more time on Less Wrong have lower IQs.

My guess is that lower IQ people may spend more time on LW because they derive more benefit from reading posts about rationality. Perhaps higher-IQ people are more likely to efficiently limit their time on LW to reading only the top-rated interesting-looking posts and the top-rated comments.

Height is, bizarrely, correlated with belief in the supernatural and global catastrophic risk.

Your data actually showed that height is anti-correlated with belief in the supernatural, unless that minus sign wasn't supposed to be there.

Thanks for posting these surveys and survey results, by the way. They are very fascinating. :)

comment by polymathwannabe · 2014-01-28T18:27:24.593Z · LW(p) · GW(p)

You mention a "very confused secular humanist." What other answers did that person provide that mark him/her/zer as confused?

Replies from: ChristianKl
comment by ChristianKl · 2014-01-31T13:26:04.491Z · LW(p) · GW(p)

People were supposed to fill out the religion field if they are theists. If a secular humanist filled out that field, it suggests that he's confused.

Replies from: polymathwannabe
comment by polymathwannabe · 2014-01-31T14:29:47.564Z · LW(p) · GW(p)

That dichotomy leaves no space for non-theistic religions. What if a secular humanist sympathizes with Taoism or Buddhism?

Replies from: Jayson_Virissimo, ChristianKl
comment by Jayson_Virissimo · 2014-01-31T15:09:18.163Z · LW(p) · GW(p)

That dichotomy leaves no space for non-theistic religions. What if a secular humanist sympathizes with Taoism or Buddhism?

Or non-religious theists, for that matter.

comment by ChristianKl · 2014-01-31T14:47:33.301Z · LW(p) · GW(p)

In that case he would have put Taoism or Buddhism into the box instead of secular humanist. But you are right that the question is framed in a way that discourages non-theistic religions from being reported.

comment by jobe_smith · 2014-01-22T16:27:20.619Z · LW(p) · GW(p)

I don't understand how P(Simulation) can be so much higher than P(God) and P(Supernatural). Seems to me that "the stuff going on outside the simulation" would have to be supernatural by definition. The beings that created the simulation would be supernatural intelligent entities who created the universe, aka gods. How do people justify giving lower probabilities for supernatural than for simulation?

Replies from: TheOtherDave, timujin
comment by TheOtherDave · 2014-01-22T17:59:29.478Z · LW(p) · GW(p)

At least part of it is that a commonly endorsed local definition of "supernatural" would not necessarily include the beings who created a simulation. Similarly, the definition of "god" around here is frequently tied to that definition of supernatural.

I am not defending those usages here, just observing that they exist.

comment by timujin · 2014-01-22T16:35:59.411Z · LW(p) · GW(p)

The word "supernatural" often means "something that is not describable by physics" (ugly definition, I know) or "mental phenomena that is not reducible to non-mental phenomena". Both definitions are such that it is hard to imagine a world in which there exists something they describe. "Simulation" is, on the other hand, at least imaginable.

Replies from: private_messaging
comment by private_messaging · 2014-01-22T16:47:28.441Z · LW(p) · GW(p)

A simulator permits interventions that do not follow from the laws of simulated physics and that arise outside the 'natural' from the point of view of the simulation, hence supernatural. Likewise, mental phenomena in a simulation may not be reducible to non-mental phenomena within the same simulation. A simulation postulates the existence of a specific type of higher domain, super-natural relative to our nature. And the creators of the simulation are a specific kind of gods.

I think it is a sort of conjunction fallacy, where very specific supernatural and theological beliefs are incorrectly deemed more probable than more general forms of such, because the specific beliefs come with an easy to imagine narrative while the general beliefs leave creation of such narrative as an exercise for the reader. When presented with an abstract, general concept, people are unable to enumerate and sum specific possibilities to assign it the probability consistent with the probabilities they give to individual specific possibilities.

edit: not that I think conjunction fallacy has much to do with conjunctions per se. E.g. if I ask you what is the probability that there is a coin in my pocket, or I ask you what is the probability that there is 1 eurocent coin from 2012 in my pocket, the probability that there is a coin as described may legitimately be higher conditional on me giving a more specific description of the coin.

Replies from: timujin
comment by timujin · 2014-01-22T17:03:57.461Z · LW(p) · GW(p)

Mostly, the "not describable by physics" part, as I and maybe many others see it, is a logical impossibility, because physics is what describles real things. Laws of simulated physics can be manipulated, but it will still be within the 'real' physics of the real reality. Thus, not supernatural. At least, not in the sense that I understood the question when I answered it.

As for

Likewise, mental phenomena in a simulation may not be reducible to non-mental phenomena within the same simulation.

Can you expand this one?

Replies from: private_messaging, TheAncientGeek
comment by private_messaging · 2014-01-22T18:06:20.115Z · LW(p) · GW(p)

Mostly, the "not describable by physics" part, as I and maybe many others see it, is a logical impossibility, because physics is what describles real things.

It's sort of like answering the question about multiverse based on the sophism that multiverse is logically impossible because "universe" is meant to include everything. Clever, but if you're seeing a "logical impossibility" you probably missed the point.

From wikipedia:

Physics (from Greek φυσική (ἐπιστήμη), i.e. "knowledge of nature", from φύσις, physis, i.e. "nature"[1][2][3][4][5]) is the natural science that involves the study of matter[6] and its motion through space and time, along with related concepts such as energy and force.[7] More broadly, it is the general analysis of nature, conducted in order to understand how the universe behaves.

In the context of the simulation, it may be impossible for the simulated beings to conduct any form of study of the laws of the parent universe. It definitely has been impossible for us so far, if we are in a simulation.

Furthermore, those who run the simulator can break the internally-deducible laws of physics - e.g. a donut can appear in front of you in the air in a way that is not even theoretically predictable through any studying of nature that you can possibly do. Thus, super-natural, not ever describable by physics as it is defined by dictionary.

Can you expand this one?

Bots in any videogame are not reducible to some contraptions built within the game. Most game worlds do not even allow contraptions complex enough to replicate some bot AI's behaviour.

As for reducibility in the superior universe, reducibility is sort of like Earth being on a turtle, which is standing on an elephant... eventually you will get down to something that's not reducible. In our universe, the low level objects that are not further reducible are rather simple (or so it seems), but that needs not be true of the parent universe. Needs not be false, either.

Replies from: timujin
comment by timujin · 2014-01-22T18:30:47.854Z · LW(p) · GW(p)

In the context of the simulation, it may be impossible for the simulated beings to conduct any form of study of the laws of the parent universe. It definitely has been impossible for us so far, if we are in a simulation.

Of course, when I say 'laws of physics', I don't mean 'human study of laws of physics'. I mean the real laws that govern stuff. Even if a donut appears in front of my face, that just means The Rules are not what humans know as 'physics', but arbitrary rules written by beings who are themselves governed by their own (this time really fundamental) physics.

Anyway, that's just arguing definitions. The original point goes like this:

You: Why do people assign higher probability to 'simulation' than to 'supernatural'?

Me: I don't know about other people, but I can say why did I do that, and suppose that I am not the only one. My line of thinking at that moment (sorta):

When I am asked to assign a probability to 'simulation', I imagine a world where 'simulation' is true (our universe is run on a computer, and I can anticipate stuff like donuts appearing in front of my face or Morpheus texting me about the white rabbit), then I imagine a world where it is not true (our laws of physics are true laws of physics, and I cannot ever anticipate any violations of them), occam-conjure priors for both, see what fits my experiences better, yada yada, and decide on the balance of probability between those two.

When I am asked to assign a probability to 'supernatural', I try to imagine the world in which it is true, which means that some stuff happens that the True Rules of the Universe and Everything say cannot happen. But if the stuff happens nevertheless, then they are not true. Smells like a logical contradiction, and I wholeheartedly assign it the same probability as I assign to 2+2=5, which, given the restrictions of the test, is equivalent to punching in 0.

So, even if the reasoning is not valid, or if the author of the question had something completely different in mind when he said 'supernatural', that's the explanation of why I, personally, assigned a higher probability to 'simulation' than to 'supernatural'. Hope this can give you a hint as to why the average lesswronger did so.

comment by TheAncientGeek · 2014-01-22T17:17:58.981Z · LW(p) · GW(p)

, the "not describable by physics"

Means tnot describable by the pseudo-phsyics within the simulation.

comment by Bendini (bendini) · 2014-01-20T21:19:33.944Z · LW(p) · GW(p)

The "did not answer" option seems to be distorting the perception of the results. Perhaps structuring the presentation of the data with those percentages removed would be more straightforward to visualise.

PRIMARY LANGUAGE:
English: 1009, 67.8%
German: 58, 3.6%
Finnish: 29, 1.8%
Russian: 25, 1.6%
French: 17, 1.0%
Dutch: 16, 1.0%
Did not answer: 15.2%

Percentages that include the non-respondents are misleading; at first glance you could be mistaken for thinking there is a significant population of non-English speakers, as less than 70% of people who completed the survey answered English.

Non-respondents removed:

English: 1009, 87%
German: 58, 5%
Finnish: 29, 3%
Russian: 25, 2%
French: 17, 2%
Dutch: 16, 1%
(15.2% of the sample did not answer)

This seems like it would be a better representation of the data which could be applied to the other questions.

comment by Adam Zerner (adamzerner) · 2014-01-20T00:06:32.589Z · LW(p) · GW(p)

I would be interested to see Eliezer's responses.

comment by EHeller · 2014-09-09T05:23:13.030Z · LW(p) · GW(p)

So in 2012 we started asking for SAT and ACT scores, which are known to correlate well with IQ and are much harder to get wrong. These scores confirmed the 139 IQ result on the 2012 test. But people still objected that something must be up.

Not quite. The averages might roughly work, but the correlations appear off. For instance this:

SAT score out of 1600/IQ: .369

Is about half of what you'd expect.

Replies from: nshepperd
comment by nshepperd · 2014-09-09T06:45:31.743Z · LW(p) · GW(p)

Maybe this is as expected?
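One reason it might be: range restriction. A toy simulation (all numbers illustrative, not fitted to the survey data):

```r
set.seed(1)
z    <- rnorm(1e5)
iq   <- 100 + 15 * z
sat  <- 0.8 * z + sqrt(1 - 0.8^2) * rnorm(1e5)   # population correlation ~0.8
keep <- iq > 125                                 # a sample pre-selected for high IQ
cor(iq[keep], sat[keep])                         # attenuates to roughly 0.45
```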

comment by buybuydandavis · 2014-09-05T07:37:23.610Z · LW(p) · GW(p)

B. Can we finally resolve this IQ controversy that comes up every year?

The story so far - our first survey in 2009 found an average IQ of 146. Everyone said this was stupid, no community could possibly have that high an average IQ,

Why not? If we're such smarty-pants, maybe we should learn how to shut up and multiply. There are lots of people. Let's go with the 146 value. That's roughly 1 in 1000 people having an IQ >= 146. That high-IQ people congregate at a rationality site shouldn't shock anyone. The site is easily accessible to all of the Anglosphere, which, not so coincidentally, accounts for 3/4 of the members.

One in a thousand just isn't that special of a snowflake for a special interest site.

Replies from: private_messaging
comment by private_messaging · 2014-09-05T10:09:26.497Z · LW(p) · GW(p)

Keep in mind that to get an average of 146 you need an implausibly huge number of >146 IQ people to balance the <146 people.

This is just ridiculous. It is well known and well documented that values such as IQ (or penis size) are incorrectly self-reported. Furthermore - I do not have a link right now - the extent of exaggeration is greater when reporting old values than when reporting recently obtained values (and people here did take iqtest.dk, getting a lower number).

Replies from: buybuydandavis
comment by buybuydandavis · 2014-09-08T01:40:33.376Z · LW(p) · GW(p)

Keep in mind that to get an average of 146 you need an implausibly huge number of >146 IQ people to balance the <146 people.

No, because there aren't an implausibly large number of people on the list. The world is a big place. The main issue in maintaining a high average isn't in getting the numbers of high IQ people, but in repelling the lower IQ people. But apparently, Mission Accomplished.

Note further that I was taking the 146 number as the highest reported estimate, to get the most "implausible" number, which was a mere 1/1000, and not really that rare. The 2013 survey had 138, which is 1/177 - thoroughly unexciting as far as implausibly rare snowflakes go.
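For reference, the rarity figures quoted here, assuming IQ ~ N(100, 15):

```r
1 / pnorm(146, mean = 100, sd = 15, lower.tail = FALSE)   # ~925, i.e. roughly 1 in 1000
1 / pnorm(138, mean = 100, sd = 15, lower.tail = FALSE)   # ~177
```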

It is well known and well documented that values such as IQ (or penis size) are incorrectly self reported.

Is that documented for the 146+ crowd?

I'm going by numbers I had in high school, on two IQ tests in consecutive years which gave the same result, along with an SAT result which mapped even higher (I reported the IQ score).

It's not too hard to remember a number, and people interested, and indeed, proud of their results, are likely paying more attention.

The one real issue I see is sampling bias - only around a third of respondents gave an IQ or SAT score, and I would expect those giving scores to skew higher.

Then again, there are probably biases associated with posting and being active as well, with the higher IQ being more confident and willing to post.

Replies from: private_messaging
comment by private_messaging · 2014-09-09T04:53:15.678Z · LW(p) · GW(p)

but in repelling the lower IQ people.

The time on LW correlated negatively with IQ... (and getting the high IQ people to come is difficult). You don't get to invite the whole world.

Note further that I was taking the 146 number as the highest reported estimate, to get the most "implausible" number, which was a mere 1/1000, and not really that rare.

It is still rarer than many other things; e.g. extremely overinflated self-assessment is not very rare.

The 2013 survey had 138, which is 1/177, which is thoroughly unexciting as implausibly rare snowflakes.

Well, yeah.

Is that documented for the 146+ crowd?

One can always special-plead their way out of any data. There are two types of IQ score; one of them is about mental age, by the way.

Replies from: Vaniver
comment by Vaniver · 2014-09-09T14:03:45.405Z · LW(p) · GW(p)

The time on LW correlated negatively with IQ... (and getting the high IQ people to come is difficult).

I thought we discovered this was driven by outliers in people who spent very little time on LW. (I'm on my phone, or I would check.)

Replies from: buybuydandavis, private_messaging
comment by buybuydandavis · 2014-09-10T22:53:41.592Z · LW(p) · GW(p)

Do you have any recollections on the source for that discovery?

Is the full survey data available, so that we could look at the distribution?

Replies from: Vaniver
comment by Vaniver · 2014-09-11T02:42:26.820Z · LW(p) · GW(p)

Yes; the OP has a link to the 2013 survey data in the last line. Also note survey results for 2012, 2011, and 2009. Here's my comment on this year's describing what happened last year, and while this is relevant I have a memory of looking at the data, making a graph, and calling it 'trapezoidal,' but I don't know where that is, and I don't see the image uploaded where I probably would have uploaded it - so I guess I never published that analysis. Anyway, I recommend you take a look at it yourself.

comment by private_messaging · 2014-09-10T11:41:00.048Z · LW(p) · GW(p)

Dunno, maybe. In any case, the 'repelling lower IQ people' hypothesis seems like it ought to yield a corresponding correlation between IQ and participation, but the opposite or no correlation is observed. (Albeit the writing clarity here is quite seriously low - using private terminology instead of existing words, etc. - which many may find annoying and perhaps inaccessible.)

comment by RobertWiblin · 2014-03-24T18:19:53.695Z · LW(p) · GW(p)

"Finally, at the end of the survey I had a question offering respondents a chance to cooperate (raising the value of a potential monetary prize to be given out by raffle to a random respondent) or defect (decreasing the value of the prize, but increasing their own chance of winning the raffle). 73% of effective altruists cooperated compared to 70% of others - an insignificant difference."

Assuming an EA thinks they will use the money better than the typical other winner, the most altruistic thing to do could be to increase their chances of winning, even at the cost of a lower prize. Or maybe they like the person putting up the prize, in which case they would prefer it to be smaller.

comment by FiftyTwo · 2014-01-24T15:21:26.424Z · LW(p) · GW(p)

Were there any significant differences between lurkers and posters? Would be interesting to see if that indicates any entry barriers to commenting.

Replies from: ikajaste
comment by ikajaste · 2014-01-27T08:23:41.709Z · LW(p) · GW(p)

I wonder what the possible indications about entry barriers would be? I would think they'd be much easier to address by a direct survey query to lurkers about that specific issue.

While of course very interesting, I'm afraid trying to find any such specific and interpretation-inclined results from a general survey will probably just lead to false paths.

... which, I guess, is rather suitable as a first comment of a lurker. :)

comment by Viliam_Bur · 2014-01-20T10:22:11.947Z · LW(p) · GW(p)

Formatting: I find the reports a bit difficult to scan, because each line contains two numbers (absolute numbers, relative percents), which are not vertically aligned. An absolute value on one line may be just below the value of another line, and the numbers may look similar, which makes it difficult to e.g. quickly find the highest value in the set.

I think this could be significantly improved with a trivial change: write the numbers at the beginning of the line; that will make them better aligned. For even better legibility, insert a separator (wider than just a comma) between the absolute and relative numbers.

Now:

Yes, all the time: 94, 5.7%
Yes, sometimes: 179, 10.9%
No: 1316, 80.4%
Did not answer: 48, 2.9%

Proposed:

94 = 5.7% - Yes, all the time
179 = 10.9% - Yes, sometimes
1316 = 80.4% - No
48 = 2.9% - Did not answer

For example in the original version it is easy to see something like "94.5, 179, 80.4, 48.2" when reading carelessly.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-01-26T01:41:38.804Z · LW(p) · GW(p)

Two more possibilities with things really lined up. I think the first is somewhat better. The dots are added so Markdown doesn't destroy the spacing.

Yes, all the time........94......5.7%
Yes, sometimes......179......10.9%
No.........................1316.....80.4%
Did not answer...........48......2.9%

...94 = 5.7%.......Yes, all the time
.179 = 10.9%......Yes, sometimes
1316 = 80.4%......No
....48 = 2.9%......Did not answer

comment by Adam Zerner (adamzerner) · 2014-01-20T00:07:20.421Z · LW(p) · GW(p)

We should have an answer wiki with ideas for next survey.

comment by asuffield · 2014-01-19T13:49:42.796Z · LW(p) · GW(p)

I'd like to advance an alternative hypothesis for the effective altruism/charitable donations data:

  • People who donate more money to charity spend more time thinking about how effectively that money is used, and hence are more interested in effective altruism
  • People who have more money donate more money

Aside from reversing the suggested causality (which we obviously can't test from this survey), the difference is pretty narrow. I don't really know enough about statistics to analyse how well the data supports one hypothesis over the other, and while I would be interested in knowing the answer, I'm not sufficiently interested to go and learn how to do that kind of analysis (if it's even possible from this data, which I'm unsure of). Is anybody able to come up with something?

Replies from: ChristianKl
comment by ChristianKl · 2014-01-19T15:30:11.939Z · LW(p) · GW(p)

It seems like the effect of effective altruism on charity donations is relatively independent of income.

If I do a straight linear model which predicts charity donations from effective altruism, the effect is 1851 +- 416 $. If I add income into the model, the effect shrinks to 1751 +- 392.

Furthermore, being an effective altruist doesn't have a significant effect on income (I tried a few different ways to control for it).
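A sketch of the two models described above (the column names are placeholders for whatever the corresponding survey variables are called):

```r
m1 <- lm(Charity ~ EffectiveAltruism, data = survey)            # EA effect ~1851 +- 416
m2 <- lm(Charity ~ EffectiveAltruism + Income, data = survey)   # shrinks to ~1751 +- 392
summary(m1); summary(m2)
```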

comment by Pablo (Pablo_Stafforini) · 2014-01-19T09:17:42.949Z · LW(p) · GW(p)

NUMBER OF CURRENT PARTNERS:

0: 797, 48.7%

1: 728, 44.5%

2: 66, 4.0%

3: 21, 1.3%

4: 1, .1%

6: 3, .2%

Why is there no data for respondents who stated they had 5 partners?

Replies from: Pablo_Stafforini, gwern, Eliezer_Yudkowsky
comment by Pablo (Pablo_Stafforini) · 2014-01-19T09:43:26.000Z · LW(p) · GW(p)

I should have looked at the data set. The answer is that zero people reported having 5 partners.

comment by gwern · 2014-01-20T01:40:05.904Z · LW(p) · GW(p)

Presumably for the same reason there is no data on people with 7, 8, 9, 10...n partners: no one claimed to have them. Since there was only 1 person who claimed 4 partners, and 3 people who claimed 6, it's perfectly plausible that there simply was no such respondent.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2014-01-20T11:05:28.902Z · LW(p) · GW(p)

"Five partners" was one of the options that respondents could pick. My assumption was that the survey results listed the number of respondents that picked each option, even if this number was zero.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-01-20T01:31:17.415Z · LW(p) · GW(p)

Wow, 48.7% of us have 797 partners? That's a lot!

comment by Taurus_Londono · 2014-01-20T20:00:44.555Z · LW(p) · GW(p)

"People were really really bad at giving their answers in millions. I got numbers anywhere from 3 (really? three million people in Europe?) to 3 billion (3 million billion people = 3 quadrillion)."

Two-thirds have a college degree and roughly one third are European citizens. Does this bode well for the affirmation about self-reported IQ?

"...so it was probably some kind of data entry error..." "Computers (practical): 505, 30.9%"

If people lie about IQ, why not just check Wikipedia and cheat on the Europe question? I lied about IQ, but I did not cheat for the Europe question. I suspect that I am not alone.

IQ is arguably as direct a challenge to self-appraisal as you can put to anyone who would self-select for an LW survey. Because the mean for HBD was 2.7, many of the respondents may feel that IQ does not fall into predictable heritability patterns by adulthood (say, 27.4 years old). Could it be intertwined with self-attribution bias and social identity within a community devoted to rational thinking? Perhaps they don't realize that rational decision-making =/= improved performance on Raven's Progressive Matrices.

If I was a member of a health club for 2.62 years, ipso facto, would I be inclined to self-report as physically fit/strong/healthy (especially if I thought I had control over said variable, and that it wasn't largely the result of inheritance and environmental factors in a seemingly distant childhood)?

Self-reported IQ data via an online survey: robust? C'mon, you're smarter than that...

Replies from: ArisKatsaris, ChristianKl
comment by ArisKatsaris · 2014-01-21T09:37:23.826Z · LW(p) · GW(p)

I am a member of this population, and I lied.

Helpful for letting us know there are bad people out there that will seek to sabotage the value of a survey even without any concrete benefit to themselves other than the LOLZ of the matter. But I think we are already aware of the existence of bad people.

As for your "I suspect that I am not alone", I ADBOC (agree denotationaly but object connotationaly). Villains exist, but I suspect villains are rarer than they believe themselves to be, since in order to excuse their actions they need imagine the whole world populated with villains (while denying that it's an act of villainy they describe).

"Two-thirds have a college degree and roughly one third are European citizens. Does this bode well for the affirmation about self-reported IQ?"

Well, I'm also a European (with a Master's Degree in Computer Science) who didn't give my number in millions, and I could have my MENSA acceptance letter scanned and posted if anyone disbelieves me on my provided IQ.

So bollocks on that. You are implying that people like me are liars just because we are careless readers or careless typists. Lying is a whole different thing than mere carelessness.

Replies from: Taurus_Londono
comment by Taurus_Londono · 2014-02-22T00:27:23.889Z · LW(p) · GW(p)

Have you read Correspondence Bias?

Replies from: Jiro
comment by Jiro · 2014-02-22T17:08:30.998Z · LW(p) · GW(p)

The survey was not meant to include non-official tests. If you respond to a question about official tests with the result of a non-official test, not only have you lied, you have lied in an important way. Certainly you could argue that the non-official test is as good at measuring IQ as the acceptable tests, but that argument's not up to you to make--the creator of the survey obviously didn't think so, and it's his survey. The design of the survey reflects his decision about what sources of error are acceptable, not yours. He gets to decide that, not you, regardless of whether you can argue for your position or not.

comment by ChristianKl · 2014-01-21T16:25:21.312Z · LW(p) · GW(p)

Did you select cooperate or defect on the prisoner's dilemma question?

Replies from: Taurus_Londono
comment by Taurus_Londono · 2014-02-22T00:43:42.705Z · LW(p) · GW(p)

I selected to cooperate.

If I'd thought the financial incentive to defect was greater, I might have been tempted to do so... ...but isn't it interesting that even a modest material reward didn't have the same effect as the incentive to lie about IQ?

comment by Locaha · 2014-01-20T08:38:08.523Z · LW(p) · GW(p)

Tentative plan of action:

  1. Establish presence on Pinterest.
  2. ???
  3. Less gender bias.
  4. Profit.
Replies from: None, Eugine_Nier
comment by [deleted] · 2014-01-22T07:30:44.653Z · LW(p) · GW(p)

"I have been to several yoga classes. The last one I attended consisted of about thirty women, plus me (this was in Ireland; I don’t know if American yoga has a different gender balance).

We propose two different explanations for this obviously significant result.

First, these yoga classes are somehow driving men away. Maybe they say mean things about men (maybe without intending it! we’re not saying they’re intentionally misandrist!) or they talk about issues in a way exclusionary to male viewpoints. The yoga class should invite some men’s rights activists in to lecture the participants on what they can do to make men feel comfortable, and maybe spend some of every class discussing issues that matter deeply to men, like Truckasaurus.

Second, men just don’t like yoga as much as women. One could propose a probably hilarious evolutionary genetic explanation for this (how about women being gatherers in the ancestral environment, so they needed lots of flexibility so they could bend down and pick small plants?) but much more likely is just that men and women are socialized differently in a bunch of subtle ways and the interests and values they end up with are more pro-yoga in women and more anti-yoga in men. In this case a yoga class might still benefit by making it super-clear that men are welcome and removing a couple of things that might make men uncomfortable, but short of completely re-ordering society there’s not much they can do to get equal gender balance and it shouldn’t be held against them that they don’t.

The second explanation seems much more plausible for my yoga class, and honestly it seems much more plausible for the rationalist community as well."

A RESPONSE TO APOPHEMI ON TRIGGERS. Part IV.

Replies from: RobbBB, ygert
comment by Rob Bensinger (RobbBB) · 2014-01-22T07:52:45.869Z · LW(p) · GW(p)

Could you say how this is relevant? If the problem is that women are socialized poorly, that doesn't make it a good idea for us to stop caring about solving (or circumventing) the problem. Empirically, women both get socialized to avoid STEM and academia and get driven out by bad practices when they arrive. This is called the 'leaky pipeline' problem, and I haven't seen evidence that we're immune. You can find good discussion of this here.

Replies from: None
comment by [deleted] · 2014-01-22T08:54:23.827Z · LW(p) · GW(p)

Could you say how this is relevant? If the problem is that women are socialized poorly ...

Here:

[maybe] men just don’t like yoga as much as women ... [and] short of completely re-ordering society there’s not much they can do to get equal gender balance and it shouldn’t be held against them that they don’t.

[This] explanation seems much more plausible for my yoga class, and honestly it seems much more plausible for the rationalist community as well.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2014-01-24T03:15:24.692Z · LW(p) · GW(p)

Thanks for clarifying. Using Scott's analogy, I'd respond by pointing to

In this case a yoga class might still benefit by making it super-clear that men are welcome and removing a couple of things that might make men uncomfortable

At present, going by the survey results, 9.8% of LessWrongers identify as female. (And 9.9% as women.) Quoting Wikipedia:

Women’s representation in the computing and information technology workforce has been falling from a peak of 38% in the mid-1980s. From 1993 through 1999, NSF’s SESTAT reported that the percentage of women working as computer/information scientists (including those who hold a bachelor’s degree or higher in an S&E field or have a bachelor’s degree or higher and are working in an S&E field) declined slightly from 33.1% to 29.6% percent while the absolute numbers increased from 170,500 to 185,000. Numbers from the Bureau of Labor Statistics and Catalyst in 2006 indicated that women comprise 27-29% of the computing workforce. A National Public Radio report in 2013 stated that about 20% of all US computer programmers are female.

I don't think either hypothesis ('women are socialized to be less interested in computer science' and 'women interested in computers get driven out by differential treatment by computer science authorities and communities') predicts that we'd be doing worse at gender representativeness over time. We'd expect both causes for inequality to be lessening over time, as society becomes more progressive / feminist / egalitarian. It is clear, however, that something we're doing is responsible for the rarity of women in such communities, and that this something can shift fairly rapidly from decade to decade. So, whatever the mechanism is, it looks plausibly susceptible to interventions.

If we grant that LessWrong has the power to improve its gender ratio without degrading the quality of discussion, then the only question is whether we prefer to retain a less diverse community. And it would be surprising to me if we have no power to move things in that direction. If we became merely as welcoming as computer science in general is today, we'd double the proportion of women at LessWrong, from 10% to 20%; if we became as attractive as computing and IT were in the '80s, or as economics is today, we'd rise to 30% or 40%; and if we had proportionally as many women as there are in psychology today, we'd be up to 70% women and have the opposite problem!

When we're doing worse than the worst of the large fields that can be claimed to have seeded LW, it's probably time to think seriously about solutions. (And, no, 'hey what if MIRI started a Pinterest account' does not qualify as 'thinking seriously about gender inclusivity'.)

Overall, I agree with Ben Kuhn's points on this issue.

Replies from: army1987, None, V_V
comment by A1987dM (army1987) · 2014-01-24T20:41:38.731Z · LW(p) · GW(p)

Looks like this kind of stuff also varies geographically: physics is not 89% male where I am, more like 65% I'd guess (and yoga more like 25% than 3%).

comment by [deleted] · 2014-01-24T18:14:12.229Z · LW(p) · GW(p)

If we grant that LessWrong has the power to improve its gender ratio without degrading the quality of discussion

I don't think it has a lot of power, because (1) males have higher IQ variability (so, apparently, males are two times more likely to have an IQ of 130, and the average IQ on LW is 138, which should create an even bigger gender imbalance), and (2.1) according to the 2012 survey, LW is ~80% Myers-Briggs NT, (2.2) NT is much more prevalent in males (somewhere around 2:1), (2.3) apparently, NTs have very high average intelligence.
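A sketch of the variability argument in (1) (the standard deviations below are illustrative assumptions, not measured values):

```r
p_male   <- pnorm(130, mean = 100, sd = 16, lower.tail = FALSE)   # wider male distribution assumed
p_female <- pnorm(130, mean = 100, sd = 14, lower.tail = FALSE)
p_male / p_female   # roughly 2:1 above IQ 130 with these numbers; the gap widens further up the tail
```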

My guess is that we can move it a little without lowering content quality, but I doubt if anything significant is possible.

Basically, we just need to find out the gender ratio of individuals with an average IQ of 135-140 who are also NTs.

Btw, Yvain posted a huge comment to Ben Kuhn's post.

comment by V_V · 2014-01-24T19:32:52.594Z · LW(p) · GW(p)

I can't see why gender imbalance is supposed to be a problem.

Replies from: Eliezer_Yudkowsky, CCC
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-01-25T17:36:24.145Z · LW(p) · GW(p)

Note to anyone reading this who was disturbed by that comment: V_V is a known troll on LW.

RobbBB, please take that into account when deciding whether LW needs an explicit post on whether it's good qua good to improve gender ratio if it's otherwise cost-free to do so.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-01-25T20:27:30.475Z · LW(p) · GW(p)

You know, it's starting to seem that your definition of "troll" is "someone who dares disagree with Eliezer's firmly held beliefs".

Replies from: V_V, ArisKatsaris
comment by V_V · 2014-01-26T14:07:07.225Z · LW(p) · GW(p)

Don't feed the troll. :D

comment by ArisKatsaris · 2014-01-28T17:57:53.650Z · LW(p) · GW(p)

Well, to the defense of those of us who think V_V a troll, if I remember correctly he marked his early existence in LessWrong by accusing Anna Salamon of being a spammer (when she made a post announcing some activity of CFAR). Since then he's done the occasional "you're all cultists" thing, and smirked about it in Rationalwiki.

In my mind, I've grouped him with Dmytry/private_messaging and Alexander Kruel/Xixidu. It's not about people who disagree, it's about people who have acted like assholes, who seem to enjoy making fun of people with different opinions, who don't actually care whether what they say has truth-content in it, etc, etc -- and who, after spending so much time on LessWrong, then go somewhere else and insult the people who spend time on LessWrong.

comment by CCC · 2014-01-24T19:55:04.654Z · LW(p) · GW(p)

Let us assume that there is some activity A (either posting on LessWrong, or participating in a yoga class, or whatever).

Activity A is either beneficial, or detrimental, to take part in. (It may be beneficial to some people but detrimental to others; let us assume for the moment that the amount of benefit received, as with most activities, is not directly tied to gender).

Note that the gender ratio of humanity is pretty close to 50/50.

If activity A is detrimental, then it is to the benefit of people in general for no-one to attempt activity A.

If activity A is beneficial, then it is to the benefit of people in general for as many people as possible to attempt activity A.

If activity A is beneficial to x% of people and detrimental to (100-x)% of people, then we should expect, in a perfect world, to see x% of males and x% of females attempting activity A.

In all cases, this is a 50/50 male:female ratio. When we do not see this, it is usually evidence of a detrimental gender-based bias; possibly cultural, possibly due to some unintended error that is driving away one gender more than the other. It seems especially sensible to ask about the cause of such a strong bias in a community which puts such effort into understanding its own biases, but it's still a good idea whenever sufficiently strong evidence of this sort of bias becomes apparent...

Replies from: V_V, army1987
comment by V_V · 2014-01-24T20:16:41.229Z · LW(p) · GW(p)

(It may be beneficial to some people but detrimental to others; let us assume for the moment that the amount of benefit received, as with most activities, is not directly tied to gender).

How do you support that assumption?

If activity A is beneficial to x% of people and detrimental to (100-x)% of people, then we should expect, in a perfect world, to see x% of males and x% of females attempting activity A. In all cases, this is a 50/50 male:female ratio.

Again, assuming that whatever makes activity A beneficial to some people and not other people isn't correlated with gender. But lots of psychological traits are correlated with gender, hence this seems a highly questionable assumption.

Moreover, even if the particular activity A gives equal benefits to both genders, comparative advantage may make one of them more interested in performing A than the other is.

Replies from: CCC
comment by CCC · 2014-01-26T04:07:47.916Z · LW(p) · GW(p)

(It may be beneficial to some people but detrimental to others; let us assume for the moment that the amount of benefit received, as with most activities, is not directly tied to gender).

How do you support that assumption?

That depends on the nature of activity A.

If, for example, activity A consists of posting on (and reading posts and sequences on) LessWrong, then I would support it on the basis that I cannot see any evidence that the benefits of rationality are at all correlated to gender. If activity A consists of posting on, and reading posts on, a website dedicated to (say) breastfeeding, then there is clearly a greater benefit for female readers and my assumption becomes invalid.

For most activities, however, I tend to default to this assumption unless there is a clear causal chain showing how a difference in gender changes the benefit received.

Again, assuming that whatever makes activity A beneficial to some people and not other people isn't correlated with gender. But lots of psychological traits are correlated with gender, hence this seems a highly questionable assumption.

My immediate question is how many of those psychological traits correlated with gender are due, not to gender, but to the cultural perception of gender?

Replies from: V_V
comment by V_V · 2014-01-26T14:55:04.819Z · LW(p) · GW(p)

If, for example, activity A consists of posting on (and reading posts and sequences on) LessWrong, then I would support it on the basis that I cannot see any evidence that the benefits of rationality are at all correlated to gender.

And health benefits of yoga are probably not strongly correlated to gender.

Yet people engage in leisure activities generally not because of possible long-term benefits, but because they find these activities intrinsically rewarding. And since different people have different preferences, the activities they find rewarding differ.
Add the fact that leisure time is a limited resource, so a tradeoff must be made between the available, competing activities, and their enjoyability differs from one person to the other.

Personal preferences correlate with gender.

My immediate question is how many of those psychological traits correlated with gender are due, not to gender, but to the cultural perception of gender?

Gender correlates to many, if not most, objectively measurable physiological traits.
As for psychological traits, we know for sure that sex hormones affect brain development during the fetal stage, and brain activity during adult life.
Whether each particular psychological trait correlates to gender due to a biological cause, or a cultural one, or a combination of both, is a matter of research.

But I don't think the nature vs nurture question really matters here: different people have different preferences, whatever the cause, and I don't see why we should try to engineer them to achieve some arbitrary ideal.

Replies from: CCC
comment by CCC · 2014-01-27T09:53:23.658Z · LW(p) · GW(p)

And health benefits of yoga are probably not strongly correlated to gender.

This is why it surprises me that there is a gender imbalance in people going to yoga classes.

Yet people engage in leisure activities generally not because of possible long-term benefits, but because they find these activities intrinsically rewarding. And since different people have different preferences, the activities they find rewarding differ. Add the fact that leisure time is a limited resource, so a tradeoff must be made between the available, competing activities, and their enjoyability differs from one person to the other.

Personal preferences correlate with gender.

Here, again, I think that a large part of the difference in personal preferences between genders is more cultural than biological. Consider, for example: culturally, over a large part of the world, it is considered acceptable for a woman to wear a skirt, but frowned on for a man. As a result, few men wear skirts; if you were to pick a random man and ask for his opinion on wearing a skirt, it is likely that he would not wish to do so. However, if one considers a slightly different culture for a moment (for example, the Scottish kilt), one finds a similar garment being worn by many men. So a person's preferences are affected by culture.

But I don't think the nature vs nurture question really matters here: different people have different preferences, whatever the cause, and I don't see why we should try to engineer them to achieve some arbitrary ideal.

I don't see it so much as reaching an arbitrary ideal; I see it more as avoiding a known failure mode.

I have noticed that, throughout history, there have been cases where people were divided into separate groups; whether by race, gender, religion, or other means. In most of those cases, one group managed to achieve some measure of power over all the other groups; and then used that measure of power to oppress all the other groups, whether overtly or not.

This leads to all sorts of problems.

One means of maintaining such a division, is by creating a further, artificial divide, and using that to widen the gap between the groups. For example, if significantly more men than women own land in a given society, then restricting the ability to vote to landowners will tend to exacerbate any official pro-male bias. (This works the other way around, as well).

Therefore, when I see a major statistical imbalance for no adequately explained reason (such as the noted gender bias on LessWrong, or the imbalance in yoga classes) I find it a cause for slight concern; enough to at least justify trying to find and explain the reason for the imbalance.

Replies from: V_V
comment by V_V · 2014-01-27T13:25:50.337Z · LW(p) · GW(p)

Consider, for example: culturally, over a large part of the world, it is considered acceptable for a woman to wear a skirt, but frowned on for a man. As a result, few men wear skirts; if you were to pick a random man and ask for his opinion on wearing a skirt, it is likely that he would not wish to do so. However, if one considers a slightly different culture for a moment (for example, the Scottish kilt), one finds a similar garment being worn by many men. So a person's preferences are affected by culture.

So, should we campaign to increase the number of men who wear skirts and the number of women who wear traditional Scottish kilts? Or the number of non-Scottish people who wear kilts? Or the number of Scottish people who wear pants? I don't know, what is the proper PC ideal here?

One means of maintaining such a division, is by creating a further, artificial divide, and using that to widen the gap between the groups. For example, if significantly more men than women own land in a given society, then restricting the ability to vote to landowners will tend to exacerbate any official pro-male bias. (This works the other way around, as well).

I don't think anybody proposed restricting voting rights based on participation in LessWrong or yoga classes, so this seems to be a slippery slope argument.

Please don't take this personally, but trying to "re-educate" people to change their preferences in order to socially engineer a utopian society is the hallmark of totalitarianism.
I think that, as long as people get along peacefully, it's better to recognize, acknowledge and respect diversity.

comment by A1987dM (army1987) · 2014-01-24T20:38:04.595Z · LW(p) · GW(p)

If activity A is beneficial to x% of people and detrimental to (100-x)% of people, then we should expect, in a perfect world, to see x% of males and x% of females attempting activity A.

Suppose A is beneficial to 80% of males and 40% of females, and detrimental to 20% of males and 60% of females; why would you expect, in a perfect world, to see 60% of males and 60% of females attempting activity A?

comment by ygert · 2014-01-22T07:48:35.927Z · LW(p) · GW(p)

This is a very appropriate quote, and I upvoted. However, I would suggest formatting the quote in markdown as a quote, using ">".

Something like this

In my opinion, this quote format is better: it makes it easier to distinguish it as a quote.

In any case, I'm sorry for nitpicking about formatting, and no offence is intended. Perhaps there is some reason I missed that explains why you put it the way you did?

Replies from: None
comment by [deleted] · 2014-01-22T08:51:25.802Z · LW(p) · GW(p)

No, you're right. I'm just not used to lesswrong comments.

And sure there's no offense, because Crocker's Rules.

comment by Eugine_Nier · 2014-01-21T04:25:34.019Z · LW(p) · GW(p)

You're also missing step 3½.

Replies from: Locaha
comment by Locaha · 2014-01-21T07:12:45.211Z · LW(p) · GW(p)

I don't understand why people are down-voting this. Pinterest has a huge female population. Drawing members from there would be a great strategy.

Replies from: RobbBB, None
comment by Rob Bensinger (RobbBB) · 2014-01-22T07:42:46.702Z · LW(p) · GW(p)

I assume you're getting downvoted because liking Pinterest is not one of the most salient things about women in general, nor about the class of women we'd like frequenting this site. If part of the reason talented women don't end up here is that women are stereotyped as vapid, then appealing to a site at the low end of the intellectual spectrum as your prototype for 'place we can find women' only exacerbates the problem.

Replies from: Locaha
comment by Locaha · 2014-01-22T08:06:38.599Z · LW(p) · GW(p)

site at the low end of the intellectual spectrum

I don't think Pinterest is any lower on the intellectual spectrum than Twitter or Facebook, for example. It's simply one of the big social networks that happens to have more women than men.

comment by [deleted] · 2014-01-22T07:05:10.481Z · LW(p) · GW(p)

What would be the average IQ next year if we succeeded at this? :)

Replies from: Locaha
comment by Locaha · 2014-01-22T07:15:39.838Z · LW(p) · GW(p)

I hope it will be lower; it would mean the population is not so anal about IQ. :)

comment by baiter · 2014-01-24T16:40:58.902Z · LW(p) · GW(p)

Have no children, don't want any: 506, 31.3%

Have no children, uncertain if want them: 472, 29.2%

I'm horrified by this. Actually it's baseline irony at its best -- here you've got a group of people infinitely more concerned with the future than most, yet many of them are against the lowest-hanging-fruit contribution one could make towards a better future. (I hope some of the shockingly high numbers are a by-product of the low average age and high proportion of males, but, anyway, the inverse relationship between IQ and birthrate has been observed for a long time.)

Another angle from which to view this should appeal to the many people here who identified as Liberal, Progressive, Socialist, and Social-Justice-loving: class equality. If the current birthrates and demographic trends continue, we're looking at even greater social inequality than exists today: a tiny cognitive/financial elite that runs society, and a massive underclass that... does whatever else. A nation's economic inequality is apparently associated with all sorts of social ills.

Everyone who doesn't want to have kids (as many as they can, within reason) is both missing a major point of life and complicit in creating a dysgenic society -- which, btw, should be included on the list of existential risks.

Obligatory Idiocracy clip

Replies from: ikajaste, Locaha, private_messaging, satt, ChristianKl
comment by ikajaste · 2014-01-27T08:53:29.147Z · LW(p) · GW(p)

[children are] the lowest-hanging-fruit contribution one could make towards a better future

Lowest-hanging? I consider having children to be quite a huge investment of my personal resources. How is that a low-hanging fruit?

comment by Locaha · 2014-01-27T07:13:10.936Z · LW(p) · GW(p)

Not everybody sees their lives as a big genetic experiment where their goal is to out-breed the opponents.

Everyone who doesn't want to have kids (as many as they can, within reason) is both missing a major point of life and complicit in creating a dysgenic society -- which, btw, should be included on the list of existential risks.

^ See this? This is one of the reasons this forum is 90% male.

Replies from: MugaSofer, Eugine_Nier, ikajaste, army1987, CAE_Jones
comment by MugaSofer · 2014-01-27T10:46:09.120Z · LW(p) · GW(p)

Not everybody sees their lives as a big genetic experiment where their goal is to out-breed the opponents.

In fact, most people don't - judging by those numbers.

comment by Eugine_Nier · 2014-01-29T02:33:25.146Z · LW(p) · GW(p)

Not everybody sees their lives as a big genetic experiment where their goal is to out-breed the opponents.

This isn't about out-breeding opponents. This is about the consequences of dysgenic selection against intelligence.

^ See this? This is one of the reasons this forum is 90% male.

As Yvain pointed out in his post on a similar topic, far more women than men go to church across all denominations, including ones that don't even let women in leadership positions. I recommend you update your model about what kinds of things drive off women.

Replies from: Locaha
comment by Locaha · 2014-01-29T07:17:16.736Z · LW(p) · GW(p)

As Yvain pointed out in his post on a similar topic, far more women than men go to church across all denominations, including ones that don't even let women in leadership positions.

People who go to church are unlikely to visit this forum to begin with.

Replies from: CCC, army1987
comment by CCC · 2014-01-29T12:54:48.575Z · LW(p) · GW(p)

People who go to church are unlikely to visit this forum to begin with.

Perhaps, but there is always the odd statistical outlier. I go to church every week, and I visit this forum, for example.

Replies from: qemqemqem
comment by Andrew Keenan Richardson (qemqemqem) · 2014-01-29T14:18:48.239Z · LW(p) · GW(p)

I also go to church regularly, though it is a Unitarian Universalist church and I am an atheist.

comment by A1987dM (army1987) · 2014-01-29T13:05:36.493Z · LW(p) · GW(p)

Then again, medicine doesn't disproportionately drive off women either, and I'm not under the impression that doctors are less likely to be atheistic/rationalistic/high-Openness/etc. than the general population (indeed, they make up 1.9% of LW survey respondents, which is about one order of magnitude higher than my out-of-my-ass^WFermi estimate for the general population).

Replies from: memoridem
comment by memoridem · 2014-01-29T13:52:01.427Z · LW(p) · GW(p)

I'm not under the impression that doctors are less likely to be atheistic/rationalistic/high-Openness/etc. than the general population

Not much more likely either, it seems. Doctors are a very diverse population; there are probably not many generalizations you can make about rationalism on that front.

comment by ikajaste · 2014-01-27T08:48:36.915Z · LW(p) · GW(p)

Everyone who doesn't want to have kids (as many as they can, within reason) is both missing a major point of life and complicit in creating a dysgenic society -- which, btw, should be included on the list of existential risks.

^ See this? This is one of the reasons this forum is 90% male.

Hmm. Why does a comment like that lead to a preference for males?

Replies from: Locaha
comment by Locaha · 2014-01-27T09:16:55.088Z · LW(p) · GW(p)

Hmm. Why does a comment like that lead to a preference for males?

A comment like that comes from a person who isn't even trying to imagine himself in the place of someone who is actually going to conceive and carry to term all those "as many as they can" children. A woman who reads this will correctly conclude that this isn't a place where she is considered a person.

It goes beyond that. The idea that children should be made as means for a cause is equally disgusting.

Replies from: CAE_Jones, ikajaste, Vaniver, ikajaste, Eugine_Nier
comment by CAE_Jones · 2014-01-27T09:59:28.280Z · LW(p) · GW(p)

While I think you're making a good point, and LW should definitely listen to it, this:

A woman who reads this will correctly conclude that this isn't a place where she is considered a person.

Is phrased a bit strongly, and

disgusting

Is a word I almost never see outside of a mindkilled context, though at least it's in a sentence, here. (People who use "Disgusting" as the entirety of a sentence are basically wearing a giant "I AM MINDKILLED" flag as a coat, in my experience.)

Replies from: Locaha
comment by Locaha · 2014-01-27T10:05:34.482Z · LW(p) · GW(p)

Is a word I almost never see outside of a mindkilled context, though at least it's in a sentence, here. (People who use "Disgusting" as the entirety of a sentence are basically wearing a giant "I AM MINDKILLED" flag as a coat, in my experience.)

baiter used the word "horrified" in his original post.

What do you think about horror?

Replies from: CAE_Jones
comment by CAE_Jones · 2014-01-27T17:59:33.781Z · LW(p) · GW(p)

My thoughts on "horrifying" are pretty much the same, but that word hasn't stuck out to me as much before. And your comments struck me as more likely to be downvoted for tone, even though the content is generally good. (Disclaimer: commenting based on my impressions of such things has failed me in the past.)

Replies from: Locaha
comment by Locaha · 2014-01-27T19:24:35.876Z · LW(p) · GW(p)

My thoughts on "horrifying" are pretty much the same, but that word hasn't stuck out to me as much before.

So you argue against mentioning emotions in general?

And your comments struck me as more likely to be downvoted for tone

It is kinda funny how a forum which prides itself on not discussing politics is based on a political system (the anonymous democracy of karma). Every time a poster stops to consider whether his post will be upvoted or downvoted, he is engaging in politics.

Replies from: ArisKatsaris, Nornagest, Sophronius
comment by ArisKatsaris · 2014-01-29T10:00:20.183Z · LW(p) · GW(p)

So you argue against mentioning emotions in general?

I think that people should feel free to mention their emotions, but they should also express them in a manner that recognizes said emotions are two-place words. X is horrified/disgusted by Y.

Something may be 'disgusting' you, and that's a useful datapoint, but to say that something is 'disgusting' as if it's an inherent characteristic of the thing pretty much puts a stopper to the conversation. What could the response be, "No, it's not"?

How would you feel about someone who said things like "Homosexuality is disgusting." as opposed to someone saying something like "Homosexuality icks me out."? I think you would probably see the latter sentence as less of a conversation-killer than the former.

Replies from: Locaha
comment by Locaha · 2014-01-29T12:15:31.469Z · LW(p) · GW(p)

Something may be 'disgusting' you, and that's a useful datapoint, but to say that something is 'disgusting' as if it's an inherent characteristic of the thing pretty much puts a stopper to the conversation. What could the response be, "No, it's not"?

OK, I see your point. Agreed, phrasing my original post as "using children as means for an end disgusts me equally" would have been better.

comment by Nornagest · 2014-01-27T19:44:42.609Z · LW(p) · GW(p)

It is kinda funny how a forum which prides itself on not discussing politics is based on a political system (the anonymous democracy of karma). Every time a poster stops to consider whether his post will be upvoted or downvoted, he is engaging in politics.

Politics as in "politics is the mind-killer" doesn't mean "involvement with the polis"; it means "entanglement with factional identity". We routinely touch on the former; insofar as "raising the sanity waterline" can be taken as a goal, for example, it's inextricably political in that sense. But most of the stuff we've historically talked about here isn't strongly factionalized in the mainstream.

If you're posting on something that is and you stop to consider its reception, of course, you're engaging in politics in both senses. But that's the exception here, not the rule.

comment by Sophronius · 2014-01-27T20:07:25.056Z · LW(p) · GW(p)

I agree with your point that the karma system very much encourages blue/green type of thinking. After all, "what will other people think of me?" is a primal instinct that makes it hard enough already to post your honest beliefs, without the karma system compounding it by showing a number above every post that basically says "X people think you should have shut up and not said anything" after you say something controversial.

On the other hand, you have to consider that calling someone's point of view "horrifying" accomplishes the exact same thing. So I have to agree with others that it's better to use a more neutral tone when disagreeing.

comment by ikajaste · 2014-01-27T09:28:27.311Z · LW(p) · GW(p)

Valid point. Thanks for the clarification.

Though in my experience, even women seem to think the part that comes after is in fact more laborious than the carrying part - and that part can be equally shared between genders. Of course, it usually/traditionally isn't, so I guess that's a point towards male bias too.

Replies from: Locaha
comment by Locaha · 2014-01-27T09:37:03.655Z · LW(p) · GW(p)

And pregnancy itself is a personal existential risk.

Replies from: Eugine_Nier, army1987
comment by Eugine_Nier · 2014-01-29T02:53:41.474Z · LW(p) · GW(p)

With modern medicine, not in any meaningful sense.

comment by Vaniver · 2014-01-28T01:17:18.990Z · LW(p) · GW(p)

A comment like that comes from a person who isn't even trying to imagine himself in the place of someone who is actually going to conceive and carry to term all those "as many as they can" children.

While I understand the sentiment here (and I know a number of women who share it), I'm not sure this is correct. I was under the impression that eugenic impulses and pro-natalism were close to evenly split among the genders, and if there was an imbalance, it was that women were more likely to be interested in having babies and in having good babies. It may be easier to convince the marginal man than the marginal woman that they should have children, because the marginal man might have a lower cost to do so, but that doesn't imply that the arguing is mostly being done by men. (And if this particular argument looks focused on men, well, baiter did just look at the survey results!)

A woman who reads this will correctly conclude that this isn't a place where she is considered a person.

"Considered a person" is a phrase that can mean a lot of things. I think the meaning you're going for here is something like "bodily autonomy is respected," but one of the other ways to interpret it is something like "desires are validated." And I think that being harsh to natalism is one way to invalidate the desires of a lot of people, and I suspect that burden falls disproportionally on women.

Consider this baby announcement, where a significant portion of the response was 'your baby is off-topic,' which reminded me of rms. I don't think that LW should have sections for people to talk about anything people want to be on-topic; I think specialization is a good idea. But I think that viewing these sorts of impulses and arguments as explicitly or implicitly anti-women is a mistake: imagine being one of James_Miller's students who thought it was really sweet and humanizing for him to include a relaxing, personally relevant picture on the final exam, and then coming to LW and discovering that a highly upvoted response to that is 'well, don't satisfy those values, that would be condescending.' Well, thanks.

Replies from: army1987
comment by A1987dM (army1987) · 2014-02-01T17:32:16.421Z · LW(p) · GW(p)

A comment like that comes from a person who isn't even trying to imagine himself in the place of someone who is actually going to conceive and carry to term all those "as many as they can" children.

While I understand the sentiment here (and I know a number of women who share it), I'm not sure this is correct. I was under the impression that eugenic impulses and pro-natalism were close to evenly split among the genders, and if there was an imbalance, it was that women were more likely to be interested in having babies and in having good babies.

FWIW, the percentage of people who have no children and don't want any is pretty much the same among cis women (39/124 = 31.5%) as among all survey respondents, and so is that of people who don't have children and are uncertain (38/124 = 30.6%).
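
These figures are just subgroup counts divided by subgroup size (39/124 ≈ 31.5%, 38/124 ≈ 30.6%). For anyone who wants to reproduce them from the public survey data, here is a minimal sketch; the filename, column names, and answer strings are assumptions for illustration, not necessarily the labels in the actual spreadsheet:

```python
# Hypothetical sketch: recompute the "no children" percentages among cis women
# from the public survey CSV. Column names and answer strings are assumed.
import pandas as pd

df = pd.read_csv("2013_lw_survey_public.csv")  # hypothetical filename

cis_women = df[df["Gender"] == "F (cisgender)"]
n = len(cis_women)

no_kids_dont_want = (cis_women["Children"] == "Have no children, don't want any").sum()
no_kids_uncertain = (cis_women["Children"] == "Have no children, uncertain if want them").sum()

print(f"Don't want any: {no_kids_dont_want}/{n} = {no_kids_dont_want / n:.1%}")
print(f"Uncertain:      {no_kids_uncertain}/{n} = {no_kids_uncertain / n:.1%}")
```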

comment by ikajaste · 2014-01-27T09:53:05.005Z · LW(p) · GW(p)

It goes beyond that. The idea that children should be made as means for a cause is equally disgusting.

Yes, but I wouldn't expect that sentiment to really be all that gender-biased, though.

Replies from: Locaha
comment by Locaha · 2014-01-27T10:12:05.136Z · LW(p) · GW(p)

Yes, but I wouldn't expect that sentiment to really be all that gender-biased, though.

Historically at least, I would expect that sentiment to be gender-biased. It's easier to think of children as objects when you aren't the one who spends your whole day with them.

Replies from: ikajaste
comment by ikajaste · 2014-01-27T13:24:05.026Z · LW(p) · GW(p)

Historically at least, I would expect that sentiment to be gender-biased.

Oh, historically sure! But I think these days in Western culture, especially (1) among the group being discussed (people interested in this site), I wouldn't expect to see a large gender bias in that sentiment.

(1) [possible projection fallacy going on here, hard to know]

Replies from: Locaha
comment by Locaha · 2014-01-27T14:10:08.651Z · LW(p) · GW(p)

Explicitly, if you ask people on this site how the burden of raising children should be divided between partners, most people of both genders will say it should be divided equally. But when musing about grand strategies, I think the males are still more likely to propose bullshit like "we the smart people totally should out-breed the stupid people" without giving it a second thought.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-01-29T02:43:56.749Z · LW(p) · GW(p)

Explicitly, if you ask people on this site how the burden of raising children should be divided between partners, most people of both genders will say it should be divided equally.

That depends on what you mean by "divided equally". I think it should be divided based on comparative advantage.

comment by Eugine_Nier · 2014-01-29T02:39:01.034Z · LW(p) · GW(p)

A woman who reads this will correctly conclude that this isn't a place where she is considered a person.

What definition of "considered a person" are you using that makes the above even a remotely valid deduction?

The idea that children should be made as means for a cause is equally disgusting.

If you have problems with doing things as a means to an end, might I recommend a forum where consequentialism isn't the default moral theory.

Replies from: Locaha
comment by Locaha · 2014-01-29T07:14:25.447Z · LW(p) · GW(p)

If you have problems with doing things as a means to an end, might I recommend a forum where consequentialism isn't the default moral theory.

Oh dear me! Was I supposed to sign any papers before posting on this forum, proclaiming my adherence to consequentialism? Will I get arrested now???

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-01-30T02:50:27.828Z · LW(p) · GW(p)

Was I supposed to sign any papers before posting on this forum, proclaiming my adherence to consequentialism?

No, but simply declaring an instance of it disgusting is not an argument.

Replies from: Locaha
comment by Locaha · 2014-01-30T06:44:21.031Z · LW(p) · GW(p)

Neither is telling me to leave.

comment by A1987dM (army1987) · 2014-01-30T20:12:37.863Z · LW(p) · GW(p)

Not everybody sees their lives as a big genetic experiment where their goal is to out-breed the opponents.

Just because they don't see their lives like that doesn't mean their opponents won't outbreed them.

Replies from: blacktrance
comment by blacktrance · 2014-01-30T20:17:32.587Z · LW(p) · GW(p)

But it does mean that if they don't care about outbreeding their opponents, they shouldn't try.

comment by CAE_Jones · 2014-01-27T08:58:14.873Z · LW(p) · GW(p)

You're both right[1], and you were both at -1 when I got here. I assume it's because you both use emotionally-charged statements and it sounds kinda political.

[1] I'm not sure if this means "right to the best of my understanding", or "right as in 'I agree'". I'm worried that I have to think about this for more than five seconds.

Replies from: army1987
comment by A1987dM (army1987) · 2014-01-27T09:06:02.813Z · LW(p) · GW(p)

[1] I'm not sure if this means "right to the best of my understanding", or "right as in 'I agree'". I'm worried that I have to think about this for more than five seconds.

If you don't agree with the best of your understanding, that's itself worrying. ;-)

Replies from: Locaha
comment by Locaha · 2014-01-27T09:40:43.714Z · LW(p) · GW(p)

If you don't agree with the best of your understanding, that's itself worrying. ;-)

Only if you think of yourself as a singleton.

comment by private_messaging · 2014-01-25T22:52:21.917Z · LW(p) · GW(p)

here you've got a group of people infinitely more concerned with the future than most,

The issue is that the impact of actions on the future is progressively harder to predict over longer timespans, and ignorance of even the sign of the true utility difference due to an action makes the expected utility differences small. Thus unusual concerns with the grand future leave people free to pick whatever actions make them feel good about themselves, with no real direction towards any future good; such actions are then easily rationalized.

comment by satt · 2014-02-01T17:17:47.746Z · LW(p) · GW(p)

I'm horrified by this. Actually it's baseline irony at its best -- here you've got a group of people infinitely more concerned with the future than most, yet many of them are against the lowest-hanging-fruit contribution one could make towards a better future.

Gamete donation is lower-hanging fruit.

comment by ChristianKl · 2014-01-25T21:17:18.103Z · LW(p) · GW(p)

Everyone who doesn't want to have kids (as many as they can, within reason) is both missing a major point of life and complicit in creating a dysgenic society -- which, btw, should be included on the list of existential risks.

Could you explain how a dysgenic society could result in 90% of the human population dying by 2100? To me that seems wildly overblown.

Replies from: private_messaging, army1987
comment by private_messaging · 2014-01-25T22:23:41.860Z · LW(p) · GW(p)

And just plain ridiculous - if it results in 90% of the human population dying, that's some serious evolutionary pressure right there.

comment by A1987dM (army1987) · 2014-01-25T21:57:12.717Z · LW(p) · GW(p)

Sure, dysgenics is unlikely to result in a bang (in this terminology), but it can sure result in a crunch. (Some people have argued that's already happened in places such as inner-city Detroit.)

Replies from: satt
comment by satt · 2014-02-01T17:33:50.521Z · LW(p) · GW(p)

Bostrom's definition of a crunch ("The potential of humankind to develop into posthumanity[7] is permanently thwarted although human life continues in some form") isn't coextensive with ChristianKl's "90% of the human population dying by 2100", and dysgenics seems far less likely to cause the latter than the former. (I find it still more unlikely that dysgenics was the key cause of Detroit's decline, given that that happened in ~3 generations.)

I can think of scenarios where dysgenics might kill 90% of humanity by 2100, but only (1) in combination with some other precipitating factor, like if dysgenics meant a vital unfriendly-AI-averting genius were never born, or (2) if dysgenics were deliberately amplified by direct processes like embryo selection.

Replies from: army1987
comment by A1987dM (army1987) · 2014-02-01T18:16:05.791Z · LW(p) · GW(p)

Bostrom's definition of a crunch ("The potential of humankind to develop into posthumanity[7] is permanently thwarted although human life continues in some form") isn't coextensive with ChristianKl's "90% of the human population dying by 2100", and dysgenics seems far less likely to cause the latter than the former.

I agree. I guess that ChristianKl guessed that by “the list of existential risks” baiter meant the one in the survey, but I was charitable to baiter and assumed he meant it in a more abstract sense.