[Book Review] "The Bell Curve" by Charles Murray

post by lsusr · 2021-11-02T05:49:22.198Z · LW · GW · 134 comments

Contents

  Stratification
    Educational Stratification
    Occupational Stratification
    Social Partitioning
  Heritability of Intelligence
  Negative Effects of Low Intelligence
    Poverty
    High School Graduation
    Labor Force Participation
    Family
    Welfare Dependency
    Parenting
    Crime
  Ethnicity and Cognition
    Black and White Americans
    Africa
    Black-White Trends
    How much is genetic?
  Race and Employment
  Projecting Demography
  Raising Cognitive Ability
  Affirmative Action
    College Affirmative Action
    Workplace Affirmative Action
  Conclusion

Factor analysis is a mathematical method of inferring simple correlations between observations. It's the foundation of the Big Five personality traits. It's also behind how we define intelligence.

A person's ability to perform one cognitive task is positively correlated with their ability to perform basically every other cognitive task. If you collect a variety of cognitive measures you can use linear algebra to extract a single common factor, which we call g. Intelligence quotient (IQ) is a test specifically designed to measure g. IQ isn't a perfect measure of g but it's convenient and robust.
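To make the linear-algebra step concrete, here is a minimal sketch (hypothetical data; it uses the first principal component of the correlation matrix as a stand-in for a proper factor model):

```python
import numpy as np

# Hypothetical scores of 1,000 people on five cognitive tests.
# A shared latent ability builds in the positive correlations between tests.
rng = np.random.default_rng(0)
latent = rng.normal(size=1000)
tests = np.column_stack([
    0.7 * latent + 0.7 * rng.normal(size=1000)  # each test = shared factor + noise
    for _ in range(5)
])

# Standardize the scores and take the first principal component of the correlation matrix.
z = (tests - tests.mean(axis=0)) / tests.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(z, rowvar=False))
g_scores = z @ eigvecs[:, -1]        # each person's score on the general factor
print(eigvals[-1] / eigvals.sum())   # share of test variance the general factor explains
```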

Here are six conclusions regarding tests of cognitive ability, drawn from the classical tradition, that are now beyond significant technical dispute:

  1. There is such a thing as a general factor of cognitive ability on which human beings differ.

  2. All standardized tests of academic aptitude or achievement measure this general factor to some degree, but IQ tests expressly designed for that purpose measure it most accurately.

  3. IQ scores match, to a first degree, whatever it is that ordinary people mean when they use the word intelligent or smart in ordinary language.

  4. IQ scores are stable, although not perfectly so, over much of a person's life.

  5. Properly administered IQ tests are not demonstrably biased against social, economic, ethnic, or racial groups.

  6. Cognitive ability is substantially heritable, apparently no less than 40 percent and no more than 80 percent.

Charles Murray doesn't bother proving the above points. These facts are well established among scientists. Instead, The Bell Curve: Intelligence and Class Structure in American Life is about what g means to American society.

Stratification

Educational Stratification

Smarter people have always had an advantage. The people who go to college have always been smarter than average. The correlation between college and intelligence increased after WWII. Charles Murray argues that the competitive advantage of intelligence is magnified in a technological society. I agree that this has been the case so far and that the trend has continued between 1994 when Murray published his book and 2021 when I am writing this review.

SAT scores can be mapped to IQ. The entering class of Harvard in 1926 had a mean IQ of about 117. IQ is defined to have an average of 100 and a standard deviation of 15. Harvard in 1926 thus hovered around the 88th percentile of the nation's youths. Other colleges got similar scores. The average Pennsylvania college was lower with an IQ of 107 (68th percentile). Elite Pennsylvania colleges had students between the 75th and 90th percentiles.
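The percentile arithmetic here is just the normal distribution: IQ is defined with mean 100 and standard deviation 15, so an IQ of 117 sits a bit over one standard deviation above the mean. A quick check:

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)
print(round(iq.cdf(117) * 100))  # ~87, i.e. roughly the 88th percentile cited for Harvard in 1926
print(round(iq.cdf(107) * 100))  # ~68, the percentile cited for the average Pennsylvania college
```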

By 1964, the average student of a Pennsylvania college had an IQ in the 89th percentile. Elite colleges' average freshmen were in the 99th percentile.

Charles Murray uses a measure called median overlap to quantify social stratification. Median overlap indicates what proportion of the lower-scoring group matched or exceeded the median score of the higher-scoring group. Two identical groups would have a median overlap of 50%.

Groups Being Compared | Median Overlap
--- | ---
High school graduates with college graduates | 7%
High school graduates with Ph.D.s, M.D.s, or LL.B.s | 1%
College graduates with Ph.D.s, M.D.s, and LL.B.s | 21%
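A minimal sketch of the median-overlap calculation described above, using made-up IQ distributions for the two groups (the means and spreads are assumptions, not Murray's data):

```python
import numpy as np

def median_overlap(lower_group, higher_group):
    """Fraction of the lower-scoring group at or above the higher-scoring group's median."""
    threshold = np.median(higher_group)
    return np.mean(np.asarray(lower_group) >= threshold)

rng = np.random.default_rng(1)
hs_grads = rng.normal(100, 15, 100_000)       # hypothetical high school graduates
college_grads = rng.normal(115, 13, 100_000)  # hypothetical college graduates

print(median_overlap(hs_grads, college_grads))  # small fraction of HS grads clear the college median
print(median_overlap(hs_grads, hs_grads))       # identical groups -> ~0.5, as stated above
```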

College graduates are not representative of the population. If most of your social circle consists (or will consist) of college graduates then your social circle is smarter than the population mean.

The national percentage of 18-year-olds with the ability to get a score of 700 or above on the SAT-Verbal test is in the vicinity of one in three hundred. Think about the consequences when about half of these students are going to universities in which 17 percent of their classmates also had SAT-Vs in the 700s and another 48 percent had scores in the 600s. It is difficult to exaggerate how different the elite college population is from the population at large—first in its level of intellectual talent, and correlatively in its outlook on society, politics, ethics, religion, and all the other domains in which intellectuals, especially intellectuals concentrated into communities, tend to develop their own conventional wisdoms.

Occupational Stratification

You can arrange jobs by their relative status. Job status tends to run in families. This could be because of social forces or it could be because of heritable g. We can test which hypothesis is true via an adoptive twin study. A study in Denmark tracked several hundred men and women adopted before they were one year old. "In adulthood, they were compared with both their biological siblings and their adoptive siblings, the idea being to see whether common genes or common home life determined where they landed on the occupational ladder. The biologically related siblings resembled each other in job status, even though they grew up in different homes. And among them, the full siblings had more similar job status than the half siblings. Meanwhile, adoptive siblings were not significantly correlated with each other in job status."

High-status jobs have become much more cognitively demanding over the last hundred years. Charles Murray uses a bunch of data to prove this. I'll skip over his data because the claim is so obvious to anyone living in the Internet age. Even being a marketer is complicated these days.

Credentialism is a real thing. Could it be that IQ causes education which causes high status jobs but cognitive ability doesn't actually increase job performance? Or does sheer intellectual horsepower have market value? We have data to answer this question.

The most comprehensive modern surveys of the use of tests for hiring, promotion, and licensing, in civilian, military, private, and government occupations, repeatedly point to three conclusions about worker performance, as follows.

  1. Job training and job performance in many common occupations are well predicted by any broadly based test of intelligence, as compared to narrower tests more specifically targeted to the routines of the job. As a corollary: Narrower tests that predict well do so largely because they happen themselves to be correlated with tests of general cognitive ability.

  2. Mental tests predict job performance largely via their loading on g.

  3. The correlations between tested intelligence and job performance are higher than had been estimated prior to the 1980s. They are high enough to have economic consequences.

IQ tests frequently measure one's ability to solve abstract puzzles. Programming interview algorithm puzzles are tests of a person's abstract problem-solving ability. I wonder how much of the predictive power of Google's algorithm interviews comes from the g factor. Some of it must. The question is: How much? If the answer is "a lot" then these tests could be a de facto workaround for the 1971 Supreme Court case Griggs v. Duke Power Co., which found that IQ-based hiring criteria constituted employment discrimination under disparate impact theory.

An applicant for a job as a mechanic should be judged on how well he does on a mechanical aptitude test while an applicant for a job as a clerk should be judged on tests measuring clerical skills, and so forth. So decreed the Supreme Court, and why not? In addition to the expert testimony before the Court favoring it, it seemed to make good common sense…. The problem is that common sense turned out to be wrong.

The best experiments compel lots of people to do things. The US military compels lots of people to do things. Thus, some of our best data on g's relationship to job performance comes from the military.

Enlisted Military Skill Category | Percentage of Training Success Explained by g | Percentage of Training Success Explained by Everything Else
--- | --- | ---
Nuclear weapons specialist | 77.3 | 0.8
Air crew operations specialist | 69.7 | 1.8
Weather specialist | 68.7 | 2.6
Intelligence specialist | 66.7 | 7.0
Fireman | 59.7 | 0.6
Dental assistant | 55.2 | 1.0
Security police | 53.6 | 1.4
Vehicle maintenance | 49.3 | 7.7
Maintenance | 28.4 | 2.7

"[T]he explanatory power of was almost thirty times greater than of all other cognitive factors in ASVAB combined." In addition, the importance of was stronger for more complicated tasks. Other military studies find similar results to this one.

There's no reason to believe civilian jobs are any less dependent on g than military jobs. For cognitively demanding jobs like law, neurology and research in the hard sciences, we should expect the percentage of training success explained by g to be well over 70%. Similar results appear for civilian jobs.

If we measure civilian job performance instead of military training success we get a smaller (but still large) impact of g. Note that the measures below probably contain significant overlap. Part of college grades' predictive power comes from their being an imperfect measure of g.

Predictor | Validity Predicting Job Performance Ratings
--- | ---
Cognitive test score | .53
Biographical data | .37
Reference checks | .26
Education | .22
Interview | .14
College grades | .11
Interest | .10
Age | -.01

Charles Murray's data shows that a secretary or a dentist who is one standard deviation better than average is worth a 40% premium in salary. But such jobs undersell the impact of worker variation, because jobs with leverage have a disproportionate impact on society. Anyone who has worked in a highly technical field with leverage (as a software developer, scientist or business executive) knows that someone one standard deviation above average is worth much more than a 40% premium.

As technology advances, the number of highly technical jobs with leverage increases. This drives up the value of g, which increases income inequality.

Social Partitioning

We of the cognitive elite usually partition ourselves off into specialized neighborhoods. For example, I live in Seattle, one of the most software-heavy cities in the world. The headquarters of Microsoft and Amazon are here. You can barely throw a router without hitting a programmer. You'd expect the high schools to be full of technical volunteers. But that's only true in the rich neighborhoods. I, weirdly, live in a poor, dangerous[1] neighborhood where I volunteer as a coach for the local high school's robotics club. If I wasn't around there would be no engineers teaching or coaching at the high school. None of my friends live here. They all live in the rich, safe neighborhoods.

Heritability of Intelligence

The most modern study of identical twins reared in separate homes suggests a heritability for general intelligence of between .75 and .80, a value near the top of the range found in the contemporary technical literature. Other direct estimates use data on ordinary siblings who were raised apart or on parents and their adopted-away children. Usually the heritability estimates from such data are lower but rarely below .4.
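For identical twins reared apart, the correlation between the twins' IQs is itself (roughly) the heritability estimate, since such twins share all their genes and, ideally, none of their rearing environment. A toy simulation, plugging in the .75 figure from the passage above as an assumption:

```python
import numpy as np

rng = np.random.default_rng(2)
n_pairs = 50_000
h2 = 0.75  # assumed heritability, taken from the estimate quoted above

genes = rng.normal(size=n_pairs)  # genetic component shared by both twins in a pair
twin_a = np.sqrt(h2) * genes + np.sqrt(1 - h2) * rng.normal(size=n_pairs)
twin_b = np.sqrt(h2) * genes + np.sqrt(1 - h2) * rng.normal(size=n_pairs)  # separate homes -> independent noise

print(np.corrcoef(twin_a, twin_b)[0, 1])  # recovers ~0.75
```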

The heritability of intelligence combines with cognitive stratification to increase IQ variance. The average husband-wife IQ correlation is between .2 and .6. Whatever the number used to be, I expect it has increased in the 27 years since The Bell Curve was published. Technically-speaking, elite graduates have always married each other. However, the concentration of cognitive ability among elites increases the genetic impact of this phenomenon.
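A rough illustration of the variance mechanism (all numbers are assumptions for the sketch, not figures from the book): the more correlated spouses' IQs are, the more spread out the couples' average IQs become, and heritability passes part of that extra spread to the next generation.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
husbands = rng.normal(100, 15, n)

def wives_with_correlation(r):
    """Wives' IQs with spousal correlation r and the same marginal distribution."""
    return 100 + r * (husbands - 100) + np.sqrt(1 - r**2) * rng.normal(0, 15, n)

for r in (0.0, 0.2, 0.6):
    midparent = (husbands + wives_with_correlation(r)) / 2
    print(r, round(midparent.std(), 1))  # the spread of couples' average IQ grows with r
```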

Negative Effects of Low Intelligence

All the graphs in this section control for race by including only white people.

Poverty

Is poverty caused by IQ or by one's parents' social class? What would you bet money that the answer is?

Parental socioeconomic status matters but its impact is small compared to IQ.

The black lines intersect at an IQ of 130. I think that once you pass a high enough threshold of intelligence, school stops mattering because you can teach yourself things faster than schools can teach you. Credentials don't matter either because exceptional people are wasted in cookie-cutter roles.

High School Graduation

There was no IQ gap between high school dropouts and graduates in the first half of the 20th century, before graduating high school became the norm. After high school became the norm, dropouts became low IQ.

IQ | Percentage of Whites Who Did Not Graduate or Pass a High School Equivalency Exam
--- | ---
>125 | 0
110-125 | 0 (actually 0.4)
90-110 | 6
75-90 | 35
<75 | 55

In this case, IQ is even more predictive than parental socioeconomic status. However, for temporary dropouts, socioeconomic status matters a lot. (In terms of life outcomes, youths with a GED look more like dropouts than high school graduates.)

The image I (and Charles Murray) get is of dumb rich kids who get therapists, private tutors, special schools—the works. High school is easier to hack than college (and work, as we'll get to later). The following graph is, once again, white youths only.

Labor Force Participation

Being smart causes you to work more. Being born rich causes you to work less.

Being smart reduces the likelihood of a work-inhibiting disability.

IQ | No. of White Males per 1,000 Who Reported Being Prevented from Working by Health Problems | No. of White Males per 1,000 Who Reported Limits in Amount or Kind of Work by Health Problems
--- | --- | ---
>125 | 0 | 13
110-125 | 5 | 21
90-110 | 5 | 37
75-90 | 36 | 45
<75 | 78 | 62

Lower-intelligence jobs tend to involve physical objects which can injure you. However, this fails to account for the whole situation. "[G]iven that both men have blue-collar jobs, the man with an IQ of 85 has double the probability of a work disability of a man with an IQ of 115…the finding seems to be robust." It could be that dumb people are more likely to injure themselves, or that they misrepresent their reasons for not working, or both.

Technically, unemployment is different from being out of the labor force. But the unemployment data tell the same story: being smart was negatively correlated with being unemployed in 1989.

Parental socioeconomic status had no measurable effect on unemployment. All that money spent on buying a high school diploma does not transfer to increased employment status. The following graph is of white men.

Family

Young white women with lower IQ are much more likely to give birth to an illegitimate baby in absolute terms and relative to legitimate births. How much more?

IQ | Percentage of Young White Women Who Have Given Birth to an Illegitimate Baby | Percentage of Births that are Illegitimate
--- | --- | ---
>125 | 2 | 7
110-125 | 4 | 7
90-110 | 8 | 13
75-90 | 17 |
<75 | 32 | 42

Not only are children of mothers in the top quartile of intelligence…more likely to be born within marriage, they are more likely to have been conceived within marriage (no shotgun wedding).

As usual, IQ outweighs parental socioeconomic status. The following graph is for white women.

Remember that IQ correlates with socioeconomic status. "High socioeconomic status offered weak protection against illegitimacy once IQ had been taken into account."

Welfare Dependency

Charles Murray gives a bunch of graphs and charts about how IQ affects welfare dependency. I bet you can guess what kind of a relationship they show.

Parenting

A low IQ [of the mother] is a major risk factor [for a low birth weight baby], whereas the mother's socioeconomic background is irrelevant.

Surprisingly to me, the mother's age at the birth of the child did not affect her chances of giving birth to a low-birth-weight baby. Poverty didn't matter either. I suspect this is because America has a calorie surplus. I predict poverty was a very important factor in extremely poor pre-industrial societies.

A mother's socioeconomic background does have a large effect (independent of the mother's IQ) on her child's chances of spending the first years of its life in poverty. This isn't to say IQ doesn't matter. It's just the first result in our entire analysis where IQ doesn't dominate all other factors.

A mother's IQ does have a big impact on the quality of her children's home life.

IQ | Percentage of Children Growing Up in Homes in the Bottom Decile of the HOME Index
--- | ---
>125 | 0
110-125 | 2
90-110 | 6
75-90 | 11
<75 | 24

The children of mothers with low IQs have worse temperaments (more difficulty and less friendliness), worse motor & social development and more behavior problems. (There's a bump in some worse outcomes for the smartest mothers, but this might just be an artifact of the small sample size.) The mother's socioeconomic background has a large effect on children's development problems, though not quite as large as the mother's IQ.

If you want smart kids then a smart mother is way more important than the mother's socioeconomic background. By now, this should come as no surprise.

Crime

High IQ correlates with not getting involved with the criminal justice system. Move along.

Ethnicity and Cognition

Different ethnic groups vary on cognitive ability.

Jews—specifically, Ashkenazi Jews of European origins—test higher than any other ethnic group…. These tests results [sic] are matched by analyses of occupational and scientific attainment by Jews, which consistently show their disproportionate level of success, usually by orders of magnitude, in various inventories of scientific and artistic achievement.

"Do Asians Have Higher IQs than Whites? Probably yes, if Asian refers to the Japanese and Chinese (and perhaps also Koreans), whom we will refer here as East Asians." Definitely yes if "Asian" refers to Chinese-Americans. This can be entirely explained by US immigration policy. It is hard to get into the USA if you are an East Asian. The United States has discriminated against Asian immigrants for most of its history and continues to do so. The United States is a desirable place to life. If you're an Asian and you want to get into the US then it helps to be smart. If would be weird if Asian-Americans weren't smarter than other immigrants. ("Other immigrants" includes all non-Asian, non-Native Americans.) Since intelligence is significantly heritable and people tend to intermarry among our own ethnic groups (often because the alternative was illegal[2]), a founder effect can be expected to persist across the handful of generations the United States has existed for.

The Bell Curve is mostly about America. It's disconcerting to me when he suddenly compares American students to students from Japan and Hong Kong. When he says "black" he uses a sample of African-Americans (and not Africa-Africans), but when he says "Japanese" he uses a sample of Japan-Japanese (and not Japanese-Americans). When he says "Jews" he includes the whole global diaspora and not (I presume) Latino converts.

I think Charles Murray fails to realize that Asian-Americans are such a biased sample of Asians that the two must be separated when you're studying g. Fortunately, Asia-Asians are not a critical pillar of Murray's argument. Charles Murray tends to bucket Americans into black and white and sometimes Latino.

Black and White Americans

These differences are statistical. They apply to populations.

People frequently complain of IQ tests being biased. It is possible to determine whether a test is biased.

"If the SAT is biased against blacks, it will underpredict their college performance. If tests were biased in this way, blacks as a group would do better in college than the admissions office expected based on just their SATs." In either case "[a] test biased against blacks does not predict black performance in the real world in the same way that it predicts white performance in the real world. The evidence of bias is external in the sense that it shows up in differing validities for blacks and whites. External evidence of bias has been sought in hundreds of studies. It has been evaluated relative to performance in elementary school, in secondary school, in the university, in the armed forces, in unskilled and skilled jobs, in the professions. Overwhelmingly, the evidence is that the major standardized tests used to help make school and job decisions do not underpredict black performance, nor does the expert community find that other general or systematic difference in the predictive accuracy of tests for blacks and whites."

IQ tests often involve language. A smart Russia-Russian genius who does not speak English would fail an IQ test given in English. "For groups that have special language considerations—Latinos and American Indians, for example—some internal evidence of bias has been found, unless English is their native language." Native language is not an issue for African-Americans because African-Americans are native English speakers.

What about cultural knowledge? "The [black-white] difference is wider on items that appear to be culturally neutral than on items that appear to be culturally loaded. We italicise this point because it is both so well established empirically yet comes as such a surprise to most people who are new to this topic."

What about test-taking ability and motivation? We can test whether testing itself is behind a black-white difference by comparing standard IQ tests to tests of memorizing digits. Reciting digits backwards requires about twice as much g as reciting them forwards. This experiment controls for test-taking ability and motivation because the forward and backward recitations are given under identical conditions. The black-white difference is about twice as great for reciting digits backwards as it is for reciting digits forwards.

Reaction time correlates strongly with g but movement time is less correlated. Whites consistently beat blacks on reaction-time tests even though black movement time is faster than white movement time.

Any explanation of the black-white IQ difference based on culture and society must explain the IQ difference, the number recitation difference, the reaction time difference, the movement time similarity, and the difference in every cognitive measure of performance and achievement.

Lead in the water or epigenetic effects of slavery would constitute such an explanation. Such explanations would throw into doubt whether the difference is genetic but would also prove biological determinism.

What about socioeconomic status? The size of the black-white IQ gap shrinks when socioeconomic status is controlled for. However, socioeconomic status is at least partially a result of cognitive ability. "In terms of the numbers, a reasonable rule of thumb is that controlling for socioeconomic status reduces the overall B/W difference by about a third."

We can test for whether socioeconomic status causes the IQ difference by comparing blacks and whites of equal socioeconomic status. If the black-white IQ difference was caused by socioeconomic status then blacks and whites of equal socioeconomic status would have similar IQs. This is not what we observe.

It might be that the black-white difference comes from a mix of socioeconomic status plus systemic racism.

Africa

Charles Murray's analysis of Africa-Africans bothers me for the same reason his analysis of Asians bothers me. In this case, he assumes African-Americans are representative of Africa-Africans. For instance, he discusses how difficult it is "to assemble data on the average African black" even though African-Americans are mostly from West Africa. Given pre-historical human migration patterns, it is my understanding that West Africans are more genetically distant from East Africans than White people are from Asians. If I am right about Africa-African diversity then Africa-Africans are too broad of a reference class. He should be comparing African-Americans to West Africans[3].

Charles Murray believes scholars are reluctant to discuss Africa-African IQ scores because they are so low. I think he means to imply that African-African and African-American IQs are genetically connected. I think such a juxtaposition undersells the Flynn Effect. Industrialization improves the kind of abstract reasoning measured by IQ tests. Fluid and crystallized intelligence have both increased in the rich world in the decades following WWII. The increase happened too fast for it to be because of evolution. It might be due to better health or it could be because our environment is more conducive to abstract thought. I suspect the Flynn Effect comes from a mix of both. The United States and Africa are on opposite ends of the prosperity spectrum. Charles Murray is careful to write "ethnicity" instead of "race", but his classification system is closer to how I think about race than how I think about ethnicity. African-Americans and Africa-Africans are of the same race but different ethnicities.

African blacks are, on average, substantially below African-Americans in intelligence scores. Psychometrically, there is little reason to think that these results mean anything different about cognitive functioning than they mean in non-African populations. For our purposes, the main point is that the hypothesis about the special circumstances of American blacks depressing their test scores is not substantiated by the African data.

I disagree with Charles Murray's logic here. Suppose (in contradiction to first-order genetic pre-history) that Africa-Africans and the African diaspora were genetically homogeneous. A difference in IQ between African-Americans and Africa-Africans would imply that which society you live in substantially influences IQ. If America is segregated in a way that keeps African-Americans living in awful conditions then we would expect African-Americans' IQs to be depressed. Jim Crow laws were enforced until 1965. Martin Luther King Jr. was shot in 1968, a mere 26 years before the publication of The Bell Curve. Blacks and whites continue to be de facto racially segregated today in 2021. Even if racism had ended in 1965 (it didn't), 29 years is not enough time to completely erase the damage caused by centuries of slavery and Jim Crow.

Charles Murray does acknowledge the possible effect of systemic racism. "The legacy of historic racism may still be taking its toll on cognitive development, but we must allow the possibility that it has lessened, at least for new generations. This too might account for some narrowing of the black-white gap."

The black-white gap narrowed in the years leading up to the publication of The Bell Curve. This is exactly what we would expect to observe if IQ differences are caused by social conditions because racism has been decreasing over the decades.

Charles Murray acknowledges that rising standards of living increase the intelligence of the economically disadvantaged because improved nutrition, shelter and health care directly remove impediments to brain development. The biggest increase in black scores happened at the low end of the range. This is evidence that improved conditions of life improved IQ, because the lowest-hanging fruit hangs from the bottom end of the socioeconomic ladder.

How much is genetic?

Just because something is heritable does not mean the observed differences are genetic in origin. "This point is so basic, and so commonly misunderstood, that it deserves emphasis: That a trait is genetically transmitted in individuals does not mean that group differences in that trait are also genetic in origin." For example, being skinny can be caused by genetics or it can be caused by liposuction. The fact that one population is fat and another population is skinny does not mean that the difference was caused by genetics. It could just be that one group has better access to liposuction.

As demonstrated earlier, socioeconomic factors do not influence IQ much. For the black-white difference to be explained by social factors, those factors would have to exclude socioeconomic status.

Recall further that the B/W difference (in standardized units) is smallest at the lowest socioeconomic levels. Why, if the B/W difference is entirely environmental, should the advantage of the "white" environment compared to the "black" be greater among the better-off and better-educated blacks and whites? We have not been able to think of a plausible reason. An appeal to the effects of racism to explain ethnic differences also requires explaining why environments poisoned by discrimination and racism for some other groups—against the Chinese or the Jews in some regions of America, for example—have left them with higher scores than the national average.

One plausible reason is that Chinese-Americans and Jews value academic success more strongly than whites and blacks do. African-Americans' African culture was systematically destroyed by slavery. They never got the academic cultural package. We could test the cultural values hypothesis by examining what happens when Chinese or Jewish kids are raised by white families and vice versa. The Bell Curve doesn't have this particular data but it does have white-black data. An examination of 100 adopted children of black, white and mixed racial ancestry found that "[t]he bottom line is that the gap between the adopted children with two black parents and the adopted children with two white parents was seventeen points, in line with the B/W difference customarily observed. Whatever the environmental impact may have been, it cannot have been large." This is evidence against the cultural transmission hypothesis—at least when comparing blacks and whites. Several other studies "tipped toward some sort of mixed gene-environment explanation of the B/W difference without saying how much of the difference is genetic and how much environmental…. It seems highly likely to us that both genes and the environment have something to do with racial differences. What might the mix be? We are resolutely agnostic on that issue; as far as we can determine, the evidence does not yet justify an estimate…. In any case, you are not going to learn tomorrow that all the cognitive differences between races are 100 percent genetic in origin, because the scientific state of knowledge, unfinished as it is, already gives ample evidence that environment is part of the story."

For Japanese living in Asia, a 1987 review of the literature demonstrated without much question that the verbal-visuospatial difference persists even in examinations that have been thoroughly adapted to the Japanese language and, indeed, in tests developed by the Japanese themselves. A study of a small sample of Korean infants adopted into white families in Belgium found the familiar elevated visuospatial scores.

The study of Korean infants seems like the right way to answer this question. The only issue is the small sample size.

What's especially interesting to me, personally, is that "East Asians living overseas score about the same or slightly lower than whites on verbal IQ and substantially higher on visuospatial IQ." This suggests to me that the stereotype of white managers supervising Asian engineers might reflect an actual difference in abilities. (If anyone has updated evidence which contradicts this, please put it in the comments.)

"This finding has an echo in the United States, where Asian-American students abound in engineering, in medical schools, and in graduate programs in the sciences, but are scarce in laws schools and graduate programs in the humanities and the social sciences." I agree that unfamiliarity with the English and American culture is not a plausible explanation for relatively subpar Asian-American linguistic performance. Asian-Americans born in the United States are fluent English speakers. However, I offer an alternative explanation. It could be that engineering, medicine and the sciences are simply more meritocratic than law, the humanities and the social sciences.

Interestingly, "American Indians and Inuit similarly score higher visuospatial than verbally; their ancestors migrated to the Americas from East Asia hundreds of centuries ago. The verbal-visuospatial discrepancy goes deeper than linguistic background." This surprised me since the Inuit are descended form the Aleut who migrated to America around 10,000 years ago—well before East Asian civilization. It's not obvious to me what environmental pressures would encourage higher visuospatial ability for Arctic Native Americans compared to Europeans.

Charles Murray dismisses the hypothesis that East Asian culture improves East Asians' visuospatial abilities.

Why do visuospatial abilities develop more than verbal abilities in people of East Asian ancestry in Japan, Hong Kong, Taiwan, mainland China, and other Asian countries and in the United States and elsewhere, despite the differences among the cultures and languages in all of those countries? Any simple socioeconomic, cultural, or linguistic explanation is out of the question, given the diversity of living conditions, native languages, educational resources, and cultural practices experienced by Hong Kong Chinese, Japanese in Japan or the United States, Koreans in Korea or Belgium, and Inuit or American Indians.

I don't know what's going on with the Native Americans or exactly what "other Asian countries" includes (I'm betting it doesn't include Turks), but people from East Asia and the East Asian diaspora have cultures that consistently value book learning. Japan, Hong Kong, Taiwan and mainland China eat similar foods, write in similar ways, and (except, perhaps, for Korea) are all cultural descendants of the Tang Dynasty.

If Native Americans have high IQs and high IQ improves life outcomes then why aren't Native Americans overrepresented in the tech sector? I was so suspicious of the Native American connection that I looked up their IQ test scores. According to this website, Native American IQ is below average for the US and Canada. Native Americans seem to me like the odd ones out of this group. Sure, they might have relatively high visuospatial abilities compared to linguistic abilities. But Native American IQs are below East Asians'. I think Charles Murray is once again using too big of a bucket. East Asians and Native Americans should not be lumped together.

We are not so rash as to assert that the environment or the culture is wholly irrelevant to the development of verbal and visuospatial abilities, but the common genetic history of racial East Asians and their North American or European descendants, on the one hand, and the racial Europeans and their North American descendants, on the other, cannot plausibly be dismissed as irrelevant.

I think the common history of East Asians and Native Americans can (in this context[4]) be totally dismissed as irrelevant. Just look at alcohol tolerance. Native Americans were decimated when Europeans introduced alcohol. Meanwhile, East Asians have been drinking alcohol long enough to evolve the Asian flush. These populations have been separate for so long that one of them adapted to civilization in a way the other one didn't. Charles Murray proved that high visuospatial abilities help people rise to the top of a technically-advanced civilization. It would not surprise me if one group that has competed against itself inside the world's most technologically-advanced civilization for hundreds of generations had a higher visuospatial ability than another group which hasn't.

Race and Employment

Lots of (but not all) racial differences in life outcomes can be explained by controlling for IQ.

Projecting Demography

The higher the education, the fewer the babies.

Different immigrant populations have different IQs. Richard Lynn assigned "means of 105 to East Asians, 91 to Pacific populations, 84 to blacks, and 100 to whites. We assign 91 to Latinos. We know of no data for Middle East or South Asian populations that permit even a rough estimate." I like how the data here breaks Asians down into smaller groups. The average "works out to about 95," which seems like a bad omen, but immigrants tend to come from worse places than the United States. I expect the Flynn effect will bring their descendants' average up.

So what if the mean IQ is dropping by a point or two per generation? One reason to worry is that the drop may be enlarging ethnic differences in cognitive ability at a time when the nation badly needs narrowing differences. Another reason to worry is that when the mean shifts a little, the size of the tails of the distribution changes a lot.

While this makes sense on paper, we need to acknowledge a technical point about statistics. The Bell Curve is named after the Gaussian distribution. IQ follows a Gaussian distribution, but that doesn't necessarily reflect a natural phenomenon: IQ scores are mapped onto a Gaussian distribution by fiat. Charles Murray never proved that the underlying ability is actually Gaussian. Many real-world phenomena are long-tailed. (Though biological phenomena like height are often Gaussian.) It is a perfectly reasonable prior that small changes to the mean could result in large effects at the tails. Ashkenazi Jewish history suggests that small changes to the mean do cause a massive impact on the tails. But I don't think the evidence presented in The Bell Curve is adequate to prove that g is Gaussian distributed.
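For a true Gaussian, the tail-sensitivity claim is easy to check. (The 3-point shift below is an arbitrary illustration, not a figure from the book.)

```python
from statistics import NormalDist

for mean in (100, 97):  # hypothetical 3-point drop in the population mean
    dist = NormalDist(mu=mean, sigma=15)
    print(mean,
          round((1 - dist.cdf(130)) * 100, 2),  # percent above IQ 130
          round(dist.cdf(70) * 100, 2))         # percent below IQ 70
# A shift of one-fifth of a standard deviation cuts the share above 130 by nearly
# 40 percent and raises the share below 70 by more than half.
```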

Raising Cognitive Ability

Recent studies have uncovered other salient facts about the way IQ scores depend on genes. They have found, for example, that the more general the measure of intelligence—the closer it is to g—the higher is the heritability. Also, the evidence seems to say that the heritability of IQ rises as one ages, all the way from early childhood to late adulthood…. Most of the traditional estimates of heritability have been based on youngsters, which means that they are likely to underestimate the role of genes later in life.

If better measures of g have higher heritability, that's a sign that the worse measures of g are easier to hack. If the heritability of IQ goes up as one ages, that suggests youth interventions are just gaming the metrics—especially when youth interventions frequently produce only short-term increases in measured IQ.

Once a society has provided basic schooling and eliminated the obvious things like malnutrition and lead in the water, the best way to increase g will be eugenics. (I am bearish on AI parenting.) I am not advocating a return to the unscientific policies of the 20th century. Forcibly imposing eugenic policies is horrific and counterproductive. Rather, I predict that once good genetic editing technology is available, parents will voluntarily choose the best genes for their children. There will[5] come a day when not giving your kids the best genes will be seen by civilized people as backwards and reactionary. The shift in societal ethics will happen no later than two generations (forty years) after the genetic editing of human zygotes becomes safe and affordable.

Besides cherry-picking our descendants' genotypes, is there anything else we can do? Improved nutrition definitely increases cognitive ability, but there are diminishing returns. Once you have adequate nutrition, getting more adequate nutrition doesn't do anything.

Having school (versus no school) does raise IQ. Thus, "some of the Flynn effect around the world is explained by the upward equalization of schooling, but a by-product is that schooling in and of itself no longer predicts adult intelligence as strongly…. The more uniform a country's schooling is, the more correlated the adult IQ is with childhood IQ." Increasing access to schooling increases the strength of natural differences on IQ because when you eliminate societally-imposed inequality, all that's left is natural variation.

A whole bunch of programs purport to increase IQ but none of them show a significant long-term effect after many years. It seems to me like they're just gaming the short-term metrics. "An inexpensive, reliable method of raising IQ is not available."

Affirmative Action

College Affirmative Action

I'm not going to dive deep into Charles Murray's thoughts on affirmative action because they're incontrovertible. Affirmative action in college admissions prioritizes affluent blacks over disadvantaged whites. It's also anti-Asian.

The edge given to minority applicants to college and graduate school is not a nod in their favor in the case of a close call but an extremely large advantage that puts black and Latino candidates in a separate admissions competition. On elite campuses, the average black freshman is in the region of the 10th to 15th percentile of the distribution of cognitive ability among white freshmen. Nationwide, the gap seems to be at least that large, perhaps larger. The gap does not diminish in graduate school. If anything, it may be larger.

In the world of college admissions, Asians are a conspicuously unprotected minority. At the elite schools, they suffer a modest penalty, with the average Asian freshman being at about the 60th percentile of the white cognitive ability distribution. Our data from state universities are too sparse to draw conclusions. In all of the available cases, the difference between white and Asian distributions is small (either plus or minus) compared to the large differences separating blacks and Latinos from whites.

The edge given to minority candidates could be more easily defended if the competition were between disadvantaged minority youths and privileged white youths. But nearly as large a cognitive difference separates disadvantaged black freshmen from disadvantaged white freshmen. Still more difficult to defend, blacks from affluent socioeconomic backgrounds are given a substantial edge over disadvantaged whites.

Racist admissions harm smart blacks and Latinos.

In society at large, a college degree does not have the same meaning for a minority graduate and a white one, with consequences that reverberate in the workplace and continue throughout life.

Workplace Affirmative Action

[A]fter controlling for IQ, it is hard to demonstrate that the United States still suffers from a major problem of racial discrimination in occupations and pay.

Conclusion

Charles Murray ends with a chapter on where we're going, which he later followed up with an entire book on class stratification among white Americans.

What worries us first about the emerging cognitive elite is its coalescence into a class that views American society increasingly through a lens of its own.

The problem is not simply that smart people rise to the top more efficiently these days. If the only quality that CEOs of major corporations and movie directors and the White House inner circle had in common were their raw intelligence, things would not be so much different now than they have always been, for some degree the most successful have always been drawn disproportionally from the most intelligent. But the invisible migration of the twentieth century has done much more than let the most intellectually able succeed more easily. It has also segregated them and socialized them. The members of the cognitive elite are likely to have gone to the same kinds of schools, live in similar neighborhoods, go to the same kinds of theaters and restaurants, read the same magazines and newspapers, watch the same television programs, even drive the same makes of cars.

They also tend to be ignorant of the same things.


  1. I was robbed at gunpoint last weekend. ↩︎

  2. Interracial marriage was illegal in nearly every state before 1888. It remained illegal in 15 states all the way until 1967 when the laws were overturned by the Supreme Court ruling Loving v. Virginia. ↩︎

  3. Unless Charles Murray believes that forces outlined in Jared Diamond's Guns, Germs and Steel (which, ironically, was written in opposition to race-based heritable theories of achievement differences) caused Eurasians to evolve higher g than their African forebears. While writing this footnote, I realized that the hypothesis is worth considering. Rice-based peoples evolved alcohol intolerance. Indian, Iraqi, Chinese and Japanese men evolved small penises. Software advances faster than hardware. It would be weird if civilization didn't cause cognitive adaptations too. I want to predict that cognitive adaptations happen faster than physiological adaptations but I don't know how they can be compared. ↩︎

  4. In Jared Diamond's Guns, Germs, and Steel context, Native Americans' sister relationship to East Asians does matter. ↩︎

  5. As usual, this prediction is conditional on neither the singularity nor a civilizational collapse occurring. ↩︎

134 comments

Comments sorted by top scores.

comment by JenniferRM · 2021-11-02T17:01:39.313Z · LW(p) · GW(p)

I read the book years ago "to find out what all the fuss was about" and I was surprised to find that the book was only about white America for the most part.

After thinking about it, my opinion was that Murray should have left out the one chapter about race (because that discussion consumed all the oxygen and also) because the thing I was very surprised by, and which seemed like a big deal, and which potentially was something that could be changed via policy, and thus probably deserved most of the oxygen, was the story where:

the invisible migration of the twentieth century has done much more than let the most intellectually able succeed more easily. It has also segregated them and socialized them.

The story I remember from the book was that college entrance processes had become a sieve that retained high IQ whites while letting low IQ whites pass through and fall away.

Then there are low-IQ societies where an underclass lives with nearly no opportunity to see role models doing similar things in substantially more clever ways.

My memory is that the book focused quite a bit on how this is not how college used to work in the 1930s or so, and that it was at least partly consciously set up through the adoption of standardized testing like the SAT and ACT as a filter to use government subsidies to pull people out of their communities and get them into college in the period from 1950 to 1980 or so.

Prior to this, the primary determinant of college entry was parental SES and wealth, as the "economic winners with mediocre children" tried to pass on their social position to their children via every possible hack they could cleverly think to try. 

(At a local personal level, I admire this cleverness, put in service first to their own children, where it properly belongs, but I worry that they had no theory of a larger society, and how that might be structured for the common good, and I fear that their local virtue was tragically and naively magnified into the larger structure of society to create a systematic class benefit that is not healthy for the whole of society, or even for the "so-called winners" of the dynamic...)

My memory of Murray's book is that he pointed out that if you go back to the 1940s, and look at the IQ distribution of the average carpenter, you'd find a genius here or there, and these were often the carpenters who the other carpenters looked up to, and learned carpentry tricks from... 

...but now, since nearly all people have taken the SAT or ACT in high school, the smart "potential carpenters" all get snatched up and taken away from the community that raised them. This levels out the carpentry IQ distribution, by putting a ceiling on it, basically.

If you think about US immigration policy, "causing brain drain from other countries" is a built in part of the design. (Hence student visas for example.) 

If this is helpful for the US, then it seems reasonable that it would be harmful to the other countries...

...but international policy is plausibly a domain where altruism should have less sway, unless mechanisms to ensure reciprocity exist... 

...and once you have mechanistically ensured reciprocity are you even actually in different legal countries anymore? It is almost a tautology then, that "altruism 'should' be less of a factor for an agent in a domain where reciprocal altruism can't be enforced".

So while I can see how "brain drain vs other countries" makes some sense as a semi-properly-selfish foreign policy (until treaties equalize things perhaps) it also makes sense to me that enacting a policy of subsidized brain drain on "normal america" by "the parts of america in small urban bubbles proximate to competitive universities" seems like... sociologically bad? 

So it could be that domestic brain drain is maybe kind of evil? Also, if it is evil then the beneficiaries might have some incentives to try to deny it is happening by denying that IQ is even real?

Then it becomes interesting to notice that domestic neighborhood level brain drain could potentially be stopped by changing laws.

I think Murray never called for this because there wasn't strong data to back it up, but following the logic to the maximally likely model of the world, then thinking about how to get "more of what most people in America" (probably) want based on that model (like a live player would)... 

...the thing I found myself believing at the end of the book is that The Invisible Migration Should Be Stopped

The natural way to do this doesn't even seem that complicated, and it might even work, and all it seems like you'd have to do is: 
(1) make it illegal for universities to use IQ tests (so they can go back to being a way for abnormally successful rich parents to try to transmit their personal success to their mediocre children who have regressed to the mean) but 
(2) make it legal for businesses to use IQ tests directly, and maybe even 
(3) tax businesses for hogging up all the smart people, if they try to brain drain into their own firm?

If smart people were intentionally "spread around" (instead of "bunched up"), I think a lot fewer of them would be walking around worried about everything... I think they would feel less pinched and scared, and less "strongly competed with on all sides". 

Also, they might demand a more geographically even distribution of high quality government services?

And hopefully, over time, they would be less generally insane, because maybe the insanity comes from being forced to into brutal "stacked ranking" competition with so many other geniuses, so that their oligarchic survival depends on inventing fake (and thus arbitrarily controllable) reasons to fire co-workers?

Then... if this worked... maybe they would be more able to relax and focus on teaching and play? 

And I think this would be good for the more normal people who (if the smarties were more spread out) would have better role models for the propagation of a more adaptively functional culture throughout society.

Relevantly, as a tendency-demonstrating exceptional case, presumably caused by unusual local factors:

I, weirdly, live in a poor, dangerous[1] neighborhood where I volunteer as a coach for the local high school's robotics club. If I wasn't around there would be no engineers teaching or coaching at the high school.

Nice! I admire your willingness and capacity to help others who are local to you <3

[1] I was robbed at gunpoint last weekend.

You have my sympathy. I hope you are personally OK. Also, I hope, for the sake of that whole neighborhood, that the criminal is swiftly captured and justly punished. I fear there is little I can do to help you or your neighborhood from my own distant location, but if you think of something, please let me know.

Replies from: cousin_it, SaidAchmiz, lsusr
comment by cousin_it · 2021-11-03T07:59:20.358Z · LW(p) · GW(p)

(3) tax businesses for hogging up all the smart people, if they try to brain drain into their own firm?

Due to tax incidence, that's the same as taxing smart people for getting together. I don't like that for two reasons. First, people should be free to get together. Second, the freedom of smart people to get together could be responsible for large economic gains, so we should be careful about messing with it.

comment by lsusr · 2021-11-05T01:25:27.664Z · LW(p) · GW(p)

You have my sympathy. I hope you are personally OK. Also, I hope, for the sake of that whole neighborhood, that the criminal is swiftly captured and justly punished. I fear there is little I can do to help you or your neighborhood from my own distant location, but if you think of something, please let me know.

I'm totally unharmed. I didn't even lose my phone. There is absolutely nothing you can do, but I appreciate the offer and the well wishes.

Replies from: JenniferRM
comment by JenniferRM · 2021-11-05T16:50:41.392Z · LW(p) · GW(p)

I'm glad you are unharmed and that my well wishes were welcome :-)

comment by Zack_M_Davis · 2021-11-03T02:48:55.693Z · LW(p) · GW(p)

What's with the neglect of Richard J. Herrnstein?! His name actually comes first on the cover!

Replies from: lsusr
comment by lsusr · 2021-11-05T01:29:38.431Z · LW(p) · GW(p)

In retrospect, I wish I had titled this [Book Review] "The Bell Curve" by Richard Herrnstein instead. That would have been funny.

I have read two other books by Charles Murray and zero other books by Richard Herrnstein. In my head, I think of all of them as "Charles Murray books", which is unfair to Richard Herrnstein.

Replies from: Benito
comment by Ben Pace (Benito) · 2021-11-05T02:13:34.847Z · LW(p) · GW(p)

+1 it would have been funny, especially if you'd opened by lampshading it.

comment by PeterMcCluskey · 2021-11-03T19:59:09.941Z · LW(p) · GW(p)

I mostly agree with this review, but it endorses some rather poor parts of the book.

Properly administered IQ tests are not demonstrably biased against social, economic, ethnic, or racial groups. ... Charles Murray doesn’t bother proving the above points. These facts are well established among scientists.

Cultural neutrality is not well established. The Bell Curve's claims here ought to be rephrased as something more like "the cultural biases of IQ tests are equivalent to the biases that 20th century academia promoted". I've written about this here [LW · GW] and here [LW(p) · GW(p)].

the gap between the adopted children with two black parents and the adopted children with two white parents was seventeen points, in line with the B/W difference customarily observed. Whatever the environmental impact may have been, it cannot have been large.

This seems to assume that parental impact constitutes most of environmental impact. Books such as The Nurture Assumption and WEIRDest People have convinced me that this assumption is way off. The Bell Curve has a section on malparenting that seemed plausible to me at the time it was written, but which now looks pretty misguided (in much the same way as mainstream social science was/is misguided).

comment by Jay · 2021-11-03T10:57:53.070Z · LW(p) · GW(p)

Just for context, I'd like to point out that the SAT has been revised and renormed since 1994 (twice IIRC).  Current test scores are not straightforwardly comparable to the scores discussed in the book and in the post.

comment by Rafael Harth (sil-ver) · 2021-11-02T20:59:16.792Z · LW(p) · GW(p)

Imagine a world where having [a post mentioning the bell curve] visible on the frontpage runs a risk of destroying a lot of value. This could be through any number of mechanisms like

  • The site is discussed somewhere, someone claims that it's a home for racism and points to this post as evidence. [Someone who in another universe would have become a valuable contributor to LW] sees this (but doesn't read the post) and decides not to check LW out.
  • A woke and EA-aligned person gets wind of it and henceforth thinks all x-risk related causes are unworthy of support
  • Someone links the article from somewhere, it gets posted on far right reddit board, a bunch of people make accounts on LessWrong to make dumb comments, someone from the NYT sees it and writes a hit piece. By this time all of the dumb comments are downvoted into invisibility (and none of them ever had high karma to begin with), but the NYT reporter just deals with this by writing that the mods had to step in and censor the most outrageous comments or something.

Question: If you think this is not worth worrying about -- why? What do you know, and how do you think you know it? And in what way would a world-where-it-is-worth-worrying-about look different?

To avoid repeating arguments, there have been discussions similar to this before. Here are the arguments that I remember (I'm sure this is not exhaustive).

  • Pro: Not allowing posts poisons our epistemic discourse; 'let's be systematically correct about everything but X' is a significantly worse algorithm than 'let's be systematically correct', and this can have wide-ranging effects. (Zack_M_Davis [LW · GW] strongly argued for this point, e.g. here [LW(p) · GW(p)]; this is also on one of the posts where the discussion has happened before, in this case because I made a comment arguing the post shouldn't be on LW)

    • Contra: But we could take it offline. (Evan Hubinger [LW · GW], e.g., here [LW(p) · GW(p)])
    • Contra: I've thought about this a lot since the discussion happened, and I increasingly just don't buy that the negative effects are real. Especially not in this case, which seems more clear-cut than the dating post. The Bell Curve seems to be just about the single most controversial book in the world for a good chunk of people; just about any other book would be less of an issue. I assume the argument is that the harm from censorship is not proportional to the amount that is censored, but I don't understand the mechanism here. How does this hurt discourse?
  • Pro: LessWrong obviously isn't about this kind of stuff and anyone who takes an honest look at the site will notice that immediately. (Ben Pace [LW · GW] argued this here [LW(p) · GW(p)].) He also said that he's "pretty pro just fighting those fights, rather than giving in and letting people on the internet who use the representativeness heuristic to attack people decide what we get to talk about."

    • I'm unconvinced by this for the same reasons I was then. I agree with the claim, but I don't think assuming people are reasonable is realistic, and I don't understand why we should just fight those fights. Where's the cost-benefit calculation?
      • Follow the above links for arguments from Ben against the above.
  • Pro: LessWrong will get politicized anyway and we should start to practice. (Wei Dai [LW · GW], e.g. here [LW(p) · GW(p)])

    • This makes a lot more sense to me, but starting with a post on the Bell Curve is not the right way to do it. I would welcome some kind of actual plan for how this can be done from the moderators.

Until then, my position is that this post shouldn't be on LessWrong. I've strong-downvoted it and would ultra-strong-downvote it if I could. However, I do think I'm open to evidence to the contrary. I would much welcome some kind of cost-benefit calculation that concludes that this is a good idea. If it's worth doing, it's worth doing with made-up statistics. If I were to do such a calculation, it would get a bunch of negative numbers for things like what I mentioned at the top of this comment, and almost nothing positive, because the benefit of allowing this seems genuinely negligible to me.

Replies from: Ruby, AllAmericanBreakfast, Benito, steven0461, Ericf
comment by Ruby · 2021-11-03T03:46:06.505Z · LW(p) · GW(p)

In my capacity as moderator, I saw this post this morning and decided to leave it posted (albeit as Personal blog with reduced visibility). 

I think limiting the scope of what can be discussed is costly for our ability to think about the world and figure out what's true (a project that is overall essential to AGI outcomes, I believe), and therefore I want to minimize such limitations. That said, there are conversations that wouldn't be worth having on LessWrong, topics that I expect would attract attention that's just not worth it; those I would block. However, this post didn't feel like where I wanted to draw the line. Blocking this post feels like it would be cutting out too much for the sake of safety and giving the fear of adversaries too much control over us and our inquiries. I liked how this post gave me a great summary of controversial material, so that I now know what the backlash was in response to. I can imagine other posts where I feel differently (in fact, there was a recent post I told an author it might be better to leave off the site, though they missed my message and posted anyway, which ended up being fine).

It's not easy to articulate where I think the line is or why this post seemed on the left of it, but it was a deliberate judgment call. I appreciate others speaking up with their concerns and their own judgment calls. If anyone ever wants to bring these up with me directly (not to say that comment threads aren't okay), feel free to DM me or email me: ruby@lesswrong.com

To address something that was mentioned, I expect to change my response in the face of posting trends, if they seem fraught. There are a number of measures we could potentially take then.

Replies from: sil-ver, fourier
comment by Rafael Harth (sil-ver) · 2021-11-03T11:13:43.786Z · LW(p) · GW(p)

Thanks for being transparent. I'm very happy to see that I was wrong in saying no-one else is taking it seriously. (I didn't notice that the post wasn't on the frontpage, which I think proves that you did take it seriously.)

I think limiting the scope of what can be discussed is costly for our ability to think about the world and figure out what's true (a project that is overall essential to AGI outcomes, I believe) and therefore I want to minimize such limitations.

I don't understand this concern (which I classify as the same kind of thing voiced by Zack many times and AAB just a few comments up [LW(p) · GW(p)].) We've had a norm against discussing politics since before LessWrong 2.0, which doesn't seem to have had any noticeable negative effects on our ability to discuss other topics. I think what I'm advocating for is to extend this norm by a pretty moderate amount? Like, the set of interesting topics in politics seems to me to be much larger than the set of interesting [topics with the property that they risk significant backlash from people who are concerned about social justice]. (I do see how this post is useful, but The Bell Curve is literally in a class that contains a single element. There seem to be < 5 posts per year which I don't want to have on LW for these kinds of reasons, and most of them are less useful than this one.) My gears-level prediction for how much that would degrade discussion in other areas is basically zero, but at this point I must be missing something?

A difference I can see is that disallowing this post would be done explicitly out of fear of backlash, whereas the norm against politics exists because politics is the mind-killer, but I guess I don't see why that makes a difference (and doesn't the mind-killer argument extend to these kinds of topics anyway?)

It's not easy to articulate where I think the line is or why this post seemed on the left of it, but it was a deliberate judgment call. I appreciate others speaking up with their concerns and their own judgment calls. If anyone ever wants to bring these up with me directly (not to say that comment threads aren't okay), feel free to DM me or email me: ruby@lesswrong.com

I do think that if we order all posts by where they appear on this spectrum, I would put this farther to the right than any other post I remember, so we genuinely seem to differ in our judgment here.

I echo anon03 in that the title is extremely provocative, but minus the claim that this is only a descriptive statement. I think it's obviously intentionally provocative (though I will take this back if the author says otherwise), given that the author wrote this [LW · GW] four days ago

My favorite thing about living in the 21ˢᵗ century is that nobody can stop me from publishing whatever I want. [...] People tell me they're worried of being cancelled by woke culture. I think this is just a convenient excuse for laziness and cowardice. What are you afraid of saying? [...] Are you afraid to say that there are significant heritable intelligence disparities between ethnic groups? It's the obvious conclusion if you think critically about US immigration policy.

I think condemning TBC has become one of the most widely agreed-upon loyalty tests for many people who care about social justice. It seems clear to me that lsusr intended this post to have symbolic value, so that being provocative was an intended property. If their utility function had been to review this book because it's very useful while minimizing risk, a very effective way to do this would have been to exclude the name from the title.

Replies from: Vaniver, Ruby, Benito
comment by Vaniver · 2021-11-03T19:40:59.112Z · LW(p) · GW(p)

Elsewhere you write (and also ask to consolidate, so I'm responding here):

The main disagreement seems to come down to how much we would give up when disallowing posts like this. My gears model still says 'almost nothing' since all it would take is to extend the norm "let's not talk about politics" to "let's not talk about politics and extremely sensitive social-justice adjacent issues", and I feel like that would extend the set of interesting taboo topics by something like 10%.

I think I used to endorse a model like this much more than I do now. A particular thing that I found sort of radicalizing was the "sexual preference" moment, in which a phrase that I had personally used and wouldn't have associated with malice was overnight retconned to be a sign of bigotry, as far as I can tell primarily to score points during the nomination hearings for Amy Coney Barrett. (I don't know anything special about Barrett's legal decisions or whether or not she's a bigot; I also think that sexual orientation isn't a choice for basically anyone at the moment; I also don't think 'preference' implies that it was a choice, any more than my 'flavor preferences' are my choice instead of being an uncontrollable fact about me.)

Supposing we agree that the taboo only covers ~10% more topics in 2020, I'm not sure I expect it will only cover 10% more topics in 2025, or 2030, or so on? And so you need to make a pitch not just "this pays for itself now" but instead something like "this will pay for itself for the whole trajectory that we care about, or it will be obvious when we should change our policy and it no longer pays for itself."

Replies from: AllAmericanBreakfast, steven0461
comment by DirectedEvolution (AllAmericanBreakfast) · 2021-11-04T04:28:59.613Z · LW(p) · GW(p)

This is a helpful addendum. I didn't want to bust out the slippery slope argument because I didn't have clarity on the gears-level mechanism. But in this case, we seem to have a ratchet in which X is deemed newly offensive, and a lot of attention is focused on just this particular word or phrase X. Because "it's just this one word," resisting the offensive-ization is made to seem petty: wouldn't it be such a small thing to give up, in exchange for inflicting a whole lot less suffering on others?

Next week it'll be some other X though, and the only way this ends is if you can re-establish some sort of Schelling Fence of free discourse and resist any further calls to expand censorship, even if they're small and have good reasons to back them up.

I think someone who disagrees with me might say that what's in fact happening is an increase in knowledge and an improvement in culture, reflected in language. In the same way that I expect to routinely update my picture of the world when I read the newspaper, why shouldn't I expect to routinely update my language to reflect evolving cultural understandings of how to treat other people well?

My response to this objection would be that, in much the same way as phrases like "sexual preference" can be seen as offensive for their implications, or a book can be objected to for its symbolism, mild forms of censorship or "updates" in speech codes can provoke anxiety, induce fear, and restrain thought. This may not be their intention, but it is their effect, at least at times and in the present cultural climate.

So a standard of free discourse and a Schelling Fence against expansion of censorship is justified not (just) to avoid a slippery slope of ever-expanding censorship, or to attract people with certain needs or to establish a pipeline into certain roles or jobs. Its purpose is also to create a space in which we have declared that we will strive to be less timid, not just less wrong.

We might not always prioritize or succeed in that goal, but establishing that this is a space where we are giving ourselves permission to try is a feature of explicit anti-censorship norms.

Prioritizing freedom of thought and lessening timidity isn't always the right goal. Sometimes, inclusivity, warmth, and a sense of agreeableness and safety are the right way to organize certain spaces. Different cultural moments, or institutions, might need marginally more safe spaces. Sometimes, though, they need more risky spaces. My observation tells me that our culture is currently in need of marginally more risky spaces, even if the number of safe spaces remains the same. A way to protect LW's status as a risky space is to protect our anti-censorship norms, and sometimes to exercise our privilege to post risky material such as this post.

Replies from: steven0461, sil-ver
comment by steven0461 · 2021-11-04T05:45:19.286Z · LW(p) · GW(p)

My observation tells me that our culture is currently in need of marginally more risky spaces, even if the number of safe spaces remains the same.

Our culture is desperately in need of spaces that are correct about the most important technical issues, and insisting that the few such spaces that exist have to also become politically risky spaces jeopardizes their ability to function for no good reason given that the internet lets you build as many separate spaces as you want elsewhere.

Replies from: AllAmericanBreakfast, Vaniver
comment by DirectedEvolution (AllAmericanBreakfast) · 2021-11-04T05:59:38.218Z · LW(p) · GW(p)

I’m going to be a little nitpicky here. LW is not “becoming,” but rather already is a politically risky space, and has been for a long time. There are several good reasons, which I and others have discussed elsewhere here. They may not be persuasive to you, and that’s OK, but they do exist as reasons. Finally, the internet may let you build a separate forum elsewhere and try to attract participants, but that is a non-trivial ask.

My position is that accepting intellectual risk is part and parcel of creating an intellectual environment capable of maintaining the epistemic rigor that we both think is necessary.

It is you, and others here, who are advocating a change of the status quo to create a bigger wall between x-risk topics and political controversy. I think that this would harm the goal of preventing x-risk, on current margins, as I’ve argued elsewhere here. We both have our reasons, and I’ve written down the sort of evidence that would cause me to change my point of view.

Fortunately, I enjoy the privilege of being the winner by default in this contest, since the site’s current norms already accord with my beliefs and preferences. So I don’t feel the need to gather evidence to persuade you of my position, assuming you don’t find my arguments here compelling. However, if you do choose to make the effort to gather some of the evidence I’ve elsewhere outlined, I not only would eagerly read it, but would feel personally grateful to you for making the effort. I think those efforts would be valuable for the health of this website and also for mitigating X-risk. However, they would be time-consuming, effortful, and may not pay off in the end.

comment by Vaniver · 2021-11-04T15:41:39.462Z · LW(p) · GW(p)

Our culture is desperately in need of spaces that are correct about the most important technical issues

I also care a lot about this; I think there are three important things to track.

First is that people might have reputations to protect or purity to maintain, and so want to be careful about what they associate with. (This is one of the reasons behind the separate Alignment Forum URL; users who wouldn't want to post something to Less Wrong can post someplace classier.)

Second is that people might not be willing to pay costs to follow taboos. The more a space is politically safe, the less people like Robin Hanson will want to be there, because many of their ideas are easier to think of if you're not spending any of your attention on political safety.

Third is that the core topics you care about might, at some point, become political. (Certainly AI alignment was 'political' for many years before it became mainstream, and will become political again as soon as it stops becoming mainstream, or if it becomes partisan.)

The first is one of the reasons why LW isn't a free speech absolutist site, even tho with a fixed population of posters that would probably help us be more correct. But the second and third are why LW isn't a zero-risk space either.

Replies from: steven0461, steven0461
comment by steven0461 · 2021-11-04T21:22:41.319Z · LW(p) · GW(p)

Some more points I want to make:

  • I don't care about moderation decisions for this particular post, I'm just dismayed by how eager LessWrongers seem to be to rationalize shooting themselves in the foot, which is also my foot and humanity's foot, for the short term satisfaction of getting to think of themselves as aligned with the forces of truth in a falsely constructed dichotomy against the forces of falsehood.
  • On any sufficiently controversial subject, responsible members of groups with vulnerable reputations will censor themselves if they have sufficiently unpopular views, which makes discussions on sufficiently controversial subjects within such groups a sham. The rationalist community should oppose shams instead of encouraging them.
  • Whether political pressure leaks into technical subjects mostly depends on people's meta-level recognition that inferences subject to political pressure are unreliable, and hosting sham discussions makes this recognition harder.
  • The rationalist community should avoid causing people to think irrationally, and a very frequent type of irrational thinking (even among otherwise very smart people) is "this is on the same website as something offensive, so I'm not going to listen to it". "Let's keep putting important things on the same website as unimportant and offensive things until they learn" is not a strategy that I expect to work here.
  • It would be really nice to be able to stand up to left wing political entryism, and the only principled way to do this is to be very conscientious about standing up to right wing political entryism, where in this case "right wing" means any politics sufficiently offensive to the left wing, regardless of whether it thinks of itself as right wing.

I'm not as confident about these conclusions as it sounds, but my lack of confidence comes from seeing that people whose judgment I trust disagree, and it does not come from the arguments that have been given, which have not seemed to me to be good.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-11-04T23:29:10.058Z · LW(p) · GW(p)

It would be really nice to be able to stand up to left wing political entryism, and the only principled way to do this is to be very conscientious about standing up to right wing political entryism, where in this case “right wing” means any politics sufficiently offensive to the left wing, regardless of whether it thinks of itself as right wing.

"Stand up to X by not doing anything X would be offended by" is obviously an unworkable strategy, it's taking a negotiating stance that is maximally yielding in the ultimatum game, so should expect to receive as little surplus utility as possible in negotiation.

(Not doing anything X would be offended by is generally a strategy for working with X, not standing up to X; it could work if interests are aligned enough that it isn't necessary to demand much in negotiation. But given your concern about "entryism" that doesn't seem like the situation you think you're in.)

Replies from: gjm, steven0461
comment by gjm · 2021-11-05T00:17:33.438Z · LW(p) · GW(p)

steven0461 isn't proposing standing up to X by not doing things that would offend X.

He is proposing standing up to the right by not doing things that would offend the left, and standing up to the left by not doing things that would offend the right. Avoiding posts like the OP here is intended to be an example of the former, which (steven0461 suggests) has value not only for its own sake but also because it lets us also stand up to the left by avoiding things that offend the right, without being hypocrites.

(steven0461's comment seems to treat "standing up to left-wing political entryism" as a thing that's desirable for its own sake, and "standing up to right-wing political entryism" as something we regrettably have to do too in order to do the desirable thing without hypocrisy. This seems kinda strange to me because (1) standing up to all kinds of political entryism seems to me obviously desirable for its own sake, and because (2) if for some reason left-wing political entryism is fundamentally worse than right-wing political entryism then surely that makes it not necessarily hypocritical to take a stronger stand against the former than against the latter.)

Replies from: jessica.liu.taylor, steven0461
comment by jessicata (jessica.liu.taylor) · 2021-11-05T01:13:13.344Z · LW(p) · GW(p)

If someone proposes to do A by doing B, and B by doing C, they are proposing doing A by doing C. (Here A = "stand up to left wing entryism", B = "stand up to right wing entryism", C = "don't do things that left wing people are offended by")

EDIT: Also, the situation isn't symmetrical, since Steven is defining right-wing to mean things the left wing is offended by, and not vice versa. Hence it's clearly a strategy for submitting to the left, as it lets the left construct the left/right dichotomy.

Replies from: gjm
comment by gjm · 2021-11-05T01:22:03.940Z · LW(p) · GW(p)

I'm not sure there's a definite fact of the matter as to when something is "doing X by doing Y" in cases like this where it's indirect, but I think either we shouldn't use that language so broadly as to apply to such cases or it's not obvious that it's unworkable to "stand up to X by not doing things that offend X", since the obvious unworkability of that is (unless I'm misunderstanding your earlier comment) predicated on the idea that it's a sort of appeasement of X, rather than the sort of indirect thing we're actually talking about here.

Maybe I am also being too indirect. Regardless of whether there's some sense in which steven0461 is proposing to "stand up to X by not doing things that would offend X", he was unambiguously not proposing "a negotiating stance that is maximally yielding in the ultimatum game"; "not doing things that would offend X" in his comment is unambiguously not a move in any game being played with X at all. Your objection to what he wrote is just plain wrong, whether or not there is a technical sense in which he did say the thing that you objected to, because your argument against what he said was based on an understanding of it that is wrong whether or not that's so.

[EDITED to add:] As I mention in a grandchild comment, one thing in the paragraph above is badly garbled; I was trying to say something fairly complicated in too few words and ended up talking nonsense. It's not correct to say that "not doing things that would offend X" is not a move in any game being played with X. Rather, I claim that X in your original comment is standing in for two different albeit related Xs, who are involved in two different albeit related interactions ("games" if you like), and the two things you portray as inconsistent are not at all inconsistent because it's entirely possible (whether or not it's wise) to win one game while losing the other.

The game with "left-wing entryists" is one where they try to make LW a platform for left-wing propaganda. The game with "the left" is one where they try to stop LW being a platform for (what they regard as) right-wing propaganda. Steven proposes taking a firm stand against the former, and making a lot of concessions in the latter. These are not inconsistent; banning everything that smells of politics, whether wise or foolish overall, would do both of the things Steven proposes doing. He proposes making concessions to "the left" in the second game in order to resist "right-wing entryists" in the mirror-image of the first game. We might similarly make concessions to "the right" if they were complaining that LW is too leftist, by avoiding things that look to them like left-wing propaganda. I make no claims about whether any of these resistances and concessions are good strategy; I say only that they don't exhibit the sort of logical inconsistency you are accusing Steven of.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-11-05T01:45:56.346Z · LW(p) · GW(p)

The implied game is:

Step 1: The left decides what is offensively right-wing

Step 2: LW people decide what to say given this

Steven is proposing a policy for step 2 that doesn't do anything that the left has decided is offensively right-wing. This gives the left the ability to prevent arbitrary speech.

If the left is offended by negotiating for more than $1 in the ultimatum game, Steven's proposed policy would avoid doing that, thereby yielding. (The money here is metaphorical, representing benefits LW people could get by talking about things without being attacked by the left)

Replies from: steven0461, Benito, gjm
comment by steven0461 · 2021-11-05T05:50:06.518Z · LW(p) · GW(p)

I think an important cause of our disagreement is you model the relevant actors as rational strategic consequentialists trying to prevent certain kinds of speech, whereas I think they're at least as much like a Godzilla that reflexively rages in pain and flattens some buildings whenever he's presented with an idea that's noxious to him. You can keep irritating Godzilla until he learns that flattening buildings doesn't help him achieve his goals, but he'll flatten buildings anyway because that's just the kind of monster he is, and in this way, you and Godzilla can create arbitrary amounts of destruction together. And (to some extent) it's not like someone constructed a reflexively-acting Godzilla so they could control your behavior, either, which would make it possible to deter that person from making future Godzillas. Godzillas seem (to some extent) to arise spontaneously out of the social dynamics of large numbers of people with imperfect procedures for deciding what they believe and care about. So it's not clear to me that there's an alternative to just accepting the existence of Godzilla and learning as best as you can to work around him in those cases where working around him is cheap, especially if you have a building that's unusually important to keep intact. All this is aside from considerations of mercy to Godzilla or respect for Godzilla's opinions.

If I make some substitutions in your comment to illustrate this view of censorious forces as reflexive instead of strategic, it goes like this:

The implied game is:

Step 1: The bull decides what is offensively red

Step 2: LW people decide what cloths to wave given this

Steven is proposing a policy for step 2 that doesn't wave anything that the bull has decided is offensively red. This gives the bull the ability to prevent arbitrary cloth-waving.

If the bull is offended by negotiating for more than $1 in the ultimatum game, Steven's proposed policy would avoid doing that, thereby yielding. (The money here is metaphorical, representing benefits LW people could get by waving cloths without being gored by the bull)

I think "wave your cloths at home or in another field even if it's not as good" ends up looking clearly correct here, and if this model is partially true, then something more nuanced than an absolutist "don't give them an inch" approach is warranted.

edit: I should clarify that when I say Godzilla flattens buildings, I'm mostly not referring to personal harm to people with unpopular opinions, but to epistemic closure to whatever is associated with those people, which you can see in action every day on e.g. Twitter.

Replies from: Zack_M_Davis, SaidAchmiz
comment by Zack_M_Davis · 2021-11-05T07:01:17.794Z · LW(p) · GW(p)

The relevant actors aren't consciously being strategic about it, but I think their emotions are sensitive to whether the threat of being offended seems to be working. That's what the emotions are for, evolutionarily speaking. People are innately very good at this! When I babysit a friend's unruly 6-year-old child who doesn't want to put on her shoes, or talk to my mother who wishes I would call more often, or introspect on my own rage at the abject cowardice of so-called "rationalists", the functionality of emotions as a negotiating tactic is very clear to me, even if I don't have the same kind of deliberative control over my feelings as my speech (and the child and my mother don't even think of themselves as doing game theory at all).

(This in itself doesn't automatically negate your concerns, of course, but I think it's an important modeling consideration: animals like Godzilla may be less incentivizable than Homo economicus, but they're more like Homo economicus than a tornado or an avalanche.)

Replies from: steven0461, steven0461
comment by steven0461 · 2021-11-05T22:44:40.693Z · LW(p) · GW(p)

I think simplifying all this to a game with one setting and two players with human psychologies obscures a lot of what's actually going on. If you look at people of the sneer, it's not at all clear that saying offensive things thwarts their goals. They're pretty happy to see offensive things being said, because it gives them opportunities to define themselves against the offensive things and look like vigilant guardians against evil. Being less offensive, while paying other costs to avoid having beliefs be distorted by political pressure (e.g. taking it elsewhere, taking pains to remember that politically pressured inferences aren't reliable), arguably de-energizes such people more than it emboldens them.

Replies from: SaidAchmiz, steven0461
comment by Said Achmiz (SaidAchmiz) · 2021-11-05T23:22:00.763Z · LW(p) · GW(p)

This logic would fall down entirely if it turned out that “offensive things” isn’t a natural kind, or a pre-existing category of any sort, but is instead a label attached by the “people of the sneer” themselves to anything they happen to want to mock or vilify (which is always going to be something, since—as you say—said people in fact have a goal of mocking and/or vilifying things, in general).

Inconveniently, that is precisely what turns out to be the case…

Replies from: steven0461
comment by steven0461 · 2021-11-05T23:45:08.849Z · LW(p) · GW(p)

"Offensive things" isn't a category determined primarily by the interaction of LessWrong and people of the sneer. These groups exist in a wider society that they're signaling to. It sounds like your reasoning is "if we don't post about the Bell Curve, they'll just start taking offense to technological forecasting, and we'll be back where we started but with a more restricted topic space". But doing so would make the sneerers look stupid, because society, for better or worse, considers The Bell Curve to be offensive and does not consider technological forecasting to be offensive.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-11-06T00:22:34.221Z · LW(p) · GW(p)

But doing so would make the sneerers look stupid, because society, for better or worse, considers The Bell Curve to be offensive and does not consider technological forecasting to be offensive.

I’m sorry, but this is a fantasy. It may seem reasonable to you that the world should work like this, but it does not.

To suggest that “the sneerers” would “look stupid” is to posit someone—a relevant someone, who has the power to determine how people and things are treated, and what is acceptable, and what is beyond the pale—for them to “look stupid” to. But in fact “the sneerers” simply are “wider society”, for all practical purposes.

“Society” considers offensive whatever it is told to consider offensive. Today, that might not include “technological forecasting”. Tomorrow, you may wake up to find that’s changed. If you point out that what we do here wasn’t “offensive” yesterday, and so why should it be offensive today, and in any case, surely we’re not guilty of anything, are we, since it’s not like we could’ve known, yesterday, that our discussions here would suddenly become “offensive”… right? … well, I wouldn’t give two cents for your chances, in the court of public opinion (Twitter division). And if you try to protest that anyone who gets offended at technological forecasting is just stupid… then may God have mercy on your soul—because “the sneerers” surely won’t.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2021-11-06T01:34:10.023Z · LW(p) · GW(p)

But there are systemic reasons why Society gets told that hypotheses about genetically-mediated group differences are offensive, and mostly doesn't (yet?) get told that technological forecasting is offensive. (If some research says Ethnicity E has higher levels of negatively-perceived Trait T, then Ethnicity E people have an incentive to discredit the research independently of its truth value—and people who perceive themselves as being in a zero-sum conflict with Ethnicity E have an incentive to promote the research independently of its truth value.)

Steven and his coalition are betting that it's feasible to "hold the line" on only censoring the hypotheses that are closely tied to political incentives like this, without doing much damage to our collective ability to think about other aspects of the world. I don't think it works as well in practice as they think it does, due to the mechanisms described in "Entangled Truths, Contagious Lies" [LW · GW] and "Dark Side Epistemology" [LW · GW]—you make a seemingly harmless concession one day, and five years later, you end up claiming with perfect sincerity that dolphins are fish [LW · GW]—but I don't think it's right to dismiss the strategy as fantasy.

Replies from: steven0461
comment by steven0461 · 2021-11-06T20:16:36.821Z · LW(p) · GW(p)

due to the mechanisms described in "Entangled Truths, Contagious Lies" and "Dark Side Epistemology"

I'm not advocating lying. I'm advocating locally preferring to avoid subjects that force people to either lie or alienate people into preferring lies, or both. In the possible world where The Bell Curve is mostly true, not talking about it on LessWrong will not create a trail of false claims that have to be rationalized. It will create a trail of no claims. LessWrongers might fill their opinion vacuum with false claims from elsewhere, or with true claims, but either way, this is no different from what they already do about lots of subjects, and does not compromise anyone's epistemic integrity.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2021-11-06T21:52:09.879Z · LW(p) · GW(p)

I'm not advocating lying.

I understand that. I cited a Sequences post that has the word "lies" in the title, but I'm claiming that the mechanism described in the cited posts—that distortions on one topic can spread to both adjacent topics, and to people's understanding of what reasoning looks like—can apply more generally to distortions that aren't direct lies.

Omitting information can be a distortion when the information would otherwise be relevant. In "A Rational Argument" [LW · GW], Yudkowsky gives the example of an election campaign manager publishing survey responses from their candidate, but omitting one question which would make their candidate look bad, which Yudkowsky describes as "cross[ing] the line between rationality and rationalization" (!). This is a very high standard—but what made the Sequences so valuable, is that they taught people the counterintuitive idea that this standard exists. I think there's a lot of value in aspiring to hold one's public reasoning to that standard.

Not infinite value, of course! If I knew for a fact that Godzilla will destroy the world if I cite a book that I otherwise would have cited as genuinely relevant, then fine, for the sake of the world, I can not cite the book.

Maybe we just quantitatively disagree on how tough Godzilla is and how large the costs of distortions are? Maybe you're happy to throw Sargon of Akkad under the bus, but when Steve Hsu is getting thrown under the bus, I think that's a serious problem for the future of humanity. I think this is actually worth a fight.

With my own resources and my own name (and a pen name), I'm fighting. If someone else doesn't want to fight with their name and their resources, I'm happy to listen to suggestions for how people with different risk tolerances can cooperate to not step on each other's toes! In the case of the shared resource of this website, if the Frontpage/Personal distinction [LW · GW] isn't strong enough, then sure, "This is on our Banned Topics list; take it to /r/TheMotte, you guys" could be another point on the compromise curve. What I would hope for from the people playing the sneaky consequentialist image-management strategy, is that you guys would at least acknowledge that there is a conflict and that you've chosen a side.

might fill their opinion vacuum with false claims from elsewhere, or with true claims

For more on why I think not-making-false-claims is vastly too low of a standard to aim for, see "Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think" [LW · GW] and "Heads I Win, Tails?—Never Heard of Her" [LW · GW].

Replies from: steven0461
comment by steven0461 · 2021-11-06T23:52:14.866Z · LW(p) · GW(p)

Your posts seem to be about what happens if you filter out considerations that don't go your way. Obviously, yes, that way you can get distortion without saying anything false. But the proposal here is to avoid certain topics and be fully honest about which topics are being avoided. This doesn't create even a single bit of distortion. A blank canvas is not a distorted map. People can get their maps elsewhere, as they already do on many subjects, and as they will keep having to do regardless, simply because some filtering is inevitable beneath the eye of Sauron. (Distortions caused by misestimation of filtering are going to exist whether the filter has 40% strength or 30% strength. The way to minimize them is to focus on estimating correctly. A 100% strength filter is actually relatively easy to correctly estimate. And having the appearance of a forthright debate creates perverse incentives for people to distort their beliefs so they can have something inoffensive to be forthright about.)

The people going after Steve Hsu almost entirely don't care whether LW hosts Bell Curve reviews. If adjusting allowable topic space gets us 1 util and causes 2 utils of damage distributed evenly across 99 Sargons and one Steve Hsu, that's only 0.02 Hsu utils lost, which seems like a good trade.

I don't have a lot of verbal energy and find the "competing grandstanding walls of text" style of discussion draining, and I don't think the arguments I'm making are actually landing for some reason, and I'm on the verge of tapping out. Generating and posting an IM chat log could be a lot more productive. But people all seem pretty set in their opinions, so it could just be a waste of energy.

comment by steven0461 · 2021-11-05T22:54:17.790Z · LW(p) · GW(p)

Another way this matters: Offense takers largely get their intuitions about "will taking offense achieve my goals" from experience in a wide variety of settings and not from LessWrong specifically. Yes, theoretically, the optimal strategy is for them to estimate "will taking offense specifically against LessWrong achieve my goals", but most actors simply aren't paying enough attention to form a target-by-target estimate. Viewing this as a simple game theory textbook problem might lead you to think that adjusting our behavior to avoid punishment would lead to an equal number of future threats of punishment against us and is therefore pointless, when actually it would instead lead to future threats of punishment against some other entity that we shouldn't care much about, like, I don't know, fricking Sargon of Akkad.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2021-11-06T03:47:41.589Z · LW(p) · GW(p)

I agree that offense-takers are calibrated against Society-in-general, not particular targets.

As a less-political problem with similar structure, consider ransomware attacks. If an attacker encrypts your business's files and will sell you the encryption key for 10 Bitcoins, do you pay (in order to get your files back, as common sense and causal decision theory agree), or do you not-pay (as a galaxy-brained updateless-decision-theory play to timelessly make writing ransomware less profitable, even though that doesn't help the copy of you in this timeline)?

It's a tough call! If your business's files are sufficiently important, then I can definitely see why you'd want to pay! But if someone were to try to portray the act of paying as pro-social, that would be pretty weird. If your Society knew how, law-abiding citizens would prefer to coordinate not to pay attackers, which is why the U.S. Treasury Department is cracking down on facilitating ransomware payments. But if that's not an option ...

our behavior [...] punishment against us [...] some other entity that we shouldn't care much about

If coordinating to resist extortion isn't an option, that makes me very interested in trying to minimize the extent to which there is a collective "us". "We" should be emphasizing that rationality is a subject matter that anyone can study, rather than trying to get people to join our robot cult and be subject to the commands and PR concerns of our leaders. Hopefully that way, people playing a sneaky consequentialist image-management strategy and people playing a Just Get The Goddamned Right Answer strategy can at least avoid being at each other's throats fighting over who owns the "rationalist" brand name.

comment by steven0461 · 2021-11-05T22:22:43.017Z · LW(p) · GW(p)

My claim was:

if this model is partially true, then something more nuanced than an absolutist "don't give them an inch" approach is warranted

It's obvious to everyone in the discussion that the model is partially false and there's also a strategic component to people's emotions, so repeating this is not responsive.

comment by Said Achmiz (SaidAchmiz) · 2021-11-05T06:16:00.269Z · LW(p) · GW(p)

So it’s not clear to me that there’s an alternative to just accepting the existence of Godzilla and learning as best as you can to work around him in those cases where working around him is cheap, especially if you have a building that’s unusually important to keep intact.

But of course there’s an alternative. There’s a very obvious alternative, which also happens to be the obviously and only correct action:

Kill Godzilla.

comment by Ben Pace (Benito) · 2021-11-05T02:15:23.445Z · LW(p) · GW(p)

(Appreciate you spelling it out like this, the above is a clear articulation of one of the main perspectives I have on the situation.)

comment by gjm · 2021-11-05T04:27:02.379Z · LW(p) · GW(p)

It still appears to me that you are completely missing the point. I acknowledge that you are getting a lot of upvotes and I'm not, suggesting that other LW readers disagree with me. I think they are wrong, but outside view suggests caution.


I notice one thing I said that was not at all what I intended to say, so let me correct that before going further. I said

"not doing things that would offend X" in his comment is unambiguously not a move in any game being played with X at all.

but what I actually meant to say was

"standing up to X" in his comment is unambiguously not a move in any game being played with X at all.

[EDITED to add:] No, that also isn't quite right; my apologies; let me try again. What I actually mean is that "standing up to X" and "not doing things that would offend X" are events in two entirely separate games, and the latter is not a means to the former.

There are actually three separate interactions envisaged in Steven's comment, constituting (if you want to express this in game-theoretic terms) three separate games. (1) An interaction with left-wing entryists, where they try to turn LW into a platform for leftist propaganda. (2) An interaction with right-wing entryists, where they try to turn LW into a platform for rightist propaganda. (3) An interaction with leftists, who may or may not be entryists, where they try to stop LW being a platform for right-wing propaganda or claim that it is one. (There is also (4) an interaction with rightists, along the lines of #3, which I include for the sake of symmetry.)

Steven claims that in game 1 we should strongly resist the left-wing entryists, presumably by saying something like "no, LW is not a place for left-wing propaganda". He claims that in order to do this in a principled way we need also to say "LW is not a place for right-wing propaganda", thus also resisting the right-wing entryists in game 2. And he claims that in order to do this credibly we need to be reluctant to post things that might be, or that look like they are, right-wing propaganda, thus giving some ground to the leftists in game 3.

Game 1 and game 3 are entirely separate, and the same move could be a declaration of victory in one and a capitulation in the other. For instance, imposing a blanket ban on all discussion of politically sensitive topics on LW would be an immediate and total victory over entryists of both stripes in games 1 and 2, and something like a total capitulation to leftists and rightists alike in games 3 and 4.

So "not doing things that would offend leftists" is not a move in any game played with left-wing entryists; "standing up to left-wing entryists" is not a move in any game played with leftists complaining about right-wing content on LW; I was trying to say both of those and ended up talking nonsense. The above is what I actually meant.


I agree that steven0461 is saying (something like) that people writing LW articles should avoid saying things that outrage left-leaning readers, and that if you view what happens on LW as a negotiation with left-leaning readers then that proposal is not a strategy that gives you much leverage.

I don't agree that it makes any sense to say, as you did, that Steven's proposal involves "standing up to X by not saying anything that offends X", which is the specific thing you accused him of.

Your comment above elaborates on the thing I agree about, but doesn't address the reasons I've given for disagreeing with the thing I don't agree about. That may be partly because of the screwup on my part that I mention above.

I think the distinction is important, because the defensible accusation is of the form "Steven proposes giving too much veto power over LW to certain political groups", which is a disagreement about strategy, whereas the one you originally made is of the form "Steven proposes something blatantly self-contradictory", which is a disagreement about rationality, and around these parts accusations of being stupid or irrational are generally more serious than accusations of being unwise or on the wrong political team.

The above is my main objection to what you have been saying here, but I have others which I think worth airing:

It is not true that "don't do anything that the left considers offensively right-wing" gives the left "the ability to prevent arbitrary speech", at least not if it's interpreted with even the slightest bit of charity, because there are many many things one could say that no one will ever consider offensively right-wing. Of course it's possible in theory for any given group to start regarding any given thing as offensively right-wing, but I do not think it reasonable to read steven0461's proposal as saying that literally no degree of absurdity should make us reconsider the policy he proposes.

It is not true that Steven proposes to "not do anything that the left has decided is offensively right-wing". "Sufficiently offensive" was his actual wording. This doesn't rule out any specific thing, but again I think any but the most uncharitable reading indicates that he is not proposing a policy of the form "never post anything that anyone finds offensive" but one of the form "when posting something that might cause offence, consider whether its potential to offend is enough to outweigh the benefits of posting it". So, again, the proposal is not to give "the left" complete veto power over what is posted on LW.

I think it is unfortunate that most of what you've written rounds off Steven's references to "left/right-wing political entryism" to "the left/right". I do not know exactly where he draws the boundary between mere X-wing-ism and X-wing political entryism, but provided the distinction means something I think it is much more reasonable for LW to see "political entryism" of whatever stripe as an enemy to be stood up to, than for LW to see "the left" or "the right" as an enemy to be stood up to. The former is about not letting political groups co-opt LW for their political purposes. The latter is about declaring ourselves a political team and fighting opposing political teams.

comment by steven0461 · 2021-11-05T04:48:56.218Z · LW(p) · GW(p)

standing up to all kinds of political entryism seems to me obviously desirable for its own sake

I agree it's desirable for its own sake, but meant to give an additional argument why even those people who don't agree it's desirable for its own sake should be on board with it.

if for some reason left-wing political entryism is fundamentally worse than right-wing political entryism then surely that makes it not necessarily hypocritical to take a stronger stand against the former than against the latter

Not necessarily objectively hypocritical, but hypocritical in the eyes of a lot of relevant "neutral" observers.

comment by steven0461 · 2021-11-05T04:42:13.719Z · LW(p) · GW(p)

"Stand up to X by not doing anything X would be offended by" is not what I proposed. I was temporarily defining "right wing" as "the political side that the left wing is offended by" so I could refer to posts like the OP as "right wing" without setting off a debate about how actually the OP thinks of it more as centrist that's irrelevant to the point I was making, which is that "don't make LessWrong either about left wing politics or about right wing politics" is a pretty easy to understand criterion and that invoking this criterion to keep LW from being about left wing politics requires also keeping LessWrong from being about right wing politics. Using such a criterion on a society-wide basis might cause people to try to redefine "1+1=2" as right wing politics or something, but I'm advocating using it locally, in a place where we can take our notion of what is political and what is not political as given from outside by common sense and by dynamics in wider society (and use it as a Schelling point boundary for practical purposes without imagining that it consistently tracks what is good and bad to talk about). By advocating keeping certain content off one particular website, I am not advocating being "maximally yielding in an ultimatum game", because the relevant game also takes place in a whole universe outside this website (containing your mind, your conversations with other people, and lots of other websites) that you're free to use to adjust your degree of yielding. Nor does "standing up to political entryism" even imply standing up to offensive conclusions reached naturally in the course of thinking about ideas sought out for their importance rather than their offensiveness or their symbolic value in culture war.

comment by steven0461 · 2021-11-04T20:21:52.414Z · LW(p) · GW(p)

I agree that LW shouldn't be a zero-risk space, that some people will always hate us, and that this is unavoidable and only finitely bad. I'm not persuaded by reasons 2 and 3 from your comment at all in the particular case of whether people should talk about Murray. A norm of "don't bring up highly inflammatory topics unless they're crucial to the site's core interests" wouldn't stop Hanson from posting about ems, or grabby aliens, or farmers and foragers, or construal level theory, or Aumann's theorem, and anyway, having him post on his own blog works fine. AI alignment was never political remotely like how the Bell Curve is political. (I guess some conceptual precursors came from libertarian email lists in the 90s?) If AI alignment becomes very political (e.g. because people talk about it side by side with Bell Curve reviews), we can invoke the "crucial to the site's core interests" thing and keep discussing it anyway, ideally taking some care to avoid making people be stupid about it. If someone wants to argue that having Bell Curve discussion on r/TheMotte instead of here would cause us to lose out on something similarly important, I'm open to hearing it.

Replies from: Vaniver
comment by Vaniver · 2021-11-05T16:37:59.265Z · LW(p) · GW(p)

AI alignment was never political remotely like how the Bell Curve is political.

Not within mainstream politics, but within academic / corporate CS and AI departments.

Replies from: steven0461
comment by steven0461 · 2021-11-05T23:06:13.343Z · LW(p) · GW(p)

You'd have to use a broad sense of "political" to make this true (maybe amounting to "controversial"). Nobody is advocating blanket avoidance of controversial opinions, only blanket avoidance of narrow-sense politics, and even then with a strong exception of "if you can make a case that it's genuinely important to the fate of humanity in the way that AI alignment is important to the fate of humanity, go ahead". At no point could anyone have used the proposed norms to prevent discussion of AI alignment.

comment by Rafael Harth (sil-ver) · 2021-11-04T16:06:43.813Z · LW(p) · GW(p)

I think someone who disagrees with me might say that what's in fact happening is an increase in knowledge and an improvement in culture, reflected in language. In the same way that I expect to routinely update my picture of the world when I read the newspaper, why shouldn't I expect to routinely update my language to reflect evolving cultural understandings of how to treat other people well?

I know you haven't implied that the someone could be me, but I thought I'd just clarify that I would vehemently oppose such an argument. My argument contra slippery slope is that I don't see evidence for it. If we look ten years into the past, there hasn't been another book like TBC every week; in fact there hasn't been one ever. I would bet against there being another one in the next 10 years.

There may be some risk of a slippery slope on other issues, but honestly I want that to be a separate argument because I estimate this post to carry a lot more risk than the other < 4 posts/year that I mentioned. I don't know if this is true (and it's usually bad form to accuse others of lack of knowledge), but I genuinely wonder if others who've participated in this discussion just don't know how strongly many people feel about this book. (it is of course possible to acknowledge this and still (or especially) be against censorship.)

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2021-11-04T23:52:22.619Z · LW(p) · GW(p)

I'm fairly aware of Murray's public image, but wanted to go a little deeper before replying.

Here's a review from the Washington Post this year, of Murray's latest book. Note that, while critical of his book, it does not call him a racist. Perhaps its strongest critical language is the closing sentence:

He writes as if his conclusions are just a product of cold calculus and doesn’t pause long enough to consider that perhaps it’s the assumptions in his theorem that are antithetical to the soul of America.

It actually more portrays him as out of touch with the rise of the far right than in lockstep with it. The article does not call him a racist, predict his book will cause harm, or suggest that readers avoid it. This suggests to me that there is still room for Murray's output to be considered by a major, relatively liberal news media outlet.

The Standard-Examiner published a positive review of the same book. They are a newspaper with a circulation of about 30,000, based out of Ogden, UT.

Looking over the other couple dozen news articles from 2021 containing "Charles Murray" and "The Bell Curve", I see several that mention protests against him, or cite arguments over TBC as one of a handful of important examples of prominent debates about race and racism.

I also looked up protests against Murray. There have been a few major ones, most famously at Middlebury College, some minor ones, and some appearances that attracted no protests at all. My view is that for college protests, the trigger is "close to home," and the protest organizers depend on college advertising and social ties to motivate participation.

So we are in agreement that Murray is a prominent and controversial figure on this topic, and protests against him can provoke once-in-a-decade-level episodes of racial tension on a campus, or be viewed as arguments on par with debates over critical race theory. This isn't just some book about a controversial topic - it was a bestseller, is still referenced 25 years later as a major source of controversy, and has motivated hundreds or even thousands of students to protest the author when he's attempted to speak on their campuses. There are many scholarly articles about the book, most of them critical.

Despite the controversy, it's possible in 2021 for a liberal journalist to publish a critical but essentially professional review of Murray's new work, and for a conservative journalist to publish a positive review in their newspaper.

The way I see it, Murray is a touchstone figure, but is still only very rarely prominent in the daily news cycle. Just writing about him isn't enough to make the article newsworthy. If lsusr was a highly prominent blogger, then this review might make the news, or be alarming enough to social media activists to outcompete other tweets and shares. But he's not a big enough figure, and this isn't an intense enough article, to even come close to making such a big splash.

If this article poses an issue, it's by adding one piece of evidence to the prosecutor's exhibit that LW is a politically problematic space. Given that, as you say, this is one of the most unusually controversy-courting posts of the year, my assessment that it is "only one more piece of evidence," rather than a potential turning point in this site's public image, strikes me as a point of evidence against censorship. It's just not that big a deal.

If you would care to game out for me in a little more detail a long-term scenario in which AGI safety becomes tainted by association with posts such as this, to the serious detriment of humanity, please do!

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2021-11-05T05:22:33.288Z · LW(p) · GW(p)

Agree with all of this, but my concern is not that the coupling of [worrying about AGI] and [being anti-social-justice] happens tomorrow. (I did have some separate concerns about people being put off by the post today, but I've been convinced somewhere in the comments under this post that the opposite is about equally likely.) It's that this happens when AGI safety is a much bigger deal in the public discourse. (Not sure if you think this will never happen? I think there's a chance it never happens, but that seems wildly uncertain. I would put maybe 50% on it or something? Note that even if it happens very late, say 4 years before AGI poses an existential risk, I think that's still more than enough time for the damage to be done. EY famously argued that there is no fire alarm for AGI [LW · GW]; if you buy this, then we can't rely on "by this point the danger is so obvious that people will take safety seriously no matter what".)

If your next question is "why worry about this now", one reason is that I don't have faith that mods will react in time when the risk increases (I've updated upward on how likely I think this is after talking to Ruby, but not to 100%, and who knows who will be a mod in 20 years), and I have the opportunity to say something now. But even if I had full authority over how the policy changes in the future, I still wouldn't have allowed this post because people can dig out old material if they want to write a hit piece. This post has been archived, so from this point on there will forever be the opportunity to link LW to TBC for anyone who wants to do that. And if you applied the analog of security mindset to this problem (which I think is appropriate), this is not something you would allow to happen. There is precedent for people losing positions over things that happened decades in the past.

One somewhat concrete scenario that seems plausible (but wildly unlikely because it's concrete) is that Elon Musk manages to make the issue mainstream in 15 years; someone does a deep dive and links this to LW and LW to anti-social-justice (even though LW itself still doesn't have that many more readers); this gets picked up by a lot of people who think worrying about AGI is bad; the aforementioned coupling occurs.

The only other thing I'd say is that there is also a substantial element of randomness to what does and doesn't create a vast backlash. You can't look at one instance of "person with popularity level X said thing of controversy level Y, nothing bad happened" and conclude that any other instance with comparable X and Y will definitely not lead to anything bad happening.

comment by steven0461 · 2021-11-04T00:07:23.616Z · LW(p) · GW(p)

And so you need to make a pitch not just "this pays for itself now" but instead something like "this will pay for itself for the whole trajectory that we care about, or it will be obvious when we should change our policy and it no longer pays for itself."

I don't think it will be obvious, but I think we'll be able to make an imperfect estimate of when to change the policy that's still better than giving up on future evaluation of such tradeoffs and committing reputational murder-suicide immediately. (I for one like free speech and will be happy to advocate for it on LW when conditions change enough to make it seem anything other than pointlessly self-destructive.)

comment by Ruby · 2021-11-03T18:27:30.831Z · LW(p) · GW(p)

We've had a norm against discussing politics since before LessWrong 2.0, which doesn't seem to have had any noticeable negative effects on our ability to discuss other topics.

I'm not sure whether that's true, but separately, the norm against politics has definitely impacted our ability to discuss politics. Perhaps that's a necessary sacrifice, but it's a sacrifice. In this particular case, both the object level (why is our society the way it is) and the meta-level (what are the actual views in this piece that got severe backlash) are relevant to our modeling of the world and I think it'd be a loss to not have this piece.

I do think that if we order all posts by where they appear on this spectrum, I would put this farther to the right than any other post I remember, so we genuinely seem to differ in our judgment here.

I'm not sure where this post would fall in my ranking (along the dimension you're pointing at). It's possible I agree with you that it's at the extreme end–but there has to be a post at the extreme end. The posts that are imo (or other moderator's opinions) over the line are ones you wouldn't see.

I echo anon03 in that the title is extremely provocative, but minus the claim that this is only a descriptive statement.

I'd guess that it was intentionally provocative (to what degree it was intentional, I don't know), but I don't feel inclined to tell the author they can't do that in this case.

If I had written the post, I'd have named it differently and added caveats, etc. But I didn't and wouldn't have, out of timidity, which makes me hesitant to place requirements on the person who actually did.

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2021-11-03T18:47:23.334Z · LW(p) · GW(p)

In this particular case, both the object level (why is our society the way it is) and the meta-level (what are the actual views in this piece that got severe backlash) are relevant to our modeling of the world and I think it'd be a loss to not have this piece.

I agree that the politics ban is a big sacrifice (regardless of whether the benefits outweigh it or not), and also that this particular post has a lot of value. But if you look at the set of all books for which (1) a largely positive review could plausibly have been written by a super smart guy like lsusr, and (2) the backlash could plausibly be really bad, I think it literally contains a single element. It's only TBC. There are a bunch of non-book-review posts that I also wouldn't want, but they're very rare. It seems like we're talking about a much smaller set of topics than what's covered by the norm around politics.

I feel like if we wanted to find the optimal point in the value-risk space, there's no way it's "ban on all politics but no restriction on social justice". There have got to be political areas with less risk and more payoff, like just all non-US politics or something.

Replies from: steven0461
comment by steven0461 · 2021-11-03T23:56:20.444Z · LW(p) · GW(p)

I agree that the politics ban is a big sacrifice (regardless of whether the benefits outweigh it or not)

A global ban on political discussion by rationalists might be a big sacrifice, but it seems to me there are no major costs to asking people to take it elsewhere.

(I just edited "would be a big sacrifice" to "might be a big sacrifice", because the same forces that cause a ban to seem like a good idea will still distort discussions even in the absence of a ban, and perhaps make them worse than useless because they encourage the false belief that a rational discussion is being had.)

comment by Ben Pace (Benito) · 2021-11-03T18:11:49.063Z · LW(p) · GW(p)

Just a short note that the title seems like the correct one so that it's searchable by the name of the book slash author. Relatedly, all book reviews on LW are called "Book Review: <Book Name>"; this one didn't stand out as any different to me (except it adds the author's name, which seems pretty within reasonable bounds to me).

Replies from: Ruby
comment by Ruby · 2021-11-03T18:28:11.031Z · LW(p) · GW(p)

Fwiw, I bet adding the author's name was an intentional move because it'd be controversial.

Replies from: Benito
comment by Ben Pace (Benito) · 2021-11-03T18:36:59.096Z · LW(p) · GW(p)

Okay. Maybe not the ideal goal, not sure, but I think it's pretty within range of fine things to do. There's a fairly good case that people will search the author's name and want to understand their ideas because he's well-known, so it helps as a search term.

comment by DirectedEvolution (AllAmericanBreakfast) · 2021-11-02T21:37:31.977Z · LW(p) · GW(p)

I'll bite, but I can't promise to engage in a lot of back-and-forth.

  • The site is discussed somewhere, someone claims that it's a home for racism and points to this post as evidence. Someone else who would have otherwise become a valuable contributor reads it and decides not to check it out
  • A woke and EA-aligned person gets wind of it and henceforth thinks all x-risk related causes are unworthy of support

Let's generalize. A given post on LW's frontpage may heighten or diminish its visibility and appeal to potential newcomers, or the visibility/appeal of associated causes like X-risk. You've offered one reason why this post might heighten its visibility while diminishing its appeal.

Here's an alternative scenario, in which this post heightens rather than diminishes the appeal of LW. Perhaps a post about the Bell Curve will strike somebody as a sign that this website welcomes free and open discourse, even on controversial topics, as long as it's done thoughtfully. This might heighten, rather than diminish, LW's appeal, for a person such as this. Indeed, hosting posts on potentially controversial topics might select for people like this, and that might not only grow the website, but reinforce its culture in a useful way.

I am not claiming that this post heightens the appeal of LW on net - only that it's a plausible alternative hypothesis. I think that we should be very confident that a post will diminish the appeal of LW to newcomers before we advocate for communally-imposed censorship.

Not only do we have to worry that such censorship will impact the free flow of information and ideas, but that it will personally hurt the feelings of a contributor. Downvotes and calls for censorship pretty clearly risk diminishing the appeal of the website to the poster, who has already demonstrated that they care about this community. If successful, the censorship would only potentially bolster the website's appeal for some hypothetical newcomer. It makes more sense to me to prioritize the feelings of those already involved. I don't know how lsusr feels about your comment, but I know that when other people have downvoted or censored my posts and comments, I have felt demoralized.

  • Someone links the article from somewhere, it gets posted on a far-right reddit board, a bunch of people make accounts on LessWrong to make dumb comments, someone from the NYT sees it and writes a hit piece.

The reason I think this is unlikely is that the base rate of (blogs touching on politics making it into the NYT for far-right trolling)/(total blogs touching on politics) is low. Slate Star Codex had a large number of readers before the NYT wrote an article about it. I believe that LW must have a readership two orders of magnitude lower than SSC/ACX (in the thousands, or even just the hundreds, for LW, in the hundreds of thousands for SSC/ACX). LW is the collective work of a bunch of mainly-anonymous bloggers posting stuff that's largely inoffensive and ~never (recently) flagrantly attacking particular political factions. Indeed, we have some pretty strong norms against open politicization. Because its level of openly political posting and its readership are both low, I think LW is an unappealing target for a brigade or hit piece. Heck, even Glen Weyl thinks we're not worth his time!

Edit: See habryka's stats below for a counterpoint. I still think there's a meaningful difference between the concentrated attention given to posts on ACX vs. the diffuse attention (of roughly equal magnitude) distributed throughout the vastness of LW.

For this reason, it once again does not seem worth creating a communal norm of censorship and a risk of hurt feelings by active posters.

Note also that, while you have posited and acted upon (via downvoting and commenting) a hypothesis of yours that the risks of this post outweigh the benefits, you've burdened respondents with supplying more rigor than you brought to your original post ("I would much welcome some kind of a cost-benefit calculation that concludes that this is a good idea"). It seems to me that a healthier norm would be that, before you publicly proclaim that a post is worthy of censorship, you do the more rigorous cost/benefit calculation and offer it up for others to critique.

Or should I fight fire with fire, by strongly-upvoting lsusr's post to counteract your strong-downvote? In this scenario, upvotes and downvotes are being used not as a referendum on the quality of the post, but on whether or not it should be censored to protect LW. Is that how we wish this debate to be decided?

As a final question, consider that you seem to view this post in particular as exceptionally risky for LW. That means you are making an extraordinary claim: that this post, unlike almost every other LW post, is worthy of censorship. Extraordinary claims require extraordinary evidence. Have you met that standard?

Replies from: habryka4, sil-ver, anon03
comment by habryka (habryka4) · 2021-11-02T23:25:46.744Z · LW(p) · GW(p)

I believe that LW must have a readership two orders of magnitude lower than SSC/ACX (in the thousands, or even just the hundreds, for LW, in the hundreds of thousands for SSC/ACX)

LW's readership is about the same order of magnitude as SSC. Depending on the mood of the HN and SEO gods.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2021-11-02T23:43:26.540Z · LW(p) · GW(p)

Not that I don't believe you, but that's also really hard for me to wrap my head around. Can you put numbers on that claim? I'm not sure if ACX has a much smaller readership than I'd imagined, or if LW has a much bigger one, but either way I'd like to know!

Replies from: habryka4
comment by habryka (habryka4) · 2021-11-02T23:56:47.175Z · LW(p) · GW(p)

https://www.similarweb.com/website/astralcodexten.substack.com/?competitors=lesswrong.com Currently shows ACX at something like 1.7x of LessWrong. At some points in the past LessWrong was slightly ahead.

LessWrong is a pretty big website. Here is a random snapshot of top-viewed pages from the last month from Google Analytics: 

As you can see from the distribution, it's a long tail of many pages getting a few hundred pageviews each month, which adds up a lot.

Replies from: AllAmericanBreakfast, hath
comment by DirectedEvolution (AllAmericanBreakfast) · 2021-11-03T00:11:03.556Z · LW(p) · GW(p)

That is vastly more readership than I had thought. A naive look at these numbers suggests that a small city’s worth of people read Elizabeth’s latest post. But I assume that these numbers can’t be taken at face value.

It’s very hard for me to square the idea that these websites get roughly comparable readership with my observation that ACX routinely attracts hundreds of comments on every post. LW gets 1-2 orders of magnitude fewer comments than ACX.

So while I’m updating in favor of the site’s readership being quite a bit bigger than I’d thought, I still think there’s some disconnect here between what I’m thinking of by “readership” and the magnitude of “readership” that is coming across in these stats.

Replies from: Benito, habryka4, habryka4
comment by Ben Pace (Benito) · 2021-11-03T00:32:17.884Z · LW(p) · GW(p)

Note that LW gets 1-2 OOM fewer comments on the average post, but not in total. I reckon monthly comments is same OOM. And if you add up total word count on each site I suspect LW is 1 OOM bigger each month. ACX is more focused and the discussion is more focused, LW is a much broader space with lots of smaller convos.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2021-11-03T00:48:49.908Z · LW(p) · GW(p)

That makes a lot of sense. I do get the feeling that, although total volume on a particular topic is more limited here, that there's a sense of conversation and connection that I don't get on ACX, which I think is largely due to the notification system we have here for new comments and messages.

Replies from: Ruby
comment by Ruby · 2021-11-03T03:13:08.877Z · LW(p) · GW(p)

This is weekly comments for LessWrong over the last year. Last we counted, something like 300 on a SSC post? So if there are two SSC posts/week, LessWrong is coming out ahead.

Replies from: philh
comment by philh · 2021-11-08T21:27:37.491Z · LW(p) · GW(p)

I think ACX is ahead of LW here. In October, it got 7126 comments in 14 posts, which is over 1600/week. (Two of them were private with 201 between them, still over 1500/week if you exclude them. One was an open thread with an unusually high comment count, but still over 1200/week if you exclude that too.)

In September it was 10350 comments, over 2400/week. I can't be bothered to count August properly but there are 10 threads with over 500 comments and 20 with fewer, so probably higher than October at least.

Not too far apart though, like maybe 2x but not 10x.

(E: to clarify this is "comments on posts published in the relevant month" but that shouldn't particularly matter here)

comment by habryka (habryka4) · 2021-11-03T03:27:39.374Z · LW(p) · GW(p)

I don't think LW gets at all fewer comments than ACX. I think indeed LW has more comments than ACX, it's just that LW comments are spread out over 60+ posts in a given week, whereas ACX has like 2-3 posts a week. LessWrong gets about 150-300 comments a day, which is roughly the same as what ACX gets per day. 

comment by habryka (habryka4) · 2021-11-03T03:39:16.104Z · LW(p) · GW(p)

That is vastly more readership than I had thought. A naive look at these numbers suggests that a small city’s worth of people read Elizabeth’s latest post. But I assume that these numbers can’t be taken at face value.


I think this number can be relatively straightforwardly taken at face value. Elizabeth's post was at the top of HN for a few hours, so a lot of people saw it. A small city's worth seems about right for the number of people who clicked through and at least skimmed it.

comment by hath · 2021-11-03T00:44:24.912Z · LW(p) · GW(p)

I'm surprised to see how many people view the Roko's Basilisk tag. Is that a trend over more than just the last month?

Replies from: Benito
comment by Rafael Harth (sil-ver) · 2021-11-02T21:58:07.338Z · LW(p) · GW(p)

Extraordinary claims require extraordinary evidence. Have you met that standard?

I think the evidence that wokeism is a powerful force in the world we live in is abundant, and my primary reaction to your comment is that it feels like everything you said could have been written in a world where this isn't so. There is an inherent asymmetry here in how many people care about which things to what degree in the real world. (As I've mentioned in the last discussion, I know a person who falls squarely into the second category I've mentioned; committed EA, very technically smart, but thinks all LW-adjacent things are poisonous, in her case because of sexism rather than racism, but it's in the same cluster.)

Sam Harris invited the author of the Bell Curve onto his podcast 4 years ago, and as a result has had a stream of hateful rhetoric directed his way that lasts to this day. Where is the analogous observable effect in the opposite direction? If it doesn't exist, why is postulating the opposite effect plausible in this case?

My rough cost-benefit analysis is -5/-20/-20 for the points I've mentioned, +1 for the advantage of being able to discuss this here, and maybe +2 for the effect of attracting people who like it for the opposite symbolism (i.e., here's someone not afraid to discuss hard things), and I feel like I don't want to assign a number to how it impacts lsusr's feelings. The reason I didn't spell this out was that I thought it would come across as unnecessarily uncharitable, and it doesn't convey much new information because I already communicated that I don't see the upside.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2021-11-02T22:38:23.590Z · LW(p) · GW(p)

Sam Harris has enormous reach, comparable to Scott’s. Also, podcasts have a different cultural significance than book reviews. Podcasts tend to come with an implicit sense of friendliness and inclusion extended toward the guest. Not so in a book review, which can be bluntly critical. So for the reasons I outlined above, I don’t think Harris’s experiences are a good reference class for what we should anticipate.

“Wokeism” is powerful, and I agree that this post elevated this site’s risk of being attacked or condemned either by the right or the left. I also agree that some people have been turned off by the views on racism or sexism they’ve been exposed to by some posters on this site.

I also think that negativity tends to be more salient than approval. If lsusr’s post costs us one long-term reader and gains us two, I expect the one user who exits over it to complain and point to this post, making the reason for their dissatisfaction clear. By contrast, I don’t anticipate the newcomers to make a fanfare, or to even see lsusr’s post as a key reason they stick around. Instead, they’ll find themselves enjoying a site culture and abundance of posts that they find generally appealing. So I don’t think a “comparable observable effect in the opposite direction” is what you’d look for to see whether lsusr’s post enhances or diminishes the site’s appeal on net.

In fact, I am skeptical about our ability to usefully predict the effect of individual posts on driving readership to or away from this site. Which is why I don’t advocate censoring individual posts on this basis.

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2021-11-02T22:50:06.237Z · LW(p) · GW(p)

Sam Harris has enormous reach, comparable to Scott’s. Also, podcasts have a different cultural significance than book reviews. Podcasts tend to come with an implicit sense of friendliness and inclusion extended toward the guest. Not so in a book review, which can be bluntly critical. So for the reasons I outlined above, I don’t think Harris’s experiences are a good reference class for what we should anticipate.

I agree that the risk of anything terrible happening right now is very low for this reason. (Though I'd still estimate it to be higher than the upside.) But is "let's rely on us being too small to get noticed by the mob" really a status quo you're comfortable with?

I also think that negativity tends to be more salient than approval. If lsusr’s post costs us one long-term reader and gains us two, I expect the one user who exits over it to complain and point to this post, making the reason for their dissatisfaction clear. By contrast, I don’t anticipate the newcomers to make a fanfare, or to even see lsusr’s post as a key reason they stick around. Instead, they’ll find themselves enjoying a site culture and abundance of posts that they find generally appealing. So I don’t think a “comparable observable effect in the opposite direction” is what you’d look for to see whether lsusr’s post enhances or diminishes the site’s appeal on net.

This comment actually made me update somewhat because it's harder than I thought to find an asymmetry here. But it's still only a part of the story (and the part I've put the least amount of weight on.)

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2021-11-02T23:39:36.478Z · LW(p) · GW(p)

But is "let's rely on us being too small to get noticed by the mob" really a status quo you're comfortable with?

Let me rephrase that slightly, since I would object to several features of this sentence that I think are beside your main point. I do think that taking the size and context of our community into account when assessing how outsiders will see and respond to our discourse is among the absolute top considerations for judging risk accurately.

On a simple level, my framework is that we care about three factors: object-level risks and consequences, and enforcement-level risks and consequences. These are analogous to the risks and consequences from crime (object-level), and the risks and consequences from creating a police force or military (enforcement-level).

What I am arguing in this case is that the negative risks x consequences of the sort of enforcement-level behaviors you are advocating for and enacting seem to outweigh the negative risks x consequences of being brigaded or criticized in the news. Also, I'm uncertain enough about the balance of this post's effect on inflow vs. outflow of readership to be close to 50/50, and expect it to be small enough either way to ignore it.

Note also that Sam Harris and Scott Alexander still have an enormous readership after their encounters with the threats you're describing. While I can imagine a scenario in which unwanted attention becomes deeply unpleasant, I also expect it to be a temporary situation. By contrast, instantiating a site culture that is self-censoring due to fear of such scenarios seems likely to be much more of a daily encumbrance - and one that still doesn't rule out the possibility that we get attacked anyway.

I'd also note that you may be contributing to the elevation of risk with your choices of language. By using terms like "wokeism," "mob," and painting scrutiny as a dire threat in a public comment, it seems to me that you add potential fuel for any fire that may come raging through. My standard is that, if this is your earnest opinion, then LW ought to be a good platform for you to discuss that, even if it elevates our risk of being cast in a negative light.

Your standard, if I'm reading you right, is that your comment should be considered for potential censorship itself, due to the possibility that it does harm to the site's reputation. Although it is perhaps not as potentially inflammatory as a review of TBC, it's also less substantial, and potentially interacts in a synergistic way to elevate the risk. Do you think this is a risk you ought to have taken seriously before commenting? If not, why not?

My perspective is that you were right to post what you posted, because it reflected an honest concern of yours, and permits us to have a conversation about it. I don't think you should have had to justify the existence of your comment with some sort of cost/benefit analysis. There are times when I think that such a justification is warranted, but this context is very far from that threshold. An example of a post that I think crosses that threshold would be a description of a way to inflict damage that had at least two of the following attributes: novel, convenient, or detailed. Your post is none of these, and neither is lsusr's, so both of them pass my test for "it's fine to talk about it."

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2021-11-03T00:30:56.991Z · LW(p) · GW(p)

After reading this, I realize that I've done an extremely poor job communicating in everything I've commented on this post, so let me just try to start over.

I think what I'm really afraid of is a sequence of events that goes something like this:

  • Every couple of months, someone on LW makes a post like the above
  • In some (most?) cases, someone is going to speak up against this (in this case, we had two), there will be some discussion, but the majority will come down on the side that censorship is bad and there's no need to take drastic action
  • The result is that we never establish any kind of norm nor otherwise prepare for political backlash
  • In ten or twenty or forty years from now, in a way that's impossible to predict because any specific scenario is extremely unlikely, the position of worrying about AGI will get coupled to being anti-social-justice in the public discourse; as a result it will massively lose status, the big labs will react by taking safety far less seriously, and maybe we will have fewer people writing papers on alignment
  • At that point it will be obvious to everyone that not having done anything to prevent this was a catastrophic error

After the discussion on the dating post, I made some attempts to post a follow-up but chickened out of doing it because I was afraid of the reaction or maybe just because I couldn't figure out how to approach the topic. When I saw this post, I think I originally decided not to do anything, but then anon03 said something and then somehow I thought I had to say something as well, but it wasn't well thought out because I already felt a fair amount of anxiety after having failed to write about it before. When my comment got a bunch of downvotes, the feeling of anxiety got really intense and I felt like the above-mentioned scenario was definitely going to happen and I wouldn't be able to do anything about it because arguing for censorship is just a lost cause, and I think I then intentionally (but subconsciously) used the language you've just pointed out to signal that I don't agree with the object-level part of anything I'm arguing for (probably in the hopes of changing the reception?) even though I don't think that made a lot of sense; I do think I trust people on this site to keep the two things separate. I completely agree that this risks making the problem worse. I think it was a mistake to say it.

I don't think any of this is an argument for why I'm right, but I think that's about what really happened.

Probably it's significantly less than 50% that anything like what I described happens just because of the conjunction -- who knows if anyone will even still care about social justice in 20 years. But it doesn't seem nearly unlikely enough not to take seriously, and I don't see anyone taking it seriously and it really terrifies me. I don't completely understand why, since I tend to not be very affected when thinking about x-risks. Maybe because of the feeling that it should be possible to prevent it.

I don't think the fact that Sam still has an audience is a reason not to panic. Joe Rogan has a quadrillion times the audience of the NYT or CNN, but the social justice movement still has disproportionate power over institutions and academia, and probably that includes AI labs?

Replies from: AllAmericanBreakfast, Vaniver, Benito
comment by DirectedEvolution (AllAmericanBreakfast) · 2021-11-03T01:15:16.452Z · LW(p) · GW(p)

I will say that although I disagree with your opinion re: censoring this post and general risk assessment related to this issue, I don't think you've expressed yourself particularly poorly. I also acknowledge that it's hard to manage feelings of anxiety that come up in conversations with an element of conflict, in a community you care about, in regards to an issue that is important to the world. So go easier on yourself, if that helps! I too get anxious when I get downvoted, or when somebody disagrees with me, even though I'm on LW to learn, and being disagreed with and turning out to be wrong is part of that learning process.

It sounds like a broader perspective of yours is that there's a strategy for growing the AGI safety community that involves keeping it on the good side of whatever political faction is in power. You think that we should do pretty much whatever it takes to make AGI safety research a success, and that this strategy of avoiding any potentially negative associations is important enough for achieving that outcome that we should take deliberate steps to safeguard its perception in this way. As a far-downstream consequence, we should censor posts like this, out of a general policy of expunging anything potentially controversial being associated with x-risk/AGI safety research and their attendant communities.

I think we roughly agree on the importance of x-risk and AGI safety research. If there was a cheap action I could take that I thought would reliably mitigate x-risk by 0.001%, I would take it. Downvoting a worrisome post is definitely a cheap action, so if I thought it would reliably mitigate x-risk by 0.001%, I would probably take it.

The reason I don't take it is because I don't share your perception that we can effectively mitigate x-risk in this way. It is not clear to me that the overall effect of posts like lsusr's is net negative for these causes, nor that such a norm of censorship would be net beneficial.

What I do think is important is an atmosphere in which people feel freedom to follow their intellectual interests, comfort in participating in dialog and community, and a sense that their arguments are being judged on their intrinsic merit and truth-value.

The norm that our arguments should be judged based on their instrumental impact on the world seems to me to be generally harmful to epistemics. And having an environment that tries to center epistemic integrity above other concerns seems like a relatively rare and valuable thing, one that basically benefits AGI safety.

That said, people actually doing AGI research have other forums for their conversation, such as the alignment forum and various nonprofits. It's unclear that LW is a key part of the pipeline for new AGI researchers, or a forum for AGI research to be debated and discussed. If LW is just a magnet for a certain species of blogger who happens to be interested in AGI safety, among other things, and if those bloggers risk attracting a lot of scary attention while contributing minimally to the spread of AGI safety awareness or to the research itself, then that seems like a concerning scenario.

It's also hard for me to judge. I can say that LW has played a key role for me connecting with and learning from the rationalist community. I understand AGI safety issues better for it, and am the only point of reference that several of my loved ones have for hearing about these issues.

So, N of 1, but LW has probably improved the trajectory of AGI safety by a minuscule but nonzero amount via its influence on me. And I wouldn't have stuck around on LW if there was a lot of censorship of controversial topics. Indeed, it was the opportunity to wrestle with my attachments and frustrations with left-wing ideology via the ideas I encountered here that made this such an initially compelling online space. Take away the level of engagement with contemporary politics that we permit ourselves here, add in a greater level of censorship and anxiety about the consequences of our speech, and I might not have stuck around.

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2021-11-03T11:33:34.654Z · LW(p) · GW(p)

Thanks for this comment.

It sounds like a broader perspective of yours is that there's a strategy for growing the AGI safety community that involves keeping it on the good side of whatever political faction is in power. You think that we should do pretty much whatever it takes to make AGI safety research a success, and that this strategy of avoiding any potentially negative associations is important enough for achieving that outcome that we should take deliberate steps to safeguard its perception in this way. As a far-downstream consequence, we should censor posts like this, out of a general policy of expunging anything potentially controversial being associated with x-risk/AGI safety research and their attendant communities.

I happily endorse this very articulate description of my perspective, with the one caveat that I would draw the line to the right of 'anything potentially controversial' (with the left-right axis measuring potential for backlash). I think this post falls to the right of just about any line; I think it has the highest potential for backlash out of any post I remember seeing on LW ever. (I just said the same in a reply to Ruby, and I wasn't being hypothetical.)

That said, people actually doing AGI research have other forums for their conversation, such as the alignment forum and various nonprofits. It's unclear that LW is a key part of the pipeline for new AGI researchers, or a forum for AGI research to be debated and discussed.

I'm probably an unusual case, but I got invited into the alignment forum by posting the Factored Cognition sequence on LW, so insofar as I count, LW has been essential. If it weren't for the way that the two forums are connected, I wouldn't have written the sequence. The caveat is that I'm currently not pursuing a "direct" path on alignment but am instead trying to go the academia route by doing work in the intersection of [widely recognized] and [safety-relevant] (i.e. on interpretability), so you could argue that the pipeline ultimately didn't work. But I think (not 100% sure) at least Alex Turner [LW · GW] is a straightforward success story for said pipeline.

And I wouldn't have stuck around on LW if there was a lot of censorship of controversial topics.

I think you probably want to respond to this on my reply to Ruby [LW(p) · GW(p)] so that we don't have two discussions about the same topic. My main objection is that the amount of censorship I'm advocating for seems to me to be tiny: I think less than 5 posts per year, far less than what is censored by the norm against politics.

Edit: I also want to object to this:

The norm that our arguments should be judged based on their instrumental impact on the world seems to me to be generally harmful to epistemics. And having an environment that tries to center epistemic integrity above other concerns seems like a relatively rare and valuable thing, one that basically benefits AGI safety.

I don't think anything of what I'm saying involves judging arguments based on their impact on the world. I'm saying you shouldn't be allowed to talk about TBC on LW in the first place. This seems like a super important distinction because it doesn't involve lying or doing any mental gymnastics. I see it as closely analogous to the norm against politics, which I don't think has hurt our discourse.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2021-11-03T19:07:13.395Z · LW(p) · GW(p)

I don't think anything of what I'm saying involves judging arguments based on their impact on the world.

What I mean here is that you, like most advocates of a marginal increase in censorship, justify this stance on the basis that the censored material will cause some people, perhaps its readers or its critics, to take an action with an undesirable consequence. Examples from the past have included suicidal behavior, sexual promiscuity, political revolution, or hate crimes.

To this list, you have appended "elevating X-risk." This is what I mean by "impact on the world."

Usually, advocates of marginal increases in censorship are afraid of the content of the published documents. In this case, you're afraid not of what the document says on the object level, but of how the publication of that document will be perceived symbolically.

An advocate of censorship might point out that we can potentially achieve significant gains on goals with widespread support (in our society, stopping hate crimes might be an example), with only modest censorship. For example, we might not ban sales of a certain book. We just make it library policy not to purchase them. Or we restrict purchase to a certain age group. Or major publishers make a decision not to publish books advocating certain ideas, so that only minor publishing houses are able to market this material. Or we might permit individual social media platforms to ban certain articles or participants, but as long as internet service providers aren't enacting bans, we're OK with it.

On LW, one such form of soft censorship is the mod's decision to keep a post off the frontpage.

To this list of soft censorship options, you are appending "posting it as a linkpost, rather than on the main site," and assuring us that only 5 posts per year need to be subject even to this amount of censorship.

It is OK to be an advocate of a marginal increase in censorship. Understand, though, that to one such as myself, I believe that it is precisely these small marginal increases in censorship that pose a risk to X-risk, and the marginal posting of content like this book review either decreases X-risk (by reaffirming the epistemic freedom of this community) or does not affect it. If the community were larger, with less anonymity, and had a larger amount of potentially inflammatory political material, I would feel differently about this.

Your desire to marginally increase censorship feels to me a bit like a Pascal's Mugging. You worry about a small risk of dire consequences that may never emerge, in order to justify a small but clear negative cost in the present moment. I don't think you're out of line to hold this belief. I just think that I'd need to see some more substantial empirical evidence that I should subscribe to this fear before I accept that we should pay this cost.

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2021-11-03T19:28:40.926Z · LW(p) · GW(p)

To this list of soft censorship options, you are appending "posting it as a linkpost, rather than on the main site," and assuring us that only 5 posts per year need to be subject even to this amount of censorship.

The link thing was anon03's idea; I want posts about TBC to be banned outright.

Other than that, I think you've understood my model. (And I think I understand yours except that I don't understand the gears of the mechanism by which you think x-risk increases.)

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2021-11-03T21:10:47.860Z · LW(p) · GW(p)

Sorry for conflating anon03's idea with yours!

A quick sketch at a gears-level model:

  1. X-risk, and AGI safety in particular, require unusual strength in gears-level reasoning to comprehend and work on; a willingness to stand up to criticism not only on technical questions but on moral/value questions; an intense, skeptical, questioning attitude; and a high value placed on altruism. Let's call these people "rationalists."
  2. Even in scientific and engineering communities, and the population of rational people generally, the combination of these traits I'm referring to as "rationalism" is rare.
  3. Rationalism causes people to have unusually high and predictable needs for a certain style and subject of debate and discourse, in a way that sets them apart from the general population.
  4. Rationalists won't be able to get their needs met in mainstream scientific or engineering communities, which prioritize a subset of the total rationalist package of traits.
  5. Hence, they'll seek an alternative community in which to get those needs met.
  6. Rationalists who haven't yet discovered a rationalist community won't often have an advance knowledge of AGI safety. Instead, they'll have thoughts and frustrations provoked by the non-rationalist society in which they grew up. It is these prosaic frustrations - often with politics - that will motivate them to seek out a different community, and to stay engaged with it.
  7. When these people discover a community that engages with the controversial political topics they've seen shunned and censored in the rest of society, and does so in a way that appears epistemically healthy to them, they'll take it as evidence that they should stick around. It will also be a place where even AGI safety researchers and their friends can deal with their ongoing issues and interests beyond AGI safety.
  8. By associating with this community, they'll pick up on ideas common in the community, like a concern for AGI safety. Some of them will turn it into a career, diminishing the amount of x-risk faced by the world.

I think that marginally increasing censorship on this site risks interfering with step 7. This site will not be recognized by proto-rationalists as a place where they can deal with the frustrations that they're wrestling with when they first discover it. They won't see an open attitude of free enquiry modeled, but instead see the same dynamics of fear-based censorship that they encounter almost everywhere else. Likewise, established AGI safety people and their friends will lose a space for free enquiry, a space for intellectual play and exploration that can be highly motivating. Loss of that motivation and appeal may interrupt the pipeline or staying power for people to work on X-risks of all kinds, including AGI safety.

Politics continues to affect people even after they've come to understand why it's so frustrating, and having a minimal space to deal with it on this website seems useful to me. When you have very little of something, losing another piece of it feels like a pretty big deal.

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2021-11-03T22:19:26.304Z · LW(p) · GW(p)

When these people discover a community that engages with the controversial political topics they've seen shunned and censored in the rest of society, and doing it in a way that appears epistemically healthy to them, they'll take it as evidence that they should stick around.

What has gone into forming this model? I only have one datapoint on this (which is myself). I stuck around because of the quality of discussion (people are making sense here!); I don't think the content mattered. But I don't have strong resistance to believing that this is how it works for other people.

I think if your model is applied to the politics ban, it would say that it's also quite bad (maybe not as bad because most politics stuff isn't as shunned and censored as social justice stuff)? If that's true, how would you feel about restructuring rather than widening the censorship? Start allowing some political discussions (I also keep thinking about Wei Dai's "it'll go there eventually so we should practice" argument) but censor the most controversial social justice stuff. I feel like the current solution isn't pareto optimal in the {epistemic health} x {safety against backlash} space.

Replies from: None, AllAmericanBreakfast
comment by [deleted] · 2021-11-04T00:05:58.344Z · LW(p) · GW(p)

Anecdotal, but about a year ago I committed to the rationalist community for exactly the reasons described. I feel more accepted in rationalist spaces than trans spaces, even though rationalists semi-frequently argue against the standard woke line and trans spaces try to be explicitly welcoming.

comment by DirectedEvolution (AllAmericanBreakfast) · 2021-11-03T23:08:21.636Z · LW(p) · GW(p)

Just extrapolating from my own experience. For me, the content was important.

I think where my model really meets challenges is that clearly, the political content on LW has alienated some people. These people were clearly attracted here in the first place. My model says that LW is a magnet for likely AGI-safety researchers, and says nothing about it being a filter for likely AGI-safety researchers. Hence, if our political content is costing us more involvement than it's retaining, or if the frustration experienced by those who've been troubled by the political content outweigh the frustration that would be experienced by those whose content would be censored, then that poses a real problem for my cost/benefit analysis.

A factor asymmetrically against increased censorship here is that censorship is, to me, intrinsically bad. It's a little like war. Sometimes, you have to fight a war, but you should insist on really good evidence before you commit to it, because wars are terrible. Likewise, censorship sucks, and you should insist on really good evidence before you accept an increase in censorship.

It's this factor, I think, that tilts me onto the side of preferring the present level of political censorship rather than an increase. I acknowledge and respect the people who feel they can't participate here because they experience the environment as toxic. I think that is really unfortunate. I also think that censorship sucks, and for me, it roughly balances out with the suckiness of alienating potential participants via a lack of censorship.

This, I think, is the area where my mind is most susceptible to change. If somebody could make a strong case that LW currently has a lot of excessively toxic, alienating content, that this is the main bottleneck for wider participation, and that the number of people who'd leave if that controversial content were removed were outweighed by the number of people who'd join, then I'd be open-minded about that marginal increase in censorship.

An example of a way this evidence could be gathered would be some form of community outreach to ex-LWers and marginal LWers. We'd ask those people to give specific examples of the content they find offensive, and try both to understand why it bothers them, and why they don't feel it's something they can or want to tolerate. Then we'd try to form a consensus with them about limitations on political or potentially offensive speech that they would find comfortable, or at least tolerable. We'd also try to understand their level of interest in participating in a version of LW with more of these limitations in place.

Here, I am hypothesizing that there's a group of ex-LWers or marginal LW-ers who feel a strong affinity for most of the content, while an even stronger aversion for a minority subset of the content to such a degree that they sharply curtail their participation. Such that if the offensive tiny fraction of the content were removed, they'd undergo a dramatic and lasting increase in engagement with LW. I find it unlikely that a sizeable group like this exists, but am very open to having my mind changed via some sort of survey data.

It seems more likely to me that ex/marginal-LWers are people with only a marginal interest in the site as a whole, who point to the minority of posts they find offensive as only the most salient example of what they dislike. Even if it were removed, they wouldn't participate.

At the same time, we'd engage in community dialog with current active participants about their concerns with such a change. How strong are their feelings about such limitations? How many would likely stop reading/posting/commenting if these limitations were imposed? For the material they feel most strongly about it, why do they feel that way?

I am positing that there are a significant subset of LWers for whom the minority of posts engaging with politics are very important sources of its appeal.

How is it possible that I could simultaneously be guessing - and it is just a guess - that controversial political topics are a make-or-break screening-in feature, but not a make-or-break screening-out feature?

The reason is that there are abundant spaces online and in-person for conversation that does have the political limitations you are seeking to impose here. There are lots of spaces for conversation with a group of likeminded ideologues across the entire political spectrum, where conformity is a prerequisite of polite conversation. Hence, imposing the same sort of guardrails or ideological conformities on this website would make it similar to many other platforms. People who desire these guardrails/conformities can get what they want elsewhere. For them, LW would be a nice-to-have.

For those who desire polite and thoughtful conversation on a variety of intellectual topics, even touching on politics, LW is verging on a need-to-have. It's rare. This is why I am guessing that a marginal increase in censorship would cost us more appeal than it would gain us.

I agree with you that the risk of being the subject of massive unwanted attention as a consequence is nonzero. I simply am guessing that it's small enough not to be worth the ongoing short-term costs of a marginal increase in censorship.

But I do think that making the effort to thoroughly examine and gather evidence for the extent to which our political status quo serves to attract or repel people would be well worth a thorough examination. Asking at what point the inherent cost of a marginal increase in censorship becomes worth paying in exchange for a more inclusive environment seems like a reasonable question to ask. But I think this process would need a lot of community buy-in and serious effort on the part of a whole team to do it right.

The people who are already here would need persuading, and indeed, I think they deserve the effort to be persuaded to give up some of their freedom to post what they want here in exchange for, the hope would be, a larger and more vibrant community. And this effort should come with a full readiness to discover that, in fact, such restrictions would diminish the size and vibrancy and intellectual capacity of this community. If it wasn't approached in that spirit, I think it would just fail.

comment by Vaniver · 2021-11-03T17:43:17.833Z · LW(p) · GW(p)

In ten or twenty or forty years from now, in a way that's impossible to predict because any specific scenario is extremely unlikely, the position of worrying about AGI will get coupled to being anti-social-justice in the public discourse; as a result it will massively lose status, the big labs will react by taking safety far less seriously, and maybe we will have fewer people writing papers on alignment

So, I both think that in the past 1) people have thought the x-risk folks are weird and low-status and didn't want to be affiliated with them, and in the present 2) people like Phil Torres are going around claiming that EAs and longtermists are white supremacists, because of central aspects of longtermism (like thinking the present matters in large part because of its ability to impact the future). Things like "willingness to read The Bell Curve" no doubt contribute to their case, but I think focusing on that misses the degree to which the core is actually in competition with other ideologies or worldviews.

I think there's a lot of value in trying to nudge your presentation to not trigger other people's allergies or defenses, and trying to incorporate criticisms and alternative perspectives. I think we can't sacrifice the core to do those things. If we disagree with people about whether the long-term matters, then we disagree with them; if they want to call us names accordingly, so much the worse for them.

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2021-11-03T18:02:05.316Z · LW(p) · GW(p)

If we disagree with people about whether the long-term matters, then we disagree with them; if they want to call us names accordingly, so much the worse for them.

I mean, this works until someone in a position of influence bows to the pressure, and I don't see why this can't happen.

I think we can't sacrifice the core to do those things.

The main disagreement seems to come down to how much we would give up when disallowing posts like this. My gears model still says 'almost nothing' since all it would take is to extend the norm "let's not talk about politics" to "let's not talk about politics and extremely sensitive social-justice adjacent issues", and I feel like that would extend the set of interesting taboo topics by something like 10%.

(I've said the same here [LW(p) · GW(p)]; if you have a response to this, it might make sense to all keep it in one place.)

comment by Ben Pace (Benito) · 2021-11-03T18:15:35.815Z · LW(p) · GW(p)

Sorry about your anxiety around this discussion :(

comment by anon03 · 2021-11-03T02:01:44.266Z · LW(p) · GW(p)

I like the norm of "If you're saying something that lots of people will probably (mis)interpret as being hurtful and insulting, see if you can come up with a better way to say the same thing, such that you're not doing that." This is not a norm of censorship nor self-censorship, it's a norm of clear communication [LW · GW] and of kindness. I can easily imagine a book review of TBC that passes that test. But I think this particular post does not pass that test, not even close.

If a TBC post passed that test, well, I would still prefer that it be put off-site with a linkpost and so on [LW(p) · GW(p)], but I wouldn't feel as strongly about it.

I think "censorship" is entirely the wrong framing. I think we can have our cake and eat it too, with just a little bit of effort and thoughtfulness.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-11-03T04:40:04.801Z · LW(p) · GW(p)

I like the norm of “If you’re saying something that lots of people will probably (mis)interpret as being hurtful and insulting, see if you can come up with a better way to say the same thing, such that you’re not doing that.” This is not a norm of censorship nor self-censorship, it’s a norm of clear communication and of kindness.

I think that this is completely wrong. Such a norm is definitely a norm of (self-)censorship—as has been discussed on Less Wrong [LW(p) · GW(p)] already.

It is plainly obvious to any even remotely reasonable person that the OP is not intended as any insult to anyone, but simply as a book review / summary, just like it says. Catering, in any way whatsoever, to anyone who finds the current post “hurtful and insulting”, is an absolutely terrible idea. Doing such a thing cannot do anything but corrode Less Wrong’s epistemic standards.

Replies from: anon03, Lukas_Gloor
comment by anon03 · 2021-11-03T12:57:01.796Z · LW(p) · GW(p)

Suppose that Person A finds Statement X demeaning, and you believe that X is not in fact demeaning to A, but rather A was misunderstanding X, or trusting bad secondary sources on X, or whatever.

What do you do?

APPROACH 1: You say X all the time, loudly, while you and your friends high-five each other and congratulate yourselves for sticking it to the woke snowflakes.

APPROACH 2: You try sincerely to help A understand that X is not in fact demeaning to A. That involves understanding where A is coming from, meeting A where A is currently at, defusing tension, gently explaining why you believe A is mistaken, etc. And doing all that before you loudly proclaim X.

I strongly endorse Approach 2 over 1. I think Approach 2 is more in keeping with what makes this community awesome, and Approach 2 is the right way to bring exactly the right kind of people into our community, and Approach 2 is the better way to actually "win", i.e. get lots of people to understand that X is not demeaning, and Approach 2 is obviously what community leaders like Scott Alexander would do (as for Eliezer, um, I dunno, my model of him would strongly endorse approach 2 in principle, but also sometimes he likes to troll…), and Approach 2 has nothing to do with self-censorship.

~~

Getting back to the object level and OP. I think a lot of our disagreement is here in the details. Let me explain why I don't think it is "plainly obvious to any even remotely reasonable person that the OP is not intended as any insult to anyone".

Imagine that Person A believes that Charles Murray is a notorious racist, and TBC is a book that famously and successfully advocated for institutional racism via lies and deceptions. You don't have to actually believe this—I don't—I am merely asking you to imagine that Person A believes that.

Now look at the OP through A's eyes. Right from the title, it's clear that OP is treating TBC as a perfectly reasonable respectable book by a perfectly reasonable respectable person. Now A starts scanning the article, looking for any serious complaint about this book, this book which by the way personally caused me to suffer by successfully advocating for racism, and giving up after scrolling for a while and coming up empty. I think a reasonable conclusion from A's perspective is that OP doesn't think that the book's racism advocacy is a big deal, or maybe OP even thinks it's a good thing. I think it would be understandable for Person A to be insulted and leave the page without reading every word of the article.

Once again, we can lament (justifiably) that Person A is arriving here with very wrong preconceptions, probably based on trusting bad sources. But that's the kind of mistake we should be sympathetic to. It doesn't mean Person A is an unreasonable person. Indeed, Person A could be a very reasonable person, exactly the kind of person who we want in our community. But they've been trusting bad sources. Who among us hasn't trusted bad sources at some point in our lives? I sure have!

And if Person A represents a vanishingly rare segment of society with weird idiosyncratic wrong preconceptions, maybe we can just shrug and say "Oh well, can't please everyone." But if Person A's wrong preconceptions are shared by a large chunk of society, we should go for Approach 2.

Replies from: SaidAchmiz, SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-11-03T14:28:42.311Z · LW(p) · GW(p)

Imagine that Person A believes that Charles Murray is a notorious racist, and TBC is a book that famously and successfully advocated for institutional racism via lies and deceptions. You don’t have to actually believe this—I don’t—I am merely asking you to imagine that Person A believes that.

If Person A believes this without ever having either (a) read The Bell Curve or (b) read a neutral, careful review/summary of The Bell Curve, then A is not a reasonable person.

All sorts of unreasonable people have all sorts of unreasonable and false beliefs. Should we cater to them all?

No. Of course we should not.

Now look at the OP through A’s eyes. Right from the title, it’s clear that OP is treating TBC as a perfectly reasonable respectable book by a perfectly reasonable respectable person.

The title, as I said before, is neutrally descriptive. Anyone who takes it as an endorsement is, once again… unreasonable.

Now A starts scanning the article, looking for any serious complaint about this book, this book which by the way personally caused me to suffer by successfully advocating for racism

Sorry, what? A book which you (the hypothetical Person A) have never read (and in fact have only the vaguest notion of the contents of) has personally caused you to suffer? And by successfully (!!) “advocating for racism”, at that? This is… well, “quite a leap” seems like an understatement; perhaps the appropriate metaphor would have to involve some sort of Olympic pole-vaulting event. This entire (supposed) perspective is absurd from any sane person’s perspective.

I think a reasonable conclusion from A’s perspective is that OP doesn’t think that the book’s racism advocacy is a big deal, or maybe OP even thinks it’s a good thing. I think it would be understandable for Person A to be insulted and leave the page without reading every word of the article.

No, this would actually be wildly unreasonable behavior, unworthy of any remotely rational, sane adult. Children, perhaps, may be excused for behaving in this way—and only if they’re very young.

The bottom line is: the idea that “reasonable people” think and behave in the way that you’re describing is the antithesis of what is required to maintain a sane society. If we cater to this sort of thing, here on Less Wrong, then we completely betray our raison d’etre, and surrender any pretense to “raising the sanity waterline”, “searching for truth”, etc.

Replies from: anon03
comment by anon03 · 2021-11-03T15:09:03.030Z · LW(p) · GW(p)

Sorry, what? A book which you (the hypothetical Person A) have never read (and in fact have only the vaguest notion of the contents of) has personally caused you to suffer? And by successfully (!!) “advocating for racism”, at that? This is… well, “quite a leap” seems like an understatement; perhaps the appropriate metaphor would have to involve some sort of Olympic pole-vaulting event. This entire (supposed) perspective is absurd from any sane person’s perspective.

I have a sincere belief that The Protocols Of The Elders Of Zion directly contributed to the torture and death of some of my ancestors. I hold this belief despite having never read this book, and having only the vaguest notion of the contents of this book, and having never sought out sources that describe this book from a "neutral" point of view.

Do you view those facts as evidence that I'm an unreasonable person?

Further, if I saw a post about The Protocols Of The Elders Of Zion that conspicuously failed to mention anything about people being oppressed as a result of the book, or a post that buried said discussion until after 28 paragraphs of calm open-minded analysis, well, I think I wouldn't read through the whole piece, and I would also jump to some conclusions about the author. I stand by this being a reasonable thing to do, given that I don't have unlimited time.

By contrast, if I saw a post about The Protocols Of The Elders Of Zion that opened with "I get it, I know what you've heard about this book, but hear me out, I'm going to explain why we should give this book a chance with an open mind, notwithstanding its reputation…", then I would certainly consider reading the piece.

Replies from: fourier, SaidAchmiz
comment by fourier · 2021-11-05T00:26:22.040Z · LW(p) · GW(p)

Your analogy breaks down because the Bell Curve is extremely reasonable, not some forged junk like "The Protocols Of The Elders Of Zion".

If a book reviewed here mentioned evolution and that offended some traditional religious people, would we need to give a disclaimer and potentially leave it off the site? What if some conservative religious people believe that belief in evolution directly harms them? They would be regarded as insane, and so are people offended by TBC.

That's all this is, by the way: left-wing evolution denial. How likely is it that people separated for tens of thousands of years, with different founder populations, will have equal levels of cognitive ability? It's impossible.

comment by Said Achmiz (SaidAchmiz) · 2021-11-03T15:36:03.672Z · LW(p) · GW(p)

I have a sincere belief that The Protocols Of The Elders Of Zion directly contributed to the torture and death of some of my ancestors. I hold this belief despite having never read this book, and having only the vaguest notion of the contents of this book, and having never sought out sources that describe this book from a “neutral” point of view.

Do you view those facts as evidence that I’m an unreasonable person?

Yeah.

“What do you think you know, and how do you think you know it?” never stopped being the rationalist question.

As for the rest of your comment—first of all, my relative levels of interest in reading a book review of the Protocols would be precisely reversed from yours.

Secondly, I want to call attention to this bit:

“… I’m going to explain why we should give this book a chance with an open mind, notwithstanding its reputation…”

There is no particular reason to “give this book a chance”—to what? Convince us of its thesis? Persuade us that it’s harmless? No. The point of reviewing a book is to improve our understanding of the world. The Protocols of the Elders of Zion is a book which had an impact on global events, on world history. The reason to review it is to better understand that history, not to… graciously grant the Protocols the courtesy of having its allotted time in the spotlight.

If you think that the Protocols are insignificant, that they don’t matter (and thus that reading or talking about them is a total waste of our time), that is one thing—but that’s not true, is it? You yourself say that the Protocols had a terrible impact! Of all the things which we should strive our utmost to understand, how can a piece of writing that contributed to some of the worst atrocities in history not be among them? How do you propose to prevent history from repeating, if you refuse, not only to understand it, but even to bear its presence?

The idea that we should strenuously shut our eyes against bad things, that we should forbid any talk of that which is evil, is intellectually toxic.

And the notion that by doing so, we are actually acting in a moral way, a righteous way, is itself the root of evil.

Replies from: anon03
comment by anon03 · 2021-11-03T16:30:50.099Z · LW(p) · GW(p)

Hmm, I think you didn't get what I was saying. A book review of "Protocols of the Elders of Zion" is great, I'm all for it. A book review of "Protocols of the Elders of Zion" which treats it as a perfectly lovely normal book and doesn't say anything about the book being a forgery until you get 28 paragraphs into the review and even then it's barely mentioned is the thing that I would find extremely problematic. Wouldn't you? Wouldn't that seem like kind of a glaring omission? Wouldn't that raise some questions about the author's beliefs and motives in writing the review?

Do you view those facts as evidence that I’m an unreasonable person?

Yeah.

Do you ever, in your life, think that things are true without checking? Do you think that the radius of the earth is 6380 km? (Did you check? Did you look for skeptical sources?) Do you think that lobsters are more closely related to shrimp than to silverfish? (Did you check? Did you look for skeptical sources?) Do you think that it's dangerous to eat an entire bottle of medicine at once? (Did you check? Did you look for skeptical sources?)

I think you're holding people up to an unreasonable standard here. You can't do anything in life without having sources that you generally trust as being probably correct about certain things. In my life, I have at times trusted sources that in retrospect did not deserve my trust. I imagine that this is true of everyone.

Suppose we want to solve that problem. (We do, right?) I feel like you're proposing a solution of "form a community of people who have never trusted anyone about anything". But such a community would be empty! A better solution is: have a bunch of Scott Alexanders, who accept that people currently have beliefs that are wrong, but charitably assume that maybe those people are nevertheless open to reason, and try to meet them where they are and gently persuade them that they might be mistaken. Gradually, in this way, the people (like former-me) who were trusting the wrong sources can escape their bubble and find better sources, including sources who preach the virtues of rationality.

We're not born with an epistemology instruction manual. We all have to find our way, and we probably won't get it right the first time. Splitting the world into "people who already agree with me" and "people who are forever beyond reason", that's the wrong approach. Well, maybe it works for powerful interest groups that can bully people around. We here at lesswrong are not such a group. But we do have the superpower of ability and willingness to bring people to our side via patience and charity and good careful arguments. We should use it! :)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-11-03T16:52:28.410Z · LW(p) · GW(p)

Hmm, I think you didn’t get what I was saying. A book review of “Protocols of the Elders of Zion” is great, I’m all for it. A book review of “Protocols of the Elders of Zion” which treats it as a perfectly lovely normal book and doesn’t say anything about the book being a forgery until you get 28 paragraphs into the review and even then it’s barely mentioned is the thing that I would find extremely problematic. Wouldn’t you? Wouldn’t that seem like kind of a glaring omission? Wouldn’t that raise some questions about the author’s beliefs and motives in writing the review?

I agree completely.

But note that here we are talking about the book’s provenance / authorship / otherwise “metadata”—and certainly not about the book’s impact, effects of its publication, etc. The latter sort of thing may properly be discussed in a “discussion section” subsequent to the main body of the review, or it may simply be left up to a Wikipedia link. I would certainly not require that it preface the book review, before I found that review “acceptable”, or forebore to question the author’s motives, or what have you.

And it would be quite unreasonable to suggest that a post titled “Book Review: The Protocols of the Elders of Zion” is somehow inherently “provocative”, “insulting”, “offensive”, etc., etc.

Do you ever, in your life, think that things are true without checking?

I certainly try not to, though bounded rationality does not permit me always to live up to this goal.

Do you think that the radius of earth is 6380 km? (Did you check? Did you look for skeptical sources?)

I have no beliefs about this one way or the other.

Do you think that lobsters are more closely related to shrimp than to silverfish? (Did you check? Did you look for skeptical sources?)

I have no beliefs about this one way or the other.

Do you think that it’s dangerous to eat an entire bottle of medicine at once? (Did you check? Did you look for skeptical sources?)

Depends on the medicine, but I am given to understand that this is often true. I have “checked” in the sense that I regularly read up on the toxicology and other pharmacokinetic properties of medications I take, or those I might take, or even those I don’t plan to take. Yes, I look for skeptical sources.

My recommendation, in general, is to avoid having opinions about things that don’t affect you; aim for a neutral skepticism. For things that do affect you, investigate; don’t just stumble into beliefs. This is my policy, and it’s served me well.

I think you’re holding people up to an unreasonable standard here. You can’t do anything in life without having sources that you generally trust as being probably correct about certain things. In my life, I have at time trusted sources that in retrospect did not deserve my trust. I imagine that this is true of everyone.

The solution to this is to trust less, check more; decline to have any opinion one way or the other, where doing so doesn’t affect you. And when you have to, trust—but verify.

Strive always to be aware of just how much trust in sources you haven’t checked underlies any belief you hold—and, crucially, adjust the strength of your beliefs accordingly.

And when you’re given an opportunity to check, to verify, to investigate—seize it!

A better solution is: have a bunch of Scott Alexanders, who accept that people currently have beliefs that are wrong, but charitably assume that maybe those people are nevertheless open to reason, and try to meet them where they are and gently persuade them that they might be mistaken.

The principle of charity, as often practiced (here and in other rationalist spaces), can actually be a terrible idea [LW(p) · GW(p)].

But we do have the superpower of ability and willingness to bring people to our side via patience and charity and good careful arguments. We should use it! :)

We should use it only to the extent that it does not in any way reduce our own ability to seek, and find, the truth, and not one iota more.

Replies from: anon03
comment by anon03 · 2021-11-03T18:03:59.692Z · LW(p) · GW(p)

we are talking about the book’s provenance / authorship / otherwise “metadata”—and certainly not about the book’s impact

A belief that "TBC was written by a racist for the express purpose of justifying racism" would seem to qualify as "worth mentioning prominently at the top" under that standard, right?

And it would be quite unreasonable to suggest that a post titled “Book Review: The Protocols of the Elders of Zion” is somehow inherently “provocative”, “insulting”, “offensive”, etc., etc.

I imagine that very few people would find the title by itself insulting; it's really "the title in conjunction with the first paragraph or two" (i.e. far enough to see that the author is not going to talk up-front about the elephant in the room).

Hmm, maybe another better way to say it is: The title plus the genre is what might insult people. The genre of this OP is "a book review that treats the book as a serious good-faith work of nonfiction, which might have some errors, just like any nonfiction book, but also presumably has some interesting facts etc." You don't need to read far or carefully to know that the OP belongs to this genre. It's a very different genre from a (reasonable) book review of "Protocols of the Elders of Zion", or a (reasonable) book review of "Mein Kampf", or a (reasonable) book review of "Harry Potter".

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-11-03T18:33:31.403Z · LW(p) · GW(p)

A belief that “TBC was written by a racist for the express purpose of justifying racism” would seem to qualify as “worth mentioning prominently at the top” under that standard, right?

No, of course not (the more so because it’s a value judgment, not a statement of fact).

The rest of what you say, I have already addressed.

comment by Said Achmiz (SaidAchmiz) · 2021-11-03T14:16:56.362Z · LW(p) · GW(p)

Approach 2 assumes that A is (a) a reasonable person and (b) coming into the situation with good faith. Usually, neither is true.

What is more, your list of two approaches is a very obvious false dichotomy, crafted in such a way as to mock the people you’re disagreeing with. Instead of either the strawman Approach 1 or the unacceptable Approach 2, I endorse the following:

APPROACH 3: Ignore the fact that A (supposedly) finds X “demeaning”. Say (or don’t say) X whenever the situation calls for it. Behave in all ways as if A’s opinion is completely irrelevant.

(Note, by the way, that Approach 2 absolutely does constitute (self-)censorship, as anything that imposes costs on a certain sort of speech—such as, for instance, requiring elaborate genuflection to supposedly “offended” parties, prior to speaking—will serve to discourage that form of speech. Of course, I suspect that this is precisely the goal—and it is also precisely why I reject your suggestion wholeheartedly. Do not feed utility monsters. [LW(p) · GW(p)])

comment by Lukas_Gloor · 2021-11-03T11:23:34.971Z · LW(p) · GW(p)

There's a difference between catering to an audience and proactively framing things in the least explosive way. 

Maybe what you are saying is that when people try to do the latter, they inevitably end up self-censoring and catering to the (hostile) audience?

But that seems false to me. Not only do I think that framing controversial topics in a non-explosive way is a strategically important, underappreciated skill; I also suspect that practicing the skill improves our epistemics. It forces us to engage with a critical audience of people with ideological differences. When I imagine having to write on a controversial topic, one of the readers I mentally simulate is "person who is ideologically biased against me, but still reasonable." I don't cater to unreasonable people, but I want to take care to not put off people who are still "in reach." And if they're reasonable, sometimes they have good reasons behind at least some of their concerns, and their perspectives can be learnt from.

Replies from: Ruby, SaidAchmiz
comment by Ruby · 2021-11-03T18:34:57.749Z · LW(p) · GW(p)

As I mentioned elsethread, if I'd written the book review I would have done what you describe. But I didn't and probably never would have written it out of timidness, and that makes me reluctant to tell someone less timid who did something valuable that they did it wrong.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2021-11-03T21:12:58.505Z · LW(p) · GW(p)

I was just commenting on the general norm. I haven't read the OP and didn't mean to voice an opinion on it.

I'm updating that I don't understand how discussions work. It happens a lot that I object only to a particular feature of an argument, or to one particular argument, yet my comments are interpreted as endorsing an entire side of a complicated debate.

FWIW, I think the "caving in" discussed/contemplated in Rafael Harth's comments is something I find intuitively repugnant. It feels like giving up your soul for some very dubious potential benefits. Intellectually I can see some merits for it but I suspect (and very much like to believe) that it's a bad strategy.

Maybe I would focus more on criticizing this caving in mentality if I didn't feel like I was preaching to the choir. "Open discussion" norms feel so ingrained on Lesswrong that I'm more worried that other good norms get lost / overlooked.

Maybe I would feel different (more "under attack") if I was more emotionally invested in the community and felt like something I helped build was under attack from norm erosion. I feel presently more concerned about dangers from evaporative cooling, where many who care to a not-small degree about "soft virtues in discussions related to tone/tact/welcomingness, but NOT in a strawmanned sense" end up becoming less active or avoiding the comment sections.

Edit: The virtue I mean is maybe best described as "presenting your side in a way that isn't just persuasive to people who think like you, but even reaches the most receptive percentage of the outgroup that's predisposed to be suspicious of you." 

comment by Said Achmiz (SaidAchmiz) · 2021-11-03T11:46:37.055Z · LW(p) · GW(p)

This is a moot point, because anyone who finds a post title like “Book review: The Bell Curve by Charles Murray” to be “controversial”, “explosive”, etc., is manifestly unreasonable.

Replies from: anon03
comment by anon03 · 2021-11-03T13:03:22.813Z · LW(p) · GW(p)

My comment here [LW(p) · GW(p)] argues that a reasonable person could find this post insulting.

comment by Ben Pace (Benito) · 2021-11-03T00:29:27.367Z · LW(p) · GW(p)

(Upvote, but disagree.)

comment by steven0461 · 2021-11-03T03:13:26.405Z · LW(p) · GW(p)

This could be through any number of mechanisms like

A story I'm worried about goes something like:

  • LW correctly comes to believe that for an AI to be aligned, its cognitive turboencabulator needs a base plate of prefabulated amulite
  • the leader of an AI project tries to make the base plate out of unprefabulated amulite
  • another member of the project mentions off-hand one time that some people think it should be prefabulated
  • the project leader thinks, "prefabulation, wasn't that one of the pet issues of those Bell Curve bros? well, whatever, let's just go ahead"
  • the AI is built as planned and attains superhuman intelligence, but its cognitive turboencabulator fails, causing human extinction

comment by Ericf · 2021-11-03T13:12:28.757Z · LW(p) · GW(p)

Meta-meta note:

Even if a theoretical author cares not one whit about appearing to endorse "bad things" #scarequotes, including preemptive disclaimers is still good practice to forestall this sort of meta-commentary and keep the comments focused on the content of the post, and not the method of delivery.

comment by anon03 · 2021-11-02T14:45:31.001Z · LW(p) · GW(p)

Strong-downvoted. I want lesswrong to be a peaceful place where we can have polite boring truth-seeking arguments without incurring reputational risk / guilt-by-association. I understand the benefit of having polite boring truth-seeking arguments about racism-adjacent topics that take sides in an incredibly incendiary culture war. However, there is also a cost—namely, there's a public good called "right now there is minimal reputational risk of being publicly IRL known as a lesswrong participant", and each time there's a post about racism-adjacent topics that takes sides in an incredibly incendiary culture war, we're shooting a flaming arrow at that public good, and hoping we get lucky and the whole thing doesn't burn to the ground.

There are simple ways to get almost all the benefit with almost none of the cost, namely: (1) post on a different site (especially a different site that allows comments) and (if you must) do a linkpost here, (2) pick a post title / framing that's less provocative (UPDATE: see my comment here [LW(p) · GW(p)] for why I think reasonable people could find this post provocative / insulting), (3) put more effort into not saying racist / racist-adjacent things (or to be charitable, "things that would plausibly come across as racist"), like (to pick one example) how the word "better" in "a secretary or a dentist who is one standard deviation better than average is worth a 40% premium in salary" seems to be thoughtlessly equating g-factor with moral worth, (4) seriously engage with intelligent criticism of this book, including describing in a sympathetic / ITT-passing [? · GW] way why people might have found this book hurtful or problematic, or if you can't be bothered to do that, then maybe you shouldn't write the post in the first place.

Replies from: Daniel V, lc, Ruby, fourier
comment by Daniel V · 2021-11-02T16:28:22.318Z · LW(p) · GW(p)

I agree the post didn't address Murray's points that critically or look deeply into the long list of critiques of the book, but it's a useful summary of the main points (with some criticism here and there), which I think was the point.

I'm not sure how most of these options would ensure the benefit of summarizing without the cost of reputational risk: (1) This one might, until the connections are easily followed by, say, the NYT or any random internet sleuth; (2) Maybe the title has been edited (?), but I'm not seeing a provocative title or framing, most of it isn't even about race; (3) The example here isn't even about race and is obviously not about moral worth though the general point is good from an editing standpoint; (4) Certainly this would enhance the contribution (I wanted some of this myself), but particularly when it comes to The Bell Curve, people have this misconception that it's just a racist screed, so a summary from someone who actually read the book is helpful to start. Maybe a summary just isn't up to the contribution level of a LW post and one should hit a higher bar - but that's a norm that has yet to be established IMHO.

Intelligence and race are both uncomfortable topics, and mixing them together is even more uncomfortable. If LW wants a norm of not discussing particular uncomfortable topics, then okay! But at least let it be through topic-screening rather than overblowing what is actually being said in a post.

Replies from: anon03
comment by anon03 · 2021-11-02T19:48:59.348Z · LW(p) · GW(p)

…until the connections are easily followed by, say, the NYT or any random internet sleuth…

I think there's a widespread perception in society that "being a platform that hosts racist content" is very much worse than "being a site where one can find a hyperlink to racist content". I'm not necessarily endorsing that distinction, but I'm quite confident that it exists in many people's minds.

I'm not seeing a provocative title or framing

Hmm, maybe you're from a different part of the world / subculture or something. But in cosmopolitan USA culture, merely mentioning TBC (without savagely criticizing it in the same breath) is widely and instantly recognized as a strongly provocative and hurtful and line-crossing thing to do. Saying "Hey, I'm just reading TBC with a curious and open mind, I'm not endorsing every word" is perceived to be kinda like saying "Hey, I'm just studying the philosophy of Nazism with a curious and open mind, I'm not endorsing every word" or "Hey, I'm just reading this argument for legalizing child rape with a curious and open mind, I'm not endorsing every word" or whatever.

If you're writing a post that some rape law doesn't actually help with rape despite popular perceptions, you open with a statement that rape is in fact bad and you do in fact want to reduce it, and you write it in a way that's sympathetic to people who have been really harmed by rape. By the same token, if you're writing a post that says reading TBC does not actually perpetuate racism despite popular perceptions, you open with a statement that racism is bad and you do in fact want to reduce it, and you write it in a way that's sympathetic to people who have been really seriously harmed by racism.

Replies from: SaidAchmiz, MondSemmel
comment by Said Achmiz (SaidAchmiz) · 2021-11-02T20:05:59.597Z · LW(p) · GW(p)

Hmm, maybe you’re from a different part of the world / subculture or something. But in cosmopolitan USA culture, merely mentioning TBC (without savagely criticizing it in the same breath) is widely and instantly recognized as a strongly provocative and hurtful and line-crossing thing to do.

This may or may not be true, but it is truly shocking to see someone endorsing this standard on Less Wrong, of all places. It’s difficult to think of a more neutrally descriptive title than this post has—it’s almost the Platonic ideal of “neutrally descriptive”! To suggest that we should treat any mention of a book as “provocative” is, frankly, something that I find offensive.

Saying “Hey, I’m just reading TBC with a curious and open mind, I’m not endorsing every word” is kinda like saying “Hey, I’m just studying the philosophy of Nazism with a curious and open mind, I’m not endorsing every word” or “Hey, I’m just reading this argument for legalizing child rape with a curious and open mind, I’m not endorsing every word” or whatever.

This is a truly ludicrous comparison, which makes me disinclined to trust that you’re commenting in good faith.

If you’re writing a post that some rape law doesn’t actually help with rape despite popular perceptions, you open with a statement that rape is in fact bad and you do in fact want to reduce it, and you write it in a way that’s sympathetic to people who have been really harmed by rape. By the same token, if you’re writing a post that says reading TBC does not actually perpetuate racism despite popular perceptions, you open with a statement that racism is bad and you do in fact want to reduce it, and you write it in a way that’s sympathetic to people who have been really seriously harmed by racism.

This is exactly the sort of thing we should not be doing.

Replies from: anon03, anon03
comment by anon03 · 2021-11-02T20:59:39.090Z · LW(p) · GW(p)

it is truly shocking to see someone endorsing this standard on Less Wrong

I don't think I was endorsing it, I was stating (what I believe to be) a fact about how lots of people perceive certain things.

I used the term "provocative" as a descriptive (not normative) statement: it means "a thing that provokes people". I didn't run a survey, but my very strong belief is that "provocative" is an accurate description here.

I do think we should take actions that achieve goals we want in the universe we actually live in, even if this universe is different than the universe we want to live in. If something is liable to provoke people, and we wish it weren't liable to provoke people, we should still consider acting as if it is in fact liable to provoke people. For example, we can consider what are the consequences of provoking people, and do we care, and if we do care, how much effort and cost is required to not provoke people. My suggestion is that this is a case where provoking people has really bad potential consequences, and where not provoking people is an eminently feasible alternative with minimal costs, and therefore we should choose to not provoke people.

This is exactly the sort of thing we should not be doing.

I read Robin's blog post as saying that disclaimers are kinda annoying (which is fair enough), not that they are a very very bad thing that must never be done. I think we can take it on a case-by-case basis, weighing the costs and benefits.

Replies from: MondSemmel
comment by MondSemmel · 2021-11-03T15:12:17.053Z · LW(p) · GW(p)

and where not provoking people is an eminently feasible alternative with minimal costs

Considering that your suggestion is getting pushback, isn't that indicative of this being a fabricated option [LW · GW]? In suggesting to change the culture of Less Wrong, you inevitably get pushback from those who like the status quo. What's the option that offends nobody?

That said, if you just mean that the review could be written better, that may be the case; I haven't read it yet. In any case, I don't have the impression that lsusr's writing style is necessarily one for subtlety and disclaimers. Imposing restrictions on our prolific writers, merely because they might occasionally say something potentially controversial, seems misguided.

A more appropriate response in those cases would be to just strong-downvote the thing you find controversial, and explain why you did so. I did such a thing with this [LW · GW] LW post, which I did indeed find potentially damaging to LW culture (though my response in that case was strong-downvoting, rather than advocating censorship). Reasoning here [LW(p) · GW(p)]. Crucially, in that case I did not argue that the post offended hypothetical third parties or the larger society, but me personally, which forced me to be very specific about what parts of the post I did not like, and my reasons for why.


That said, I think LW overall benefits from its "no frontpaged politics" rule, and hence prefer posts like this one not to be frontpaged, which indeed did not happen. A recent post about Dominic Cummings was briefly frontpaged and did get lots of pushback and rather unproductive political discussions in the comments, but that was eventually rectified by un-frontpaging it. I think that state of affairs is completely fine.

PS: If one does want to develop the skill of saying things more delicately and with ample disclaimers, Scott's Nonfiction Writing Advice covers this among other things; and when he wrote about Charles Murray in some other context, he did indeed use tons of disclaimers; but this was still not enough to deter a NYT hitpiece, once he got famous enough.

Replies from: anon03
comment by anon03 · 2021-11-03T15:47:01.219Z · LW(p) · GW(p)

I don't think my suggestions are getting pushback; I think that my suggestions are being pattern-matched to "let's all self-censor / cower before the woke mob" and everyone loves having that debate at the slightest pretense. For example, I maintain that my suggestion of "post at another site and linkpost from here, in certain special situations" is next-to-zero-cost, for significant benefit. Indeed, some people routinely post-elsewhere-and-linkpost, for no reason in particular. (The OP author already has a self-hosted blog, so there's no inconvenience.) This seems to me like a prudent, win-win move, and if people aren't jumping on it, I'm tempted to speculate that people are here for the fun signaling not the boring problem-solving / world-optimizing.

Imposing restrictions on our prolific writers

That's not a useful framing. The mods have indicated that they won't impose restrictions. Instead, I am trying to persuade people.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2021-11-07T02:54:16.733Z · LW(p) · GW(p)

Although this is a very controversial topic I feel the need to offer a hopefully helpful observation to reduce the tension.

Namely that all these imputed motives, and most of the overall discussion on motives, can be true simultaneously in a single individual. Because people vary in their motives and capacities over time and can truly believe in contradictory positions while typing. (Dependent on anything, such as the phases of the moon, their last conversation with parents, the colour of their hat, etc…)

That is someone could be here for:

  1. ‘fun signalling’
  2. ‘boring problem solving’
  3. ‘cowering before woke mobs’
  4. fighting against ‘cowering before woke mobs’
  5. making others ‘cower before woke mobs’ but personally reject such
  6. helping others fight against ‘cowering before woke mobs’ while accepting such personally
  7. enforcing self censorship on others
  8. revolting against any imposition of self censorship on them
  9. enforcing self censorship on themselves but fighting against any imposition of the same on others
  10. engaging in controversial debate at the ‘slightest pretense’
  11. rejecting all controversial debate at the ’slightest pretense’
  12. making others engage in controversial debate at the ‘slightest pretense’ but personally avoiding and vice versa
  13. and so on…

simultaneously 

So speculating on motives may not be the most efficient way to convince someone, even if they genuinely agree with every rational criticism. If you really want to implement the ‘linked post’ solution, maybe there is an immensely convincing argument that the upsides of forcing such behaviour are greater than the downsides?

comment by anon03 · 2021-11-03T00:29:24.736Z · LW(p) · GW(p)

This is exactly the sort of thing we should not be doing.

I should also add that Duncan has a recent post [LW · GW] enthusiastically endorsing the idea that we should try to anticipate how other people might misinterpret what we say, and clarify that we do not in fact mean those things. That post got a lot of upvotes and no negative comments. But it seems to me that Duncan's advice is in direct opposition to Robin's advice. Do you think Duncan's post is really bad advice? Or if not, how do you reconcile them?

Replies from: Daniel V, SaidAchmiz
comment by Daniel V · 2021-11-03T02:45:06.968Z · LW(p) · GW(p)

Robin and Duncan are both right. Speakers and listeners should strive to understand each other. Speakers should anticipate, and listeners should be charitable. There are also exceptions to these rules (largely due to either high familiarity or bad faith), but we should as a whole strive for communication norms that allow for concision.

Recommending disclaimers, recommending almost-another-post's-worth-of-wrestling, censorship...all are on a spectrum. Reasonable cases can be made for the options before outright censorship. I am of the opinion that additional critique is beneficial but should not be required of all posts, basic disclaimers are not beneficial but not very costly either, and censorship is usually wrong.

To a previous point of yours, if someone posted a summary of Mein Kampf on here, I'd be pretty taken aback by the lack of fit (which is saying something, since this place is pretty eclectic), and I could see that as threatening to the community, given how some outsider might react to it. I mean, I guess I would learn what's in it, instead of relying on a one-sentence high school teacher's summary passed down from teacher to teacher, and without having to subject myself to it - so that'd be nice, since I like to learn about things but don't want to read Nazi propaganda (assuming the summary is written as a summary rather than an endorsement). But I think there is a lot of daylight between that and TBC. I understand there are many people out there who do not agree, but one takeaway from this summary and JenniferRM's comment is that those people...are mistaken.

I know there is the consequentialist argument that it doesn't matter if they're wrong if they're the ones with the gun, and we can't know for sure how the appearance of a TBC summary will be received in the future, but there are a couple of other things to do here: work to make them right instead of wrong, or help proliferate norms such that they don't have that gun later. Meh, it is indeed simplest and easiest to just not talk about uncomfortable subjects...

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-11-03T04:20:47.971Z · LW(p) · GW(p)

If someone posted a summary of Mein Kampf on here, I would be quite interested to read it! I’ve never read that book myself (and I’m not sure that I could quite bear to do so—which is a personal weakness/fault, I hasten to add, not something at all to be proud of), but I am a firm believer in being familiar with the views of your opponents… or your enemies. If someone were to write a high-quality review of Mein Kampf for Less Wrong, I expect that I’d find it edifying, and it would save me the trouble of, you know… actually slogging through Adolf Hitler’s writing (which I have heard is rather tedious, even setting aside all the evil).

As for “fit”, well, that’s what personal pages are for, yes? If we can have discussions of obscure computer games, shower thoughts about fruit jam, “rational dating profiles”, and so on (all posted on people’s personal pages), then I really don’t see why we can’t have book reviews of… pretty much anything.

Replies from: Ape in the coat
comment by Ape in the coat · 2021-11-05T08:32:38.599Z · LW(p) · GW(p)

That would've been extremely virtuous of you, if fascists actually were your outgroup [LW · GW].

It seems that a better example for your case would be some book by a modern, extremely woke leftist. Notice how shocked you are that someone might be endorsing the wokist perception of TBC, how you'd wish that it, as well as disclaimer culture, were below the sanity waterline. How tempting it is to assume by default that the social justice crowd are unreasonable and arguing in bad faith, while finding their claims that Murray wrote the book in bad faith mildly offensive. How you feel your own raison d’etre being threatened. Now that looks like an outgroup.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-11-05T13:09:54.612Z · LW(p) · GW(p)

What do you mean, “would’ve”? That wasn’t a hypothetical scenario. I wasn’t using Mein Kampf as an example of anything; I was talking about the actual thing.

As for the rest of your comment—there’s no “assume” about it.

Replies from: Vaniver
comment by Vaniver · 2021-11-05T16:42:46.547Z · LW(p) · GW(p)

What do you mean, “would’ve”? 

I understand Ape in the coat to be saying the bit from I Can Tolerate Anything Except the Outgroup, “Thus do you gain no merit by tolerating them!”, implying that you have nothing against fascists, and contrasting that to a book review by a woke leftist, like perhaps the White Fragility review [LW · GW] that was posted back in September.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-11-05T17:53:03.100Z · LW(p) · GW(p)

If so, then that’s an absurd thing to say. Given my background, saying that I have nothing against fascists is one heck of a claim…

Replies from: Ape in the coat
comment by Ape in the coat · 2021-11-06T08:44:05.045Z · LW(p) · GW(p)

I'm not saying that you literally have nothing against fascists. I'm pretty sure you disagree with them on nearly every subject, find them generally evil and do not really want to associate with them. I'm saying that they are not your outgroup in the same sense that Osama bin Laden wasn't the outgroup for the blue tribe while Margaret Thatcher was:

...Blue Tribe – can’t get together enough energy to really hate Osama, let alone Muslims in general. We understand that what he did was bad, but it didn’t anger us personally. When he died, we were able to very rationally apply our better nature and our Far Mode beliefs about how it’s never right to be happy about anyone else’s death.

On the other hand, that same group absolutely loathed Thatcher. Most of us (though not all) can agree, if the question is posed explicitly, that Osama was a worse person than Thatcher. But in terms of actual gut feeling? Osama provokes a snap judgment of “flawed human being”, Thatcher a snap judgment of “scum”.

Fascists are not your outgroup; they are the outgroup of your outgroup. Contrary to a popular belief, this doesn't make them your friends or allies. Sometimes you can benefit them, for instance by arguing in favour of making LessWrong more attractive for them and less attractive for social-justice-related people, but it only happens by chance. For you, fascists are mainly an easy way to demonstrate how rational and tolerant you are, compared to the woke leftists - your real outgroup. Thus scoring some points in a social game.

comment by Said Achmiz (SaidAchmiz) · 2021-11-03T04:08:17.208Z · LW(p) · GW(p)

Do you think Duncan’s post is really bad advice?

Yes.

comment by MondSemmel · 2021-11-03T18:01:10.964Z · LW(p) · GW(p)

Hmm, maybe you're from a different part of the world / subculture or something. But in cosmopolitan USA culture, merely mentioning TBC (without savagely criticizing it in the same breath) is widely and instantly recognized as a strongly provocative and hurtful and line-crossing thing to do.

If this is an argument for adding disclaimers or pulling one's punches, would you agree that LW also shouldn't host anything critical of the People's Republic of China, even if the rest of the world does not consider the topic provocative? If not, what's the difference? (Asking as someone who's from neither place.) Does the answer depend on how many readers are from the U.S. vs. from China?

More generally, to which extent should a presumably international site like Less Wrong conform to the cultural assumptions of the U.S. in particular?

Replies from: anon03
comment by anon03 · 2021-11-03T18:24:04.890Z · LW(p) · GW(p)

I want to say loud and clear that I don't think the only two options are (1) "saying X in a way that will predictably and deeply hurt lots of people and/or piss them off" and (2) "not saying X at all". There's also the option of (3) "saying X in a way that will bring anti-X-ers to change their mind and join your side". And also sometimes there's (4) "saying X in a kinda low-key way where anti-X-ers won't really care or take notice, or at least won't try to take revenge on things that we care about".

My sense is that there's safety-in-numbers in saying "obviously Tiananmen Square is a thing that happened", in a way that there is not safety-in-numbers in saying "obviously TBC is a perfectly lovely normal book full of interesting insights written in good faith by a smart and reasonable person who is not racist in the slightest".

But still, if lots and lots of people in China believe Z, and I were writing a post that says "Here's why Z is false", I would try to write it in a way that might be persuasive to initially-skeptical Chinese readers. And if I were writing a post that says "Z is false, and this has interesting implications on A,B,C", I would try to open it with "Side note: I'm taking it for granted that Z is false for the purpose of this post. Not everyone agrees with me that Z is false. But I really think I'm right about this, and here's a link to a different article that makes that argument in great detail."

comment by lc · 2021-11-02T22:31:43.644Z · LW(p) · GW(p)

lsusr actually wrote a post [LW · GW] about this contention.

comment by Ruby · 2021-11-03T03:47:08.435Z · LW(p) · GW(p)

You might be interested in the comment [LW(p) · GW(p)] I posted on the other thread.

comment by fourier · 2021-11-04T23:51:21.881Z · LW(p) · GW(p)

People on this site should stop pretending to be rational and calling themselves "rationalists" if they're not willing to seek truth just because some people find it offensive.  And it should change its name from "lesswrong".

comment by Aryeh Englander (alenglander) · 2021-11-06T12:05:56.350Z · LW(p) · GW(p)

Could someone please steelman the position of people who disagree strongly with this book? Which parts of the book are considered factually or logically incorrect, which parts do people object to so strongly on moral grounds, etc.?

Replies from: Ape in the coat
comment by Ape in the coat · 2021-11-16T06:50:05.936Z · LW(p) · GW(p)

There are a couple of books pointing out mistakes and methodological problems with the Bell Curve. Maybe we will get their reviews as well in the future.

If you are not allergic to long YouTube videos, you may be interested in this fairly reasonable and thorough critique of The Bell Curve from the left.

In a nutshell, Murray bases his conclusions on a lot of epistemically poor research, often uses shady methodology, contradicts himself in ways that hint the book was written in bad faith, and, despite all his neutral tone, smuggles in a harmful political agenda.

Probably the most obvious example of bad research is Richard Lynn's review, which Murray uses as a control group for African-Americans. The data for this review was cherry-picked, ironically didn't produce a bell curve, wasn't supposed to be translated into IQ scores, and was mainly collected from South Africans during apartheid.

Murray's impressive claim that IQ score is a better predictor of success than parents' socioeconomic status becomes much less impressive once we know that Murray omits lots of relevant factors when computing SES; as soon as we include them in the calculation, social status becomes a better predictor of success than IQ.
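(To make that methodological point concrete, here is a toy simulation with entirely made-up numbers, not data from Murray or his critics; the variable names `ses_full`, `ses_narrow`, and `iq` are hypothetical placeholders. It only illustrates how an under-specified SES index can make a correlated predictor look like the stronger one.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical SES components; the "narrow" index uses only the first two.
components = rng.standard_normal((6, n))
ses_full = components.mean(axis=0)
ses_narrow = components[:2].mean(axis=0)

iq = 0.5 * ses_full + rng.standard_normal(n)                 # correlated with SES
outcome = 1.5 * ses_full + 0.5 * iq + rng.standard_normal(n)

def r2(x, y):
    """Share of variance in y explained by a univariate linear fit on x."""
    return np.corrcoef(x, y)[0, 1] ** 2

print(f"R^2, narrow SES index: {r2(ses_narrow, outcome):.2f}")  # ~0.10
print(f"R^2, IQ:               {r2(iq, outcome):.2f}")          # ~0.23
print(f"R^2, full SES:         {r2(ses_full, outcome):.2f}")    # ~0.29
# Which predictor "wins" here depends entirely on how completely SES is measured.
```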

As for contradictions, The Bell Curve goes from stating that even if intelligence were 100% genetically determined it wouldn't change anything in our society, to claiming that due to the 60% heritability of intelligence we must implement specific policies or else we are in trouble. In between, Murray mixes up the heritability of a trait in a population with the degree to which the trait is genetically determined in an individual, despite mentioning that these are different things. In general he is ready to acknowledge possible counterarguments, but they don't seem to affect his conclusions in any way. This creates the impression that he has already written the bottom line [LW · GW] and is just creating some plausible deniability for himself.
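(A minimal sketch of that population-versus-individual distinction, again with invented numbers: heritability is a variance ratio computed over a population, and by itself it says nothing about how fixed any individual's trait value is, nor about what a changed environment would do.)

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

genetic = rng.normal(0.0, 1.0, n)       # hypothetical genetic contributions
environment = rng.normal(0.0, 1.0, n)   # hypothetical environmental contributions
phenotype = genetic + environment

h2 = genetic.var() / phenotype.var()
print(f"heritability in this population: {h2:.2f}")              # ~0.50

# A uniform environmental improvement shifts every individual's trait value...
phenotype_shifted = genetic + environment + 2.0
h2_shifted = genetic.var() / phenotype_shifted.var()
print(f"after a uniform environmental shift: {h2_shifted:.2f}")  # ...yet still ~0.50
```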

comment by Mary Chernyshenko (mary-chernyshenko) · 2021-11-12T19:39:14.846Z · LW(p) · GW(p)

Please be alright, it's a horrible thing to be robbed. I don't think I can help from here, but still.

Replies from: lsusr
comment by lsusr · 2021-11-13T00:04:18.461Z · LW(p) · GW(p)

There's nothing to worry about, but thanks. I didn't even lose my phone.

comment by Zmavli Caimle · 2021-11-02T10:10:56.421Z · LW(p) · GW(p)

Possible typo: "Being smart causes work-inhibiting disability." given that the chart you then show says the opposite.

Replies from: lsusr
comment by lsusr · 2021-11-02T21:06:17.185Z · LW(p) · GW(p)

It was a typo. Fixed. Thanks.

comment by burner2501 · 2021-11-03T17:49:17.565Z · LW(p) · GW(p)

Archived.

https://web.archive.org/web/20211103174524/https://www.lesswrong.com/posts/vvc2MiZvWgMFaSbhx/book-review-the-bell-curve-by-charles-murray