Open thread, Jan. 23 - Jan. 29, 2017

post by MrMind · 2017-01-23T07:41:31.185Z · LW · GW · Legacy · 196 comments

If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "…"

196 comments

Comments sorted by top scores.

comment by James_Miller · 2017-01-24T00:55:28.120Z · LW(p) · GW(p)

Prediction: Government regulations greatly reduce economic growth. Trump, with the help of the Republican Congress, is going to significantly cut regulations, and this is going to supercharge economic growth, allowing Trump to win reelection in a true landslide.

Replies from: Douglas_Knight, gjm, ChristianKl, knb, waveman
comment by Douglas_Knight · 2017-01-24T01:25:22.703Z · LW(p) · GW(p)

Do you want to put a probability on that? Also, break it down into a bunch of steps. Be precise. Include timelines.

Has anything like that ever happened in the entire history of the world? In four years? For example, most of what Reagan is credited with doing to the economy was either done by Carter or in Reagan's second term.

Why do you believe that federal regulations are a significant portion of the total?

Replies from: James_Miller
comment by James_Miller · 2017-01-24T01:34:25.714Z · LW(p) · GW(p)

Has anything like that ever happened in the entire history of the world

Yes, China after Mao.

It might not just be federal regulations. For example, if Republicans passed a freedom-to-build law that allowed landowners to quickly get permission to build, we would see a massive construction boom.

Replies from: Douglas_Knight, ChristianKl
comment by Douglas_Knight · 2017-01-24T02:06:03.432Z · LW(p) · GW(p)

You made a strong conjunction: that deregulation leads to economic growth, which leads to popular support for the regime, all within four years. That definitely did not happen in China in the first four years after Mao's death. Maybe if you cherry-pick 1980-1984 as the beginning of Deng's real hold on power it is an example, but I doubt it.

Sure, if you want to open up the pathways and no longer predict a conjunction, I can't stop you, but I do complain that this is a new prediction. But predicting that Trump will abolish states' rights quickly enough to have economic effects doesn't seem very plausible to me. I wouldn't be focusing on elections in that scenario.

Replies from: drethelin
comment by drethelin · 2017-01-26T00:16:14.398Z · LW(p) · GW(p)

The US regime operates on popular support in a way very unlike that of China.

Replies from: ChristianKl
comment by ChristianKl · 2017-01-26T10:09:42.404Z · LW(p) · GW(p)

The difference is that the regime in China actually has popular support while the US regime doesn't.

comment by ChristianKl · 2017-01-26T10:10:28.240Z · LW(p) · GW(p)

For example, if Republicans passed a freedom-to-build law that allowed landowners to quickly get permission to build, we would see a massive construction boom.

Given that land use is mostly legislated by the individual states, why do you think a Republican Congress would infringe on state laws that strongly?

Replies from: James_Miller
comment by James_Miller · 2017-01-26T14:57:03.110Z · LW(p) · GW(p)

The commerce clause, plus the fact that big cities are controlled by the left (so Republicans would be willing to step on their power), plus Trump being a builder.

Replies from: ChristianKl
comment by ChristianKl · 2017-01-26T21:05:58.731Z · LW(p) · GW(p)

What's your credence for this event?

Replies from: James_Miller
comment by James_Miller · 2017-01-27T01:10:29.572Z · LW(p) · GW(p)

Over the next 4 years, 50% that Republicans will enact such a law or that Trump will use regulations to make it easier to build by, for example, claiming it's racist for San Francisco to restrict the building of low-income housing. But I'm not willing to bet on this because it would be hard to define the bet's winning conditions.

comment by gjm · 2017-01-24T03:08:19.234Z · LW(p) · GW(p)

Would you like to quantify that enough that we can look back in a few years and see whether you got it right?

Replies from: knb
comment by knb · 2017-01-26T06:04:32.968Z · LW(p) · GW(p)

I think it's a clear enough prediction, but putting some actual numbers on it would be useful. Personally, I would put the odds of a Trump landslide well under 50% even contingent on "supercharged" economic growth. Maybe 25%. Politics is too identity-oriented now to see anything like the Reagan landslides in the near future.

comment by ChristianKl · 2017-01-26T10:06:33.370Z · LW(p) · GW(p)

Could you operationalize the terms? What's a landslide? And what probability do you attach to that event?

comment by knb · 2017-01-26T05:58:02.355Z · LW(p) · GW(p)

Kudos for making a clear prediction.

I voted for Trump, but I don't think there is any realistic possibility of a Trump landslide, even if the economy grows very well for the next 4 years. The country is just too bitterly divided along social lines for economic prosperity to deliver one candidate a landslide (assuming a landslide in the popular vote means at least a 10% margin of victory).

In terms of economic growth, I wonder what you mean by "supercharge". I think 4% is pretty unlikely. If the US manages an annual average of 3.0% for the next 4 years, that would be a good improvement, but I don't think that could really be called "supercharged."

Trump's job approval looks pretty good right now considering the unrelenting negative press, so I think Trump is likely to be re-elected if he chooses to run in 2020.

Replies from: satt
comment by satt · 2017-01-26T22:29:54.444Z · LW(p) · GW(p)

The country is just too bitterly divided along social lines for economic prosperity to deliver one candidate a landslide (assuming a landslide in the popular vote means at least a 10% margin of victory).

Assuming that

  1. the US does in fact hold a nationwide presidential election in 2020,

  2. the Democratic & Republican parties get > 90% of the votes in 2020, and

  3. US military fatalities in new, unprovoked, foreign wars are minimal (< 3000),

I predict:

  • a Trump landslide with probability 20%, assuming 2% weighted growth over Trump's term

  • a Trump landslide with probability 70%, assuming 3% weighted growth over Trump's term

  • a Trump landslide with probability 95%, assuming 4% weighted growth over Trump's term

using your definition of landslide, and defining "weighted growth" as annualized growth in quarterly, inflation-adjusted, disposable, personal income per capita, with recent growth weighted more heavily with a discount factor λ of 0.9. (See Hibbs for more details.) This is a clear prediction.
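
For concreteness, here is a minimal Python sketch of one way to compute such a discount-weighted growth measure. The quarterly figures are made up, and the exact weighting in Hibbs's model may differ; this illustrates the λ-discounting idea, not satt's actual calculation.

    # Hypothetical sketch of a discount-weighted growth measure in the spirit
    # of Hibbs's model; the exact weighting in Hibbs's work may differ.
    def weighted_growth(quarterly_growth, lam=0.9):
        """quarterly_growth: annualized real per-capita disposable-income
        growth rates, oldest quarter first; lam: per-quarter discount."""
        n = len(quarterly_growth)
        # Quarter i (0 = oldest) is (n - 1 - i) quarters old at election time.
        weights = [lam ** (n - 1 - i) for i in range(n)]
        return sum(w * g for w, g in zip(weights, quarterly_growth)) / sum(weights)

    # A made-up 16-quarter presidential term, most recent quarter last:
    growth = [1.0, 1.5, 2.0, 2.5, 2.0, 1.5, 2.5, 3.0,
              2.0, 2.5, 3.0, 2.0, 2.5, 3.0, 3.5, 3.0]
    print(round(weighted_growth(growth), 2))  # recent quarters dominate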

I'm more doubtful of chatter about rising political polarization than I am of fundamentals-based models of voting, and those models highlight the economy & war as the factors that matter most. As such I reckon sufficient economic prosperity could in fact produce a landslide for Trump (and virtually any incumbent, really).

comment by waveman · 2017-01-24T02:41:36.844Z · LW(p) · GW(p)

You should take into account that tariff and other barriers to trade are a form of government regulation.

Replies from: satt
comment by satt · 2017-01-26T23:24:15.695Z · LW(p) · GW(p)

I doubt the remaining trade barriers imposed by the US government are making much difference to overall US growth. As far as I know, models which don't crowbar in optimistic second-order effects (like big jumps in productivity) estimate that trade liberalization would raise US GDP by ~ $10 billion a year. That's a big number, but surely one has to compare it to existing US GDP: $18,560 billion a year.

This gives me the back-of-the-envelope estimate that trade barriers are depriving the US of about 0.05% of GDP. American voters would scarcely notice that.
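
A two-line check reproduces that estimate from the figures quoted above (an illustrative aside, not satt's own code):

    # Back-of-the-envelope check using the figures quoted above:
    gain_bn = 10      # estimated annual GDP gain from liberalization, $ billions
    gdp_bn = 18_560   # annual US GDP, $ billions
    print(f"{gain_bn / gdp_bn:.2%}")  # -> 0.05%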

Replies from: waveman
comment by waveman · 2017-07-11T01:01:53.152Z · LW(p) · GW(p)

Trump was saying he would increase trade barriers, so current levels are not the point.

Replies from: satt
comment by satt · 2017-07-15T11:47:13.245Z · LW(p) · GW(p)

I think in January I read you as amplifying James_Miller's point, giving "tariff and other barriers" as an example of something to slot into his "Government regulations" claim (which is why I thought my comment was germane). But in light of your new comment I probably got your original intent backwards? In which case, fair enough!

comment by gjm · 2017-01-24T15:05:37.118Z · LW(p) · GW(p)

Derek Parfit (author of "Reasons and Persons", a very influential work of analytic philosophy much of which is concerned with questions of personal identity and which comes up with decidedly LW-ish answers to most of its questions) has died. (He actually died a few weeks ago, but I only just heard of it, and I haven't seen his death mentioned on LW.)

Replies from: None
comment by [deleted] · 2017-01-25T18:16:06.694Z · LW(p) · GW(p)

Also the namesake of Parfit's Hitchhiker.

comment by Viliam · 2017-01-25T17:13:27.653Z · LW(p) · GW(p)

A few years ago I used to be a hothead. Whenever anyone said anything, I’d think of a way to disagree. I’d push back hard if something didn’t fit my world-view.

It’s like I had to be first with an opinion – as if being first meant something. But what it really meant was that I wasn’t thinking hard enough about the problem. The faster you react, the less you think. Not always, but often.

-- Give it five minutes

Replies from: MrMind
comment by MrMind · 2017-01-26T08:41:44.905Z · LW(p) · GW(p)

Absolutely, thumbs up. I strive to achieve this: often, when I write down my first thoughts, it becomes clear only later that I misread, or missed the main point, or am completely wrong.
On the other hand, it's true that people's attention is scarce and precious: if you reply the day after, there's already nobody listening. So try to strike a balance between impulse and reflection...

comment by phl43 · 2017-01-26T21:16:31.375Z · LW(p) · GW(p)

Hi everyone,

I'm a PhD candidate at Cornell, where I work on logic and philosophy of science. I learned about Less Wrong from Slate Star Codex, and someone I used to date told me she really liked it. I recently started a blog where I plan to post my thoughts about random topics: http://necpluribusimpar.net. For instance, I wrote a post (http://necpluribusimpar.net/slavery-and-capitalism/) against the widely held but false belief that much of the US's wealth derives from slavery and that without slavery the Industrial Revolution wouldn't have happened, as well as another (http://necpluribusimpar.net/election-models-not-predict-trumps-victory/) in which I explain how election models work and why they didn't predict Trump's victory. I think members of Less Wrong will find my blog interesting, or at least that's what I hope. I welcome any criticisms, suggestions, etc. Sorry for the shameless self-promotion, but I just started the blog and I would like people to know about it :-)

Philippe

comment by Daniel_Burfoot · 2017-01-23T23:11:10.263Z · LW(p) · GW(p)

How do you weight the opinion of people whose arguments you do not accept? Say you have 10 friends who all believe with 99% confidence in proposition A. You ask them why they believe A, and the arguments they produce seem completely bogus or incoherent to you. But perhaps they have strong intuitive or aesthetic reasons to believe A, which they simply cannot articulate. Should you update in favor of A or not?

Replies from: TheAncientGeek, ChristianKl, Dagon
comment by TheAncientGeek · 2017-01-24T06:47:25.853Z · LW(p) · GW(p)

Trying to steelman arguments by talking to people you know in real life isn't a good method. You will find the best arguments in books and papers written by people who have acquired the rare skill of articulating intuitions.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2017-01-24T14:52:52.624Z · LW(p) · GW(p)

Yes, that may be true, but it doesn't address the question. A stronger version would be:

Say you have 10 sources who all claim high confidence in proposition A. The arguments produced seem completely bogus or incoherent to you. But perhaps they have strong intuitive or aesthetic reasons to claim A, which you cannot understand. Should you update in favor of A or not?

comment by ChristianKl · 2017-01-26T12:12:34.665Z · LW(p) · GW(p)

If I don't understand a topic well, I'm likely to simply copy the beliefs of friends who seem to have delved deep into an issue, even if they can't tell me exactly why they believe what they believe.

If I, on the other hand, already have a firm opinion, and especially if the reasons for my opinion can't easily be communicated, I don't update much.

comment by Dagon · 2017-01-24T05:40:03.659Z · LW(p) · GW(p)

What's your prior for A, and what was your prior for their confidence in A? Very roughly speaking, updates feel like surprise.
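
A minimal sketch of that Bayesian framing, with entirely hypothetical numbers (including an assumed discount for the correlation between friends' opinions, which Dagon doesn't specify):

    # Updating on N friends' shared belief in A, discounting for the fact
    # that friends' opinions are correlated. All numbers are hypothetical.
    def posterior(prior, likelihood_ratio, n_friends, effective_independence=0.2):
        # Count the group as only n * effective_independence independent reports.
        effective_n = n_friends * effective_independence
        prior_odds = prior / (1 - prior)
        post_odds = prior_odds * likelihood_ratio ** effective_n
        return post_odds / (1 + post_odds)

    # If each independent endorsement is 2x likelier when A is true, ten
    # correlated friends move a 10% prior to roughly 31%:
    print(round(posterior(0.10, 2.0, 10), 2))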

comment by dglukhov · 2017-01-23T15:19:35.747Z · LW(p) · GW(p)

I'm curious whether anybody here frequents Retraction Watch enough to address a concern I have.

I find its articles very effective at announcing retractions, and they frequently fall back on testimony from lead figures in the investigations, but rarely do you get to see the nuts and bolts of the investigations being discussed. For example, "How were the journals misleading?" or "What evidence was or was not analyzed, and how did the journal's analysis deviate from correct protocol?" are questions I often ask myself as I read, followed by an urge to see the cited papers. And then, upon investigating the articles and their retraction notices, I am given a reason that I can't myself arbitrate. Maybe data was claimed to have been manipulated, or analyzed according to an incorrect framework.

I find studies such as these alarming because I'm forced to trust the good intentions of a multi-billion-dollar corporation in finding the truth. Often I find myself going on Retraction Watch, trusting the possibly non-existent good intentions of the organization's leadership, as I read the headlines without time to read every detail of the article. When I skim, I am given certain impressions from the articles' pretentious writing, but none of the substance.

Perhaps I am warning against laziness. Perhaps I am concerned about the potential for corruption even in the crusade against misinformation that Retraction Watch seems to be fighting. Nonetheless, I'm curious whether people here have had similar or differing experiences with these articles...

Replies from: morganism, waveman, Lumifer, Douglas_Knight
comment by morganism · 2017-01-23T22:56:08.162Z · LW(p) · GW(p)

A blog list of bogus journals just went down too...

http://ottawacitizen.com/storyline/worlds-main-list-of-science-predators-vanishes-with-no-warning

"Beall, who became an assistant professor, drew up a list of the known and suspected bad apples, known simply as Beall’s List. Since 2012, this list has been world’s main source of information on journals that publish conspiracy theories and incompetent research, making them appear real."

comment by waveman · 2017-01-24T00:38:44.940Z · LW(p) · GW(p)

Crimes and trials are the same. Much goes on in closed rooms. You rightly feel that you are in the dark.

Often there is some material on PubPeer which can help you understand what happened.

comment by Lumifer · 2017-01-23T17:59:17.397Z · LW(p) · GW(p)

but rarely do you get to see the nuts and bolts of the investigations being discussed.

Gelman's blog goes into messy details often enough.

because I'm forced to trust

No, you're not. You are offered some results, you do NOT have to trust them.

Replies from: dglukhov
comment by dglukhov · 2017-01-23T18:11:32.472Z · LW(p) · GW(p)

Thanks for the tool.

No, you're not. You are offered some results, you do NOT have to trust them.

Indeed, but I suppose the tool provided solves my problem of judging when data was misanalysed, as I could just as easily do the analysis myself.

comment by Douglas_Knight · 2017-01-23T18:56:00.439Z · LW(p) · GW(p)

Why are you consuming research at all? If you are a researcher considering building on someone else's research, then you probably shouldn't trust them and should replicate everything you really need. But you are also privy to a lot of gossip not on LW and so have a good grasp on base rates. If you are considering using a drug, then it has been approved by the FDA, which performs a very thorough check on the drug company. The FDA has access to all the raw data and performs all the analysis from scratch. The FDA has a lot of problems, but letting studies of new drugs get away with fraud is not one of them. But if you want to take a drug off-label, then you are stuck with research.

You say that you don't trust the intentions of a multi-billion-dollar corporation. Have you thought about what those intentions are? They don't care about papers. Their main goal is to get the drug approved by the FDA. Their goal is for their early papers to be replicated by big, high-quality, highly monitored studies. Whereas the goal of multi-billion-dollar universities is mainly to produce papers, with too much focus on quantity and too little on replication.

Replies from: dglukhov, waveman
comment by dglukhov · 2017-01-23T22:18:59.076Z · LW(p) · GW(p)

Why are you consuming research at all?

I'm no researcher, and you're right: if I did want to improve upon the study, I would, given the materials. However, I am not that affluent; I do not have such opportunities unless the research is based on coatings and adhesives (materials I do have access to). The retraction I linked was merely presented on Retraction Watch as an example. An example of what? Let's continue to...

Have you thought about what those intentions are?

My understanding is that as a public company your primary concern is bringing in enough value to appease investors. A subset of that goal would be to get FDA approval.

I don't trust the company because of the incentive system; that is a gut reaction that stems from companies getting away with unscrupulous business practices in the past. Now that I think about it, though, Pfizer would have nothing to gain from retracting papers they knew they couldn't back up if someone asked them to. My guess is that either:

a) Min-Jean's managers were planning on gambling with her research, only to find out their moles in the FDA wouldn't cooperate, or

b) there was no conspiracy, and Min-Jean was incentivized to fabricate her work of her own volition.

I do see your point, since a) is a more complicated theory in this case. But I distrust the situation. I smell a power play, at worst. But I can’t support that, unfortunately, from the articles alone. I can support power plays happening in big companies, but I can’t show those situations are related here. Not yet, anyway…

EDIT: With all that said, you seem to err on the side of trusting the FDA to do their job and trusting Pfizer to comply. Would you be able to back up that trust in this case alone?

I think waveman made my point clearer in that I don't like the fact that I don't know the details of the investigation. Down to the painfully detailed process of verifying image duplication. I'm not so sure a quick phone call to Pfizer or Min-Jean would help me either...

comment by waveman · 2017-01-24T00:47:13.030Z · LW(p) · GW(p)

the FDA, which performs a very thorough check on the drug company

I think you have an overly sunny view of how effective the FDA is. (Leaving aside the question of cost-effectiveness, the opportunity cost of the delays and even outright prevention of useful drugs getting to market, and their effect on the cost of drugs.)

There are plenty of cases of the FDA being hoodwinked by drug companies. Regulatory capture is always a concern.

Statistical incompetence is very common. I still cannot believe that they let Vioxx on the market when the fourfold increase in heart attacks had a p-value of about 10-11%. This is the sort of stupidity that would (or should) get you an F in Statistics 101.

My experience over many decades is that over time the benefits of drugs often turn out to be way overstated and the dangers greatly underestimated.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2017-01-24T01:18:25.222Z · LW(p) · GW(p)

I stand by my narrow claims. Here is another narrow claim: you are wrong about what happened with Vioxx.

Replies from: waveman
comment by waveman · 2017-01-24T02:39:52.448Z · LW(p) · GW(p)

People can read about it for themselves.

https://en.wikipedia.org/wiki/Rofecoxib

comment by [deleted] · 2017-01-29T05:56:00.231Z · LW(p) · GW(p)

In a crack of time between doing my last data analysis for my PhD and writing my thesis, I couldn't stop myself from churning out a brief, sparsely sourced astrobiology blog post in which I argue that the limited lifespan of planetary geospheres and the decay of star formation rates mean that even though the vast majority of star-years are in the distant future around long-lived small stars, we are still a typical observer in that we occur less than 15 billion years into an apparently open-ended universe.

https://thegreatatuin.wordpress.com/2017/01/29/on-the-death-of-planets/

comment by gjm · 2017-01-25T04:33:11.176Z · LW(p) · GW(p)

I think either you're misunderstanding the paper, or I'm misunderstanding you. (Or of course both.) The point isn't that scientists should be looking at consensus instead of actually doing science; of course they shouldn't. It's that for someone who isn't an expert in the field and isn't in a position to do their own research, the opinions of those who are experts and have done their own research are very useful information. (In cases -- such as this one -- where there is near unanimity among the experts, I think the only reasonable options are "accept expert consensus, if only tentatively" and "become expert at a level comparable to theirs and form your own opinion". Of course no one is obliged to be reasonable.)

comment by Thomas · 2017-01-23T08:42:09.240Z · LW(p) · GW(p)

Again, a math task:

https://protokol2020.wordpress.com/2017/01/21/a-math-question/

Replies from: cousin_it
comment by cousin_it · 2017-01-27T15:02:47.355Z · LW(p) · GW(p)

Maybe check out this, then this if you're hardcore.

Replies from: Thomas
comment by Thomas · 2017-01-29T10:32:30.218Z · LW(p) · GW(p)

Thanks.

Hyper-knots are still knots, however. I am looking for something conceptually new in higher dimensions. Rotation, for example, is a new concept in 2D, unknown in Lineland; knots are an unknown concept in Flatland. I think every dimension has something unfathomable to offer to people used only to lower dimensions (their own number of dimensions and below).

I also think that, at least in principle, a 3D dweller might be able to vividly simulate a 4D (or higher) space. I doubt anyone can do so already, but it should be possible.

Here is a sneak preview of tomorrow's Open Thread link about my new problem:

https://protokol2020.wordpress.com/2017/01/29/a-topological-problem/

comment by [deleted] · 2017-01-24T00:03:25.168Z · LW(p) · GW(p)

I'm new to writing resumes and am currently writing one for an internship application. I don't know whether trying to optimize for uniqueness or quirkiness carries significant social costs, or whether there are many benefits. If anyone is good at this sort of thing (listing/bragging about skills), general tips would be very welcome.

Replies from: moridinamael, mindspillage, None, Dagon
comment by moridinamael · 2017-01-24T15:35:37.595Z · LW(p) · GW(p)

It probably depends on the type of job you're looking for.

In school I was taught to make my resume fit on a single page. As far as I can tell, this is nonsense. In my professional life I have never seen a resume that was less than two pages. Mine is several pages.

The point of a resume is (a) to give the company a broad first-pass sense of whether you're qualified and (b) to provide a scaffolding of prior knowledge about you around which to conduct an interview. Constructing your resume with these points in mind may simplify things.

I would personally avoid going out of my way to broadcast uniqueness or quirkiness. But I suppose it depends on what exactly you mean. If you hold the world record for pogo stick jumps, that would be something interesting to put on there, partly because that's the kind of thing that connotes ambition and dedication. If you are an ardent fan of some obscure fantasy series, that's not something that's going to conceivably help you get a job.

Replies from: None
comment by [deleted] · 2017-01-24T17:07:07.818Z · LW(p) · GW(p)

Thanks for the information. I saw the one-page-sheet recommendation in a lot of places, but this didn't match up with actual CVs I've seen on people's pages. Clearing that up is helpful.

The general point to keep in mind is also helpful.

Replies from: satt
comment by satt · 2017-01-26T23:45:56.946Z · LW(p) · GW(p)

I saw the one-page-sheet recommendation in a lot of places, but this didn't match up with actual CVs I've seen on people's pages.

Expanding on this, acceptable & typical lengths for CVs seem to vary between sectors. My feeling is that 1-page CVs are a bit uncommon in business (though some people do make it work!), with CVs more often 2-4 pages long. But academic CVs are often a lot longer, and can be pretty much arbitrarily long. (I suspect highly eminent academics' CVs tend to be shorter. Presumably they have less to prove.)

comment by mindspillage · 2017-01-26T02:54:06.825Z · LW(p) · GW(p)

In general, don't optimize for uniqueness or quirkiness; you have limited space and your potential workplace is probably using the resume to screen for "does this person meet enough of the basic desired qualities that we should find out more about them with an interview". You can add a few small things if they really set you apart, but don't go out of your way to do it. A better opportunity to do this is in your cover letter.

The best reference for workplace norms and job-hunting advice that I know is Ask A Manager; you may want to browse her archives.

comment by [deleted] · 2017-01-25T18:40:22.443Z · LW(p) · GW(p)

I would look strongly into the company culture before making a decision. I would default towards being more professional, but there are certain companies (e.g. Pixar from what I heard at a talk they gave at my school) who value individuality more than others. I would generally say that cover letters are a better place to emphasize personality rather than a resume. Resumes should mostly be to demonstrate qualification.

comment by Dagon · 2017-01-26T03:52:17.089Z · LW(p) · GW(p)

All of the responses so far seem reasonable to me. A bit of theory as to why: How much quirkiness to show a potential employer is a great example of countersignaling.

If you're trying to distinguish yourself among a group that the observer ALREADY classifies as high-achieving, then showing that you can afford not to be serious can indicate you're at the top of that group. If you haven't established that you belong in that category, you should focus first on doing so, or your quirkiness will be taken as further evidence that you are not that good in the first place.

Oh, you might also show some quirkiness as a reverse filter and an honest sharing of a matching trait - if you want to consider only employers who'll tolerate (or appreciate) your quirks, this is one way to accomplish that. Usually, I'd save that for later rounds of discussion.

Replies from: None
comment by [deleted] · 2017-01-26T04:34:04.441Z · LW(p) · GW(p)

Thanks for expanding on this. I think it makes more sense (given where I'm at) to be more conservative for right now.

comment by dglukhov · 2017-01-23T18:06:54.876Z · LW(p) · GW(p)

On a lighter note (Note: this is open access)

Replies from: None, Lumifer
comment by [deleted] · 2017-01-23T18:20:48.986Z · LW(p) · GW(p)

Wow, this is very neat. Thanks for sharing! (I'll be giving a talk to students about climate change and psych next month, and this looks to be very helpful.)

Do you have any other papers you'd recommend in this vein?

Replies from: None
comment by [deleted] · 2017-01-23T21:29:52.015Z · LW(p) · GW(p)

This handbook is about climate change and how debunking can actually backfire.

John Cook is an instructor on edX's Making Sense of Climate Science Denial course.

comment by Lumifer · 2017-01-23T19:58:11.204Z · LW(p) · GW(p)

LOL. "How to prevent crimethink". I recommend introducing global-warming Newspeak and phasing out Oldspeak -- it should help with that "inoculation" thing.

Replies from: gjm
comment by gjm · 2017-01-23T21:23:02.754Z · LW(p) · GW(p)

How would you distinguish between "giving people the tools to detect misinformation" and "preventing crimethink", and why do you regard this one as the latter rather than the former (which is what it claims to be)?

(Or do you think the first of those is always bogus and best thought of as the latter in disguise?)

EDITED to add: The description of "inoculation" near the start of the paper gives the impression that the procedure amounts to "make people less likely to be convinced by misinformation X by presenting a strawman version of X and refuting it", but when they go on to describe the actual "inoculation" they tried it doesn't sound strawman-y at all to me.

Replies from: Lumifer, jimmy
comment by Lumifer · 2017-01-24T02:56:19.248Z · LW(p) · GW(p)

How would you distinguish between "giving people the tools to detect misinformation" and "preventing crimethink"

By looking at who makes the decision. If you let the people decide, you gave them tools. If the right answer is preordained, you're just preventing crimethink.

Replies from: gjm
comment by gjm · 2017-01-24T03:07:17.454Z · LW(p) · GW(p)

I'm not sure how that distinction actually cashes out in practice. I mean, you never have the option of directly controlling what people think, so you can always argue that you've "let the people decide". But you (generic "you", not Lumifer in particular) often have a definite opinion about what's right, so someone can always argue that "the right answer is preordained".

Suppose I write some stuff about mathematics, explaining (say) how the surreal numbers work. Am I "preventing crimethink"? I haven't told my audience "of course this stuff is as likely to be wrong as right", I haven't made any attempt to find people who think there's a contradiction or something in the theory; I've just told them what I think is right.

What if I do the same with evolution?

What if I do the same with anthropogenic climate change?

Replies from: Lumifer
comment by Lumifer · 2017-01-24T03:23:28.694Z · LW(p) · GW(p)

The difference is in whether you think that disagreeing with your views is acceptable or not.

You tell people how you think the world works and why you believe this, they say "I don't think that's right", you shrug and let them be -- it's one thing.

You tell people how you think the world works and why you believe this, they say "I don't think that's right", you say "This will not stand, you need to be re-educated and made to disbelieve the false prophets" -- that's quite a different thing.

Replies from: gjm
comment by gjm · 2017-01-24T03:38:17.511Z · LW(p) · GW(p)

Ah, OK. Then the paper we're discussing is not about "preventing crimethink": it is not saying, nor advocating saying, anything like what you describe in that last paragraph.

However, I suspect you will still want to characterize it as "preventing crimethink". Perhaps consider whether you can give a characterization of that term with a bit less spin on it?

(I think the authors are pretty sure they are right about global warming. They believe there is a lot of misinformation around, much of it deliberate. They suggest ways to pre-empt such misinformation and thereby make people less likely to believe it and more likely to believe what the authors consider to be the truth. But they do not say it is "unacceptable" to take a different view; they just think it's incorrect. And they aren't concerned with what you say to someone who has already made up their mind the other way; they are looking at ways to make that less likely to happen as a result of misinformation.)

Replies from: Lumifer
comment by Lumifer · 2017-01-24T03:51:05.329Z · LW(p) · GW(p)

Perhaps consider whether you can give a characterization of that term with a bit less spin on it?

Why? :-P

They believe there is a lot of misinformation around, much of it deliberate.

The problem is, I bet they believe all the misinformation is coming from the sceptics' side.

Replies from: gjm
comment by gjm · 2017-01-24T11:30:18.150Z · LW(p) · GW(p)

Maybe they do. Maybe they're wrong. But holding a wrong opinion is not the same thing as attempting Orwellian thought control.

(My guess is that what they actually think is that almost all the misinformation, and most of the worst misinformation, is coming from the skeptics' side. It appears to me that they are in fact correct.)

Replies from: Lumifer
comment by Lumifer · 2017-01-24T16:13:01.084Z · LW(p) · GW(p)

It appears to me that they are in fact correct

It appears to me that they are not, but I'm disinclined to do another dance round the same mulberry bush...

comment by jimmy · 2017-01-23T23:13:08.813Z · LW(p) · GW(p)

Are they looking at your thought processes or your conclusions?

If they have nothing to say when you choose the "right" conclusion, but have a lot to say when you choose the "wrong" one (especially if they don't know how you arrived at it), then it's crimethink.

If you can have the whole conversation with them without being able to tell which conclusion they personally believe, then they're legit.

Without reading further than the title, my money is on them being on the "global warming is real and if you don't think so you're an idiot" side. (Am I wrong?)

Replies from: gjm
comment by gjm · 2017-01-23T23:22:34.747Z · LW(p) · GW(p)

Are they looking at your thought processes or your conclusions?

Neither, I think. I mean, your question seems to rest on an incorrect presupposition about what the paper's about. They're not trying to judge people for their opinions or how they reached them. They're saying "here's a topic with a lot of misinformation flying around; let's see what we can do to make people less likely to be persuaded by the misinformation".

my money is on them being on the [...] side

Well, the authors clearly hold that global warming is real and that the evidence for it is very strong. Does that invalidate the paper for you?

Replies from: ChristianKl, jimmy
comment by ChristianKl · 2017-01-26T10:35:51.295Z · LW(p) · GW(p)

Even if you grant that global warming is real, that doesn't mean that there isn't also a lot of misinformation on the global warming side.

If I quiz a random sample of liberals on the truth as found by the IPCC, there are many issues where the liberals would likely say that specific scenarios are more likely than the IPCC assumes.

Replies from: gjm
comment by gjm · 2017-01-26T11:38:34.042Z · LW(p) · GW(p)

there are many issues where the liberals would likely say that specific scenarios are more likely than the IPCC assumes.

Could be. As I've said elsewhere in the thread, I think the relevant question is not "is there misinformation on both sides?" (the answer to that is likely to be yes on almost any question) but "how do the quantity and severity of misinformation differ between sides?". My impression is that it's not at all symmetrical, but of course I might think that even if it were (it's much easier to spot misinformation when you disagree strongly with it). Do you know of any nonpartisan studies of this?

Replies from: ChristianKl
comment by ChristianKl · 2017-01-26T12:09:59.605Z · LW(p) · GW(p)

There was a letter by Nobel laureates that suggested the probability of global warming is in the same class as evolution.

Given the probability I have in mind for evolution, that's off from the IPCC number by more orders of magnitude than the positions of global warming skeptics are.

Do you know of any nonpartisan studies of this?

Who would have to fund a study like this to be nonpartisan?

My impression is that it's not at all symmetrical, but of course I might think that even if it were

How do you make that judgment? Did you read the IPCC report to have the ground truth for various claims? The great thing about the report is that it has probability categories for its various claims.

In my reading, most of the claims that the IPCC report makes about global warming are a lot less than 99% certain. Media reports generally have a hard time reasoning about claims with probability 80% or 90%.

Replies from: gjm
comment by gjm · 2017-01-26T14:25:52.059Z · LW(p) · GW(p)

a letter [...] that suggested the probability of global warming is in the same class as evolution [...] more orders of magnitude from the IPCC number than the positions of global warming skeptics.

I can only guess what letter you have in mind; perhaps this one? (Some of its signatories are Nobel laureates; most aren't.) I'll assume that's the one; let me know if I'm wrong.

It doesn't mention probability at all. The way in which it suggests global warming is in the same class as evolution is this:

But when some conclusions have been thoroughly and deeply tested, questioned, and examined, they gain the status of "well-established theories" and are often spoken of as "facts".

For instance, there is compelling scientific evidence that [here they list the age of the earth, the Big Bang, and evolution]. Even as they are overwhelmingly accepted by the scientific community, fame still awaits anyone who could show these theories to be wrong. Climate change now falls into this category: there is compelling, comprehensive, and consistent objective evidence that humans are changing the climate in ways that threaten our societies and the ecosystems on which we depend.

They don't claim that the probabilities are the same. Only that in all these cases the probability is high enough to justify saying that this is a thing that's been scientifically established.

Who would have to fund a study like this to be nonpartisan?

I don't know. Probably best not the fossil fuel industry. Probably best not any environmentalist organization. I think claims of bias on the part of government and academia are severely exaggerated, but maybe best to avoid those if only for the sake of appearances. A more pressing question, actually, is who would have to do it to be nonpartisan. You want people with demonstrated expertise, but the way you demonstrate expertise is by publishing things, and as soon as anyone publishes anything related to climate change they will be labelled a partisan by people who disagree with what they wrote.

I don't have a good answer to this.

How do you make that judgement? Did you read the IPCC report [...] most of the claims that the IPCC report makes about global warming are a lot less than 99% certain.

It's not a judgement; my use of the rather noncommittal word "impression" was deliberate. I make it by looking at what I see said about climate change, comparing it informally with what I think I know about climate change, and considering the consequences. It's not the result of any sort of statistical study, hence my deliberately noncommittal language. I have read chunks of the IPCC report but not the whole thing. I agree that it's good that they talk about probabilities. The terms they attach actual numerical probabilities to are used for future events; they usually don't give any numerical assessment of probability (nor any verbal assessment signifying a numerical assessment) for statements about the present and past, so I don't see any way to tell whether they regard those as "a lot less than 99% certain". They say "Where appropriate, findings are also formulated as statements of fact without using uncertainty qualifiers", which I take to mean that when they do that they mean there's no uncertainty to speak of.

Here are a few extracts from the AR5 "synthesis report".

Warming of the climate system is unequivocal [...] it is virtually certain [GJM: this is their term for >99%] that globally the troposphere has warmed and the lower stratosphere has cooled since the mid-20th century [...] It is virtually certain that the upper ocean (0-700m) warmed from 1971 to 2010 [...] Human influence [...] is extremely likely [GJM: this is their term for 95-100%] to have been the dominant cause of the observed warming since the mid-20th century.

The key "headline" claims that distinguish the "global warming" from "not global warming" positions are "virtually certain"; attribution to human activities is "extremely likely" (and I have the strong impression that they are being deliberately overcautious about this one; note, e.g., that they say the best estimates for how much warming known human activity should have caused and the best estimates for how much warming there has actually been are pretty much equal).

Replies from: ChristianKl
comment by ChristianKl · 2017-01-26T19:51:55.097Z · LW(p) · GW(p)

I would judge the chances that evolution is incorrect at lower than 10^-6.

When the IPCC uses 10^-2 as the category for global warming, that's off by many orders of magnitude.

A person who believed that the chances of human-caused global warming are 10% would be nearer to the truth than a person who thinks it's in the same category as evolution.
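
An illustrative check of the orders-of-magnitude comparison, using the figures stated above (the 10% skeptic is hypothetical; this is not ChristianKl's own calculation):

    # Distances between probabilities on a log10 scale:
    import math

    def orders_apart(p, q):
        return abs(math.log10(p) - math.log10(q))

    ipcc = 1e-2       # the stated IPCC-category figure
    evolution = 1e-6  # the stated bound on evolution being incorrect
    skeptic = 0.10    # a hypothetical skeptic's credence

    print(orders_apart(evolution, ipcc))  # ~4.0 orders of magnitude
    print(orders_apart(skeptic, ipcc))    # ~1.0 order of magnitude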

and I have the strong impression that they are being deliberately overcautious

Basically, given the information to which you have been exposed, you have a strong impression that the IPCC is making a mistake in the direction that would align with your politics.

The outside view suggests that most of the time experts are a bit overconfident. The replication crisis suggests that scientists are often overconfident. With climate science we are speaking about a domain that can't even run real controlled experiments to verify important beliefs. That makes me doubt the idea that the IPCC is underconfident.

If those IPCC scientists are that good at not being overconfident, why don't we tell the psychologists to listen to them to deal with their replication crisis?

Replies from: gjm
comment by gjm · 2017-01-26T20:10:59.969Z · LW(p) · GW(p)

I would judge the chances [...]

There are some contexts in which the difference between 99.999% and 99.9% is about the same as the difference between 10% and 90%. However, I do not think this is one of them. I repeat: the letter you are talking about did not say anything about probabilities; it said "some scientific theories have a shedload of evidence and are well enough established that we can reasonably call them facts; here are some familiar examples; well, global warming is also in that category".
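
The opening remark can be made concrete in log-odds (an illustrative aside, not gjm's own calculation):

    # In log-odds, the step from 99.9% to 99.999% is comparable in size
    # to the step from 10% to 90%:
    import math

    def log_odds(p):
        return math.log(p / (1 - p))

    print(round(log_odds(0.99999) - log_odds(0.999), 2))  # ~4.61
    print(round(log_odds(0.90) - log_odds(0.10), 2))      # ~4.39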

I think it's probably 100x more certain that the earth is more than a few thousand years old than that our universe began with a big bang ~14Gya. Does that mean the people who wrote that letter were wrong to group those together? Nope; all that matters is that both are in the "firmly enough established" category. So, they suggest (and I agree), is global warming.

(Not every detail of global warming. Not any specific claim about what the global mean surface temperature will be in 50 years' time. But the broad outline.)

The outside view suggests that most of the time experts are a bit overconfident.

The outside view suggests to me that much of the time experts are horribly overconfident, and some of the time they are distinctly underconfident (at least in what they say). The picture doesn't look to me much like one of consistent slight overconfidence at all.

If those IPCC scientists are that good at not being overconfident, why don't we tell the psychologists to listen to them to deal with their replication crisis?

Hey, psychologists! Go read the IPCC reports, and follow their example!

There you go. I did. It won't actually do any good, because the problem isn't that no one has ever told psychologists to be cautious and avoid overconfidence. And that's the answer to your question "why don't we ...", and you will notice that it has nothing to do with whether the people who wrote the IPCC reports are overconfident.

Replies from: Viliam, ChristianKl
comment by Viliam · 2017-01-27T09:22:06.352Z · LW(p) · GW(p)

the problem isn't that no one has ever told psychologists to be cautious and avoid overconfidence.

The problem is probably that psychologists afterwards always nod their heads and say: "uhm, uhm, that's interesting... please tell me more about your feelings of anxiety."

:D

comment by ChristianKl · 2017-01-26T21:17:02.547Z · LW(p) · GW(p)

There are some contexts in which the difference between 99.999% and 99.9% is about the same as the difference between 10% and 90%.

It's not 99.9% in the IPCC report.

Events that happen with 0.005 probability are worth planning for when they have high impacts. We care about asteroid defense when that probability is much lower.

Humanity has a good chance of getting destroyed in this century if decision makers treat 0.001 the same way as 0.0000000001.

The outside view suggests to me that much of the time experts are horribly overconfident, and some of the time they are distinctly underconfident (at least in what they say)

In what examples are experts underconfident when they give 0.9 or 0.95 probabilities of an event happening?

Replies from: gjm
comment by gjm · 2017-01-26T23:02:40.367Z · LW(p) · GW(p)

It's not 99.9% in the IPCC report.

I wasn't trying to suggest it was; my apologies for (evidently) being insufficiently clear.

Events that happen with 0.005 probability are worth planning for when they have high impacts.

Yup, strongly agreed. But the low-probability events we're talking about here are things like "it turns out global warming wasn't a big deal after all". It would be sad to have spent a lot of money trying to reduce carbon dioxide emissions in that case, but it wouldn't be much like (e.g.) being hit by an asteroid.

In what examples are experts underconfident when they give 0.9 or 0.95 probabilities of an event happening

I don't have examples to hand, I'm afraid (other than the IPCC example we're discussing here, though actually all we know there is that their probability estimate is somewhere between 0.95 and 1, and is probably below 0.99 since they didn't choose to say "virtually certain"). (Only "probably" because when they list what the terms mean they say 95-100% and not 95-99% for "extremely likely", and the best explanation I can see for that is that they are reserving the right to say "extremely likely" rather than "virtually certain" sometimes even though they think the actual probability is over 99%. This is one reason why I suspect them of understating their certainty on purpose: they seem to have gone out of their way to provide themselves with a way to do that.)

comment by jimmy · 2017-01-24T20:07:29.549Z · LW(p) · GW(p)

I'm not addressing the paper specifically; I'm answering your question more generally. I still think it applies here, though. When they identify "misinformation", are they first looking for things that support the wrong conclusion and then explaining why you shouldn't believe this wrong thing, or are they first looking at reasoning processes and explaining how to do them better (without tying it to the conclusion they prefer)?

For example, do they address any misinformation that would lead people to being misled into thinking global warming is more real/severe than it is? If they don't, and they're claiming to be about "misinformation" and not pushing an agenda, then that's quite suspicious. Maybe they do, I dunno. But that's where I'd look to tell the difference between what they're claiming and what Lumifer is accusing them of.

Well, the authors clearly hold that global warming is real and that the evidence for it is very strong. Does that invalidate the paper for you?

The fact that they hold that view does not. It's possible to agree with someone's conclusions and still think they're being dishonest about how they're arguing for them, you know. (And also to disagree with someone's conclusions but think that they're at least honest about how they get there.)

The fact that it is clear from reading this paper, which is supposedly not about what they believe, sorta does, depending on how clear they are about it and how they are clear about it. It's possible for propaganda to contain good arguments, but you do have to be pretty careful with it because you're getting filtered evidence.

(Notice how it applies here. I'm talking about processes, not conclusions, and haven't given any indication of whether or not I buy into global warming - because it doesn't matter, and if I did it'd just be propaganda slipping out.)

Replies from: gjm
comment by gjm · 2017-01-25T00:57:31.456Z · LW(p) · GW(p)

When they identify "misinformation", are they first looking for things that support the wrong conclusion [...] or are they first looking at reasoning processes

What makes misinformation misinformation is that it's factually wrong, not that the reasoning processes underlying it are bad. (Not to deny the badness of bad reasoning, but it's a different failure mode.)

do they address any misinformation that would lead people to being misled into thinking global warming is more real/severe than it is?

They pick one single example of misinformation, which is the claim that there is no strong consensus among climate scientists about anthropogenic climate change.

If they don't and they're claiming to be about "misinformation" and that they're not pushing an agenda, then that's quite suspicious.

It would be quite suspicious if "global warming is real" and "global warming is not real" were two equally credible positions. As it happens, they aren't. Starting from the premise that global warming is real is no more unreasonable than starting from the premise that evolution is real, and not much more unreasonable than starting from the premise that the earth is not flat.

The fact that it is clear from reading this paper which is supposedly not about what they believe sorta does

I disagree. If you're going to do an experiment about how to handle disinformation, you need an example of disinformation. You can't say "X is an instance of disinformation" without making it clear that you believe not-X. Now, I suppose they could have identified denying that there's a strong consensus on global warming as disinformation while making a show of not saying whether they agree with that consensus or not, but personally I'd regard that more as a futile attempt at hiding their opinions than as creditable neutrality.

I [...] haven't given any indication of whether or not I buy into global warming

I think you have, actually. If there were a paper about how to help people not be deceived by dishonest creationist propaganda, and someone came along and said "do they address any misinformation that would lead people into being misled into thinking 6-day creation is less true than it is?" and the like, it would be a pretty good bet that that person was a creationist.

Now, of course I could be wrong. If so, then I fear you have been taken in by the rhetoric of the "skeptics"[1] who are very keen to portray the issue as one where it's reasonable to take either side, where taking for granted that global warming is real is proof of dishonesty or incompetence, etc. That's not the actual situation. At this point, denial of global warming is about as credible as creationism; it is not a thing scientific integrity means people should treat neutrally.

[1] There don't seem to be good concise neutral terms for the sides of that debate.

Replies from: ChristianKl, jimmy
comment by ChristianKl · 2017-01-26T10:35:47.263Z · LW(p) · GW(p)

It would be quite suspicious if "global warming is real" and "global warming is not real" were two equally credible positions.

Both are quite simplistic positions. If you look at the IPCC report there are many different claims about global warming effects and those have different probabilities attached to them.

It's possible to be wrong on some of those probabilities in both directions, but thinking about probabilities is a different mode than "On what side do you happen to be?"

Replies from: gjm, gjm
comment by gjm · 2017-01-26T11:44:21.417Z · LW(p) · GW(p)

Both are quite simplistic positions

Incidentally, the first comment in this thread to talk in terms of discrete "sides" was not mine above but one of jimmy's well upthread, and I think most of the ensuing discussion in those terms is a descendant of that. I wonder why you chose my comment in particular to object to.

comment by gjm · 2017-01-26T11:33:35.502Z · LW(p) · GW(p)

Both are quite simplistic positions

I don't know about you, but I don't have the impression that my comments in this thread are too short.

Yes, the climate is complicated. Yes, there is a lot more to say than "global warming is happening" or "global warming is not happening". However, it is often convenient to group positions into two main categories: those that say that the climate is warming substantially and human activity is responsible for a lot of that warming, and those that say otherwise.

comment by jimmy · 2017-01-25T02:50:41.441Z · LW(p) · GW(p)

What makes misinformation misinformation is that it's factually wrong, not that the reasoning processes underlying it are bad.

Yes, and identifying it is a reasoning process, which they are claiming to teach.

It would be quite suspicious if "global warming is real" and "global warming is not real" were two equally credible positions. As it happens, they aren't.

Duh.

You can't say "X is an instance of disinformation" without making it clear that you believe not-X.

Sure, but there's more than one X at play. You can believe, for example, that "the overwhelming scientific consensus is that global warming is real" is false and that would imply that you believe not-"the overwhelming scientific consensus is that global warming is real". You're still completely free to believe that global warming is real.

I think you have, actually.

"What about the misinformation on the atheist side!" is evidence that someone is a creationist to the extent that they cannot separate their beliefs from their principles of reason (which usually people cannot do).

If someone is actually capable of the kind of honesty where they hold their own side to the same standards as the outgroup side, it is no longer evidence of which side they're on. You're assuming I don't hold my own side to the same standards. That's fine, but you're wrong. I'd have the same complaints if it were a campaign to "teach them creationist folk how not to be duped by misinformation", and I am absolutely not a creationist by any means.

I can easily give an example, if you'd like.

If so, then I fear you have been taken in by the rhetoric of the "skeptics"[1] who are very keen to portray the issue as one where it's reasonable to take either side,

Nothing I am saying is predicated on there being more than one "reasonable" side.

where taking for granted that global warming is real is proof of dishonesty or incompetence, etc

If you take for granted a true thing, it is not proof of dishonesty or incompetence.

However, if you take it for granted and say that there's only one reasonable side, then it is proof that you're looking down on the other side. That's fine too, if you're ready to own that.

It just becomes dishonest when you try to pretend that you're not. It becomes dishonest when you say "I'm just helping you spot misinformation, that's all" when what you're really trying to do is make sure that they believe Right thoughts like you do, so they don't fuck up your society by being stupid and wrong.

There's a difference between helping someone reason better and helping someone come to the beliefs that you believe in, even when you are correct. Saying that you're doing the former while doing the latter is dishonest, and it doesn't help if most people fail to make the distinction (or if you somehow can't fathom that I might be making the distinction myself and criticizing them for dishonesty rather than for disagreeing with me).

Replies from: gjm
comment by gjm · 2017-01-25T03:37:23.682Z · LW(p) · GW(p)

identifying it is a reasoning process, which they are claiming to teach.

I don't think they are. Teaching people to reason is really hard. They describe what they're trying to do as "inoculation", and what they're claiming to have is not a way of teaching general-purpose reasoning skills that would enable people to identify misinformation of all kinds but a way of conveying factual information that makes people less likely to be deceived by particular instances of misinformation.

"What about the misinformation on the atheist side!" is evidence that someone is a creationist to the extent that they cannot separate their beliefs from their principles of reason

Not only that. Suppose the following is the case (as in fact I think it is): There is lots of creationist misinformation around and it misleads lots of people; there is much less anti-creationist misinformation around and it misleads hardly anyone. In that case, it is perfectly reasonable for non-creationists to try to address the problem of creationist misinformation without also addressing the (non-)problem of anti-creationist misinformation.

I think the situation with global warming is comparable.

You're assuming I don't hold my own side to the same standards.

I'm not. Really, truly, I'm not. I'm saying that from where I'm sitting it seems like global-warming-skeptic misinformation is a big problem, and global-warming-believer misinformation is a much much smaller problem, and the most likely reasons for someone to say that discussion of misinformation in this area should be balanced in the sense of trying to address both kinds are (1) that the person is a global-warming skeptic (in which case it is unsurprising that their view of the misinformation situation differs from mine) and (2) that the person is a global-warming believer who has been persuaded by the global-warming skeptics that the question is much more open than (I think) it actually is.

then it is proof that you're looking down on the other side.

Sure. (Though I'm not sure "looking down on" is quite the right phrase.) So far as I can tell, the authors of the paper we're talking about don't make any claim not to be "looking down on" global-warming skeptics. The complaints against them that I thought we were discussing here weren't about them "looking down on" global-warming skeptics. Lumifer described them as trying to "prevent crimethink", and that characterization of them as trying to practice Orwellian thought control is what I was arguing against.

It becomes dishonest when you say "I'm just helping you spot misinformation, that's all" when what you're really trying to do is make sure that they believe Right thoughts like you do

I think this is a grossly unreasonable description of the situation, and the use of the term "crimethink" (Lumifer's, originally, but you repeated it) is even more grossly unreasonable. The unreasonableness is mostly connotational rather than denotational; that is, there are doubtless formally-kinda-equivalent things you could say that I would not object to.

So, taking it bit by bit:

when you say "I'm just helping you spot misinformation, that's all"

They don't say that. They say: here is a way to help people not be taken in by disinformation on one particular topic. (Their approach could surely be adapted to other particular topics. It could doubtless also be used to help people not be informed by accurate information on a particular topic, though to do that you'd need to lie.) They do not claim, nor has anyone here claimed so far as I know, that they are offering a general-purpose way of distinguishing misinformation from accurate information. That would be a neat thing, but a different and more difficult thing.

make sure that they believe Right thoughts

With one bit of spin removed, this becomes "make sure they are correct rather than incorrect". With one bit of outright misrepresentation removed, it then becomes "make it more likely that they are correct rather than incorrect". This seems to me a rather innocuous aim. If I discover that (say) many people think the sun and the moon are the same size, and I write a blog post or something explaining that they're not even though they subtend about the same angle from earth, I am trying to "make sure that they believe Right thoughts". But you wouldn't dream of describing it that way. So what makes that an appropriate description in this case?

(Incidentally, it may be worth clarifying that the specific question about which the authors of the paper want people to "believe Right thoughts" is not global warming but whether there is a clear consensus on global warming among climate scientists.)

crimethink

I'm just going to revisit this because it really is obnoxious. The point of the term "crimethink" in 1984 is that certain kinds of thoughts there were illegal and people found thinking them were liable to be tortured into not thinking them any more. No one is suggesting that it should be illegal to disbelieve in global warming. No one is suggesting that people who disbelieve in global warming should be arrested, or tortured, or have their opinions forcibly changed in any other fashion. The analogy with "crimethink" just isn't there. Unless you are comfortable saying that "X regards Y as crimethink" just means "X thinks Y is incorrect", in which case I'd love to hear you justify the terminology.

Replies from: Lumifer, jimmy
comment by Lumifer · 2017-01-25T05:00:11.339Z · LW(p) · GW(p)

No one is suggesting that it should be illegal to disbelieve in global warming.

This is factually incorrect (and that's even without touching Twitter and such).

The analogy with "crimethink" just isn't there.

Oh, all right. You don't like the word. How did you describe their activity? "...not a way of teaching general-purpose reasoning skills that would enable people to identify misinformation of all kinds but a way of conveying factual information that makes people less likely to be deceived by particular instances of misinformation."

Here: brainwashing. Do you like this word better?

Replies from: gjm, gjm
comment by gjm · 2017-01-25T12:42:17.687Z · LW(p) · GW(p)

You don't like the word

Oh, one other thing. I've got no problems with the word. What I don't like is its abuse to describe situations in which the totality of the resemblance to the fiction from which the term derives is this: Some people think a particular thing is true and well supported by evidence, and therefore think it would be better for others to believe it too.

If you think that is what makes the stuff about "crimethink" in 1984 bad, then maybe you need to read it again.

Replies from: Lumifer
comment by Lumifer · 2017-01-25T16:05:33.634Z · LW(p) · GW(p)

As usual, I like my points very very sharp, oversaturated to garish colours, and waved around with wild abandon :-)

You don't.

Replies from: gjm
comment by gjm · 2017-01-25T17:41:19.418Z · LW(p) · GW(p)

Or, to put it differently, I prefer not to lie.

Replies from: Lumifer
comment by Lumifer · 2017-01-25T18:09:12.743Z · LW(p) · GW(p)

Would you like to point out to me where I lied, with quotes and all?

Replies from: gjm
comment by gjm · 2017-01-25T19:34:41.845Z · LW(p) · GW(p)

Sure. Just a quick example, because I have other things I need to be doing.

No one is suggesting that it should be illegal to disbelieve in global warming.

That is factually incorrect [with links to two news articles]

I take it that saying "That is factually incorrect" with those links amounts to a claim that the links show that the claim in question is factually incorrect. Neither of your links has anything to do with anyone saying it should be illegal to disbelieve in global warming.

(There were other untruths, half-truths, and other varieties of misdirection in what you said on this, but the above is I think the clearest example.)

[EDITED because I messed up the formatting of the quote blocks. Sorry.]

Replies from: Lumifer
comment by Lumifer · 2017-01-26T02:23:10.940Z · LW(p) · GW(p)

An unfortunate example because I believe I'm still right and you're still wrong.

We've mentioned what, a California law proposal and a potential FBI investigation? Wait, but there is more! A letter from 20 scientists explicitly asks for a RICO (a US law aimed at criminal organizations such as drug cartels) investigation of deniers. A coalition of Attorneys General of several US states set up an effort to investigate and prosecute those who "mislead" the public about climate change.

There's Bill Nye:

“Was it appropriate to jail the guys from Enron?” Mr. Nye asked in a video interview with Climate Depot’s Marc Morano. “We’ll see what happens. Was it appropriate to jail people from the cigarette industry who insisted that this addictive product was not addictive, and so on?”

“In these cases, for me, as a taxpayer and voter, the introduction of this extreme doubt about climate change is affecting my quality of life as a public citizen,” Mr. Nye said. “So I can see where people are very concerned about this, and they’re pursuing criminal investigations as well as engaging in discussions like this.”

Of course there is James Hansen, e.g. this (note the title):

When you are in that kind of position, as the CEO of one of the primary players who have been putting out misinformation even via organisations that affect what gets into school textbooks, then I think that's a crime.

or take David Suzuki:

“What I would challenge you to do is to put a lot of effort into trying to see whether there’s a legal way of throwing our so-called leaders into jail because what they’re doing is a criminal act,” said Dr. Suzuki, a former board member of the Canadian Civil Liberties Association.

“It’s an intergenerational crime in the face of all the knowledge and science from over 20 years.”

The statement elicited rounds of applause.

Here is Lawrence Torcello, Assistant Professor of Philosophy, no less:

What are we to make of those behind the well documented corporate funding of global warming denial? Those who purposefully strive to make sure “inexact, incomplete and contradictory information” is given to the public? I believe we understand them correctly when we know them to be not only corrupt and deceitful, but criminally negligent in their willful disregard for human life. It is time for modern societies to interpret and update their legal systems accordingly.

Hell, there is a paper in a legal journal: Deceitful Tongues: Is Climate Change Denial a Crime? (by the way, the paper says "yes").

Sorry, you are wrong.

Replies from: gjm
comment by gjm · 2017-01-26T13:47:46.113Z · LW(p) · GW(p)

Nice Gish gallop, but not one of those links contradicts my statement that

No one is suggesting that it should be illegal to disbelieve in global warming.

which is what you called "factually incorrect". Most of them (all but one, I think) are irrelevant for the exact same reason I already described: what they describe is people suggesting that some of the things the fossil fuel industry has done to promote doubt about global warming may be illegal under laws that already exist and have nothing to do with global warming, because those things amount to false advertising or fraud or whatever.

In fact, these prosecutions, should any occur, would I think have to be predicated on the key people involved not truly disbelieving in global warming. The analogy that usually gets drawn is with the tobacco industry's campaign against the idea that smoking causes cancer; the executives knew pretty well that smoking probably did cause cancer, and part of the case against them was demonstrating that.

Are you able to see the difference between "it should be illegal to disbelieve in global warming" and "some of the people denying global warming are doing it dishonestly to benefit their business interests, in which case they should be subject to the same sanctions as people who lie about the fuel efficiency of the cars they make or the health effects of the cigarettes they make"?


I'm not sure that responding individually to the steps in a Gish gallop is a good idea, but I'll do it anyway -- briefly. In each case I'll quote from the relevant source to indicate how it's proposing the second of those rather than the first. Italics are mine.

Letter from 20 scientists: "corporations and other organizations that have knowingly deceived the American people about the risks of climate change [...] The methods of these organizations are quite similar to those used earlier by the tobacco industry. A RICO investigation [...] played an important role in stopping the tobacco industry from continuing to deceive the American people about the dangers of smoking."

Coalition of attorneys general: "investigations into whether fossil fuel companies have misled investors about how climate change impacts their investments and business decisions [...] making sure that companies are honest about what they know about climate change". (But actually this one seems to be mostly about legislation on actual emissions, rather than about what companies say. Not at all, of course, about what individuals believe.)

Bill Nye (actually the story isn't really about him; his own comment is super-vague): "did they mislead their investors and overvalue their companies by ignoring the financial costs of climate change and the potential of having to leave fossil fuel assets in the ground? [...] are they engaged in a conspiracy to mislead the public and affect public policy by knowingly manufacturing false doubt about the science of climate change?"

James Hansen: "he will accuse the chief executive officers [...] of being fully aware of the disinformation about climate change they are spreading"

David Suzuki: This is the one exception I mentioned above; Suzuki is (more precisely: was, 9 years ago) attacking politicians rather than fossil fuel companies. It seems to be rather unclear what he has in mind, at least from that report. He's reported as talking about "what's going on in Ottawa and Edmonton" and "what they're doing", but there are no specifics. What does seem clear is that (1) he's talking specifically about politicians and (2) it's "what they're doing" rather than "what they believe" that he has a problem with. From the fact that he calls it "an intergenerational crime", it seems like he must be talking about something with actual effects so I'm guessing it's lax regulation or something he objects to.

Lawrence Torcello (incidentally, why "no less"? An assistant professor is a postdoc; it's not exactly an exalted position): "corporate funding of global warming denial [...] purposefully strive to make sure "inexact, incomplete and contradictory information" is given to the public [...] not only corrupt and deceitful, but criminally negligent".

"Deceitful Tongues" paper: "the perpetrators of this deception must have been aware that its foreseeable impacts could be devastating [...] As long as climate change deniers can be shown to have engaged in fraud, that is, knowing and wilful deception, the First Amendment afford them no protection."

So, after nine attempts, you have given zero examples of anyone suggesting that it should be illegal to disbelieve in global warming. So, are you completely unable to read, or are you lying when you offer them as refutation of my statement that, and again I quote, "no one is suggesting that it should be illegal to disbelieve in global warming"?

(I should maybe repeat here a bit of hedging from elsewhere in the thread. It probably isn't quite true that no one at all, anywhere in the world has ever suggested that it should be illegal to disbelieve in global warming. Almost any idea, no matter how batshit crazy, has someone somewhere supporting it. So, just for the avoidance of doubt: what I meant is that "it should be illegal to disbelieve in global warming" is like "senior politicians across the world are really alien lizard people": you can doubtless find people who endorse it, but they will be few in number and probably notably crazy in other ways, and they are in no way representative of believers in global warming or "progressives" or climatologists or any other group you might think it worth criticizing.)

Replies from: Lumifer
comment by Lumifer · 2017-01-26T16:43:12.367Z · LW(p) · GW(p)

I was never a fan of beating my head against a brick wall.

Tap.

comment by gjm · 2017-01-25T11:11:25.152Z · LW(p) · GW(p)

This is factually incorrect

Your first link is to proposed legislation in California. O NOES! Is California going to make it illegal to disbelieve in global warming? Er, no. The proposed law -- you can go and read it; it isn't very long; the actual legislative content is section 3, which is three short paragraphs -- has the following effect: If a business engages in "unfair competition, as defined in Section 17200 of the Business and Professions Code" (it turns out this basically means false advertising), and except that the existing state of the law stops it being prosecuted because the offence was too long ago, then the Attorney General is allowed to prosecute it anyway.

I don't know whether that's a good idea, but it isn't anywhere near making it illegal to disbelieve in global warming. It removes one kinda-arbitrary limitation on the circumstances under which businesses can be prosecuted if they lie about global warming for financial gain.

Your second link is similar, except that it doesn't involve making anything illegal that wasn't illegal before; the DoJ is considering bringing a civil action (under already-existing law, since the DoJ doesn't get to make laws) against the fossil fuel industry for, once again, lying about global warming for financial gain.

Here: brainwashing. Do you like this word better?

"Brainwashing" is just as dishonestly bulshitty as "crimethink", and again so far as I can tell if either term applies here it would apply to (e.g.) pretty much everything that happens in high school science lessons.

Replies from: Lumifer
comment by Lumifer · 2017-01-25T16:01:21.579Z · LW(p) · GW(p)

but it isn't anywhere near making it illegal

Let me quote you yourself, with some emphasis:

No one is suggesting that it should be illegal

We're not talking about making new laws. We're talking about taking very wide and flexible existing laws and applying them to particular targets, ones to which they weren't applied before. The goal, of course, is intimidation and lawfare since the chances of a successful prosecution are slim. The costs of defending, on the other hand, are large.

"Lying for financial gain" is a very imprecise accusation. Your corner chip shop might have a sign which says "Best chips in town!" which is lying for financial gain. Or take non-profits which tend to publish, let's be polite and say "biased" reports which are, again, lying for financial gain.

Your point was that no one suggested going after denialists/sceptics with legal tools and weapons. This is not true.

Replies from: gjm
comment by gjm · 2017-01-25T17:40:45.680Z · LW(p) · GW(p)

Your point was that no one suggested going after denialists/sceptics with legal tools and weapons. This is not true.

It also is not my point. There are four major differences between what is suggested by your bloviation about "crimethink" and the reality:

  • "Crimethink" means you aren't allowed to think certain things. At most, proposals like the ones you linked to dishonest descriptions of[1] are trying to say you're not allowed to say certain things.
  • "Crimethink" is aimed at individuals. At most, proposals like the ones you linked to dishonest descriptions of[1] are trying to say that businesses are not allowed to say certain things.
  • "Crimethink" applies universally; a good citizen of Airstrip One was never supposed to contemplate the possibility that the Party might be wrong. Proposals like the ones you linked to dishonest descriptions of[1] are concerned only with what businesses are allowed to do in their advertising and similar activities.
  • "Crimethink" was dealt with by torture, electrical brain-zapping, and other such means of brute-force thought control. Proposals like the ones you linked to dishonest descriptions of[1] would lead at most to the same sort of sanction imposed in other cases of false advertising: businesses found guilty (let me remind you that neither proposal involves any sort of new offences) would get fined.

[1] Actually, the second one was OK. The first one, however, was total bullshit.

"Lying for financial gain" is a very imprecise accusation.

Sure. None the less, there is plenty that it unambiguously doesn't cover. Including, for instance, "disbelieving in global warming".

Replies from: Lumifer
comment by Lumifer · 2017-01-25T18:13:06.802Z · LW(p) · GW(p)

suggested by your bloviation about "crimethink"

Please stay on topic. This subthread is about your claim that "No one is suggesting that it should be illegal"

there is plenty that it unambiguously doesn't cover

Are you implying that you can disbelieve all you want deep in your heart but as soon as you open your mouth you're fair game?

Replies from: gjm
comment by gjm · 2017-01-25T19:28:29.537Z · LW(p) · GW(p)

Please stay on topic. This subthread is about your claim that "No one is suggesting that it should be illegal"

A claim I made because you were talking about "crimethink". And, btw, what was that you were saying elsewhere about other people wanting to set the rules of discourse? I'm sorry if you would prefer me to be forbidden to mention anything not explicit in the particular comment I'm replying to, but I don't see any reason why I should be.

Are you implying that [...]

No. (Duh.) But I am saying that a law that forbids businesses to say things X for purposes Y in circumstances Z is not the same as a law that forbids individuals to think X.

comment by jimmy · 2017-01-25T21:10:46.374Z · LW(p) · GW(p)

I don't think they are. Teaching people to reason is really hard. They describe what they're trying to do as "inoculation"

Oh. Well, in that case, if they’re saying “teaching you to not think bad is too hard, we’ll just make sure you don’t believe the wrong things, as determined by us”, then I kinda thought Lumifer’s criticism would have been too obvious to bother asking about.

Suppose the following is the case (as in fact I think it is): There is lots of creationist misinformation around and it misleads lots of people; there is much less anti-creationist misinformation around and it misleads hardly anyone. In that case, it is perfectly reasonable for non-creationists to try to address the problem of creationist misinformation without also addressing the (non-)problem of anti-creationist misinformation.

Oh… yeah, that’s not true at all. If it were true, and 99% of the bullshit were generated by one side, then yes, it would make sense to spend 99% of one’s time addressing bullshit from that one side and it wouldn’t be evidence for pushing an agenda. There are still other reasons to have a more neutral balance of criticism even when there’s not a neutral balance of bullshit or evidence, but you’re right - if the bullshit is lopsided then the lopsided treatment wouldn’t be evidence of dishonest treatment.

It’s just that bullshit from one’s own side is a whole lot harder to spot because you immediately gloss over it thinking “yep, that’s true” and don’t stop to notice “wait! That’s not valid!”. In every debate I can think of, my own side (or “the correct side”, if that’s something we’re allowed to declare in the face of disagreement) is full of shit too, and I just didn’t notice it years ago.

I'm not. Really, truly, I'm not. [...]it seems like [...] the most likely reasons for someone to say that discussion of misinformation in this area should be balanced in the sense of trying to address both kinds are (1) that the person is a global-warming skeptic (in which case it is unsurprising that their view of the misinformation situation differs from mine) and (2) that the person is a global-warming believer who has been persuaded by the global-warming skeptics that the question is much more open than (I think) it actually is.

This reads to me as “I’m not. Really, truly, I’m not. I’m just [doing exactly what you said I was doing]”. This is a little hard to explain as there is some inferential distance here, but I’ll just say that what I mean by “have given no indication of what I believe” and the reason I think that is important is different from what it looks like to you.

Sure. (Though I'm not sure "looking down on" is quite the right phrase.) So far as I can tell, the authors of the paper we're talking about don't make any claim not to be "looking down on" global-warming skeptics. The complaints against them that I thought we were discussing here weren't about them "looking down on" global-warming skeptics. Lumifer described them as trying to "prevent crimethink", and that characterization of them as trying to practice Orwellian thought control is what I was arguing against.

Part of “preventing crimethink” is that the people trying to do it usually believe that they are justified in doing so (“above” the people they’re trying to persuade), and also that they are “simply educating the masses”, not “making sure they don’t believe things that we believe [but like, we really believe them and even assert that they are True!]”.

With one bit of spin removed, this becomes "make sure they are correct rather than incorrect".

This is what it feels like from the inside when you try to enforce your beliefs on people. It feels like the beliefs you have are merely correct, not your own beliefs (that you have good reason to believe you’re right on, etc). However, you don’t have some privileged access to truth. You have to reason and stuff. If your reasoning is good, you might come to right answers even. If the way that you are trying to make sure they are correct is by finding out what is true [according to your own beliefs, of course] and then nudging them towards believing the things that are true (which works out to “things that you believe”), then it is far more accurate to say “make sure they hold the same beliefs as me”, even if you hold the correct beliefs and even if it’s obviously correct and unreasonable to disagree.

And again, just to be clear, this applies to creationism too.

With one bit of outright misrepresentation removed, it then becomes "make it more likely that they are correct rather than incorrect". This seems to me a rather innocuous aim. If I discover that (say) many people think the sun and the moon are the same size, and I write a blog post or something explaining that they're not even though they subtend about the same angle from earth, I am trying to "make sure that they believe Right thoughts". But you wouldn't dream of describing it that way. So what makes that an appropriate description in this case?

If you simply said “many people think the sun and the moon are the same size, they aren’t and here’s proof”, I’d see you as offering a helpful reason to believe that the sun is bigger.

If it was titled “I’m gonna prevent you from being wrong about the moon/sun size!”, then I’d see your intent a little bit differently. Again, I’m talking about the general principles here and not making claims about what the paper itself actually does (I cannot criticise the paper itself as I have not read it), but it sounded to me like they weren’t just saying “hey guys, look, scientists do actually agree!” and were rather saying “how can we convince people that scientists agree” and taking that agreement as presupposed. “Inoculate against this idea” is talking about the idea and the intent to change their belief. If all you are trying to do is offer someone a new perspective, you can just do that - no reason to talk about how “effective” this might be.

Unless you are comfortable saying that "X regards Y as crimethink" just means "X thinks Y is incorrect", in which case I'd love to hear you justify the terminology.

Yes, I thought it was obvious and common knowledge that Lumifer was speaking in hyperbole. No, they are not actually saying people should be arrested and tortured and I somehow doubt that is the claim Lumifer was trying to make here.

It’s not “thinks Y is incorrect”, it’s “socially punishes those who disagree”, even if it’s only mild punishment and even if you prefer not to see it that way. If, instead of arguing that they’re wrong, you presuppose that they’re wrong and that the only thing up for discussion is how they could come to the wrong conclusion, they’re going to feel like they’re being treated like an idiot. If you frame those who disagree with you as idiots, then even if you have euphemisms for it and try to say “oh, well it’s not your fault that you’re wrong, and everyone is wrong sometimes”, then they are not going to want to interact with you.

Does this make sense?

If you frame them as an idiot, then in order to have a productive conversation with you that isn’t just “nuh uh!”/”yeah huh!”, they have to accept the frame that they’re an idiot, and no one wants to do that. They may be an idiot, and from your perspective it may not be a punishment at all - just that you’re helping them realize their place in society as someone who can’t form beliefs on their own and should just defer to the experts. And you might be right.

Still, by enforcing your frame on them, you are socially punishing them, from their perspective, and this puts pressure on them to “just believe the right things”. It’s not “believe 2+2=5 or the government will torture you”, it’s “believe that this climate change issue is a slam dunk or gjm will publicly imply that you are unreasonable and incapable of figuring out the obvious”, but that pressure is a step in the same direction - whether or not the climate change issue is a slam dunk and whether or not 2+2=5 does not change a thing. If I act to lower the status of people who believe the sky isn’t blue without even hearing out their reasons, then I am policing thoughts, and it becomes real hard to be in my social circle if you don’t share this communal (albeit true) belief. This has costs even when the communal beliefs are true. At the point where I start thinking less of people and imposing social costs on them for not sharing my beliefs (and not their inability to defend their own or update), I am disconnecting the truth finding mechanism and banking on my own beliefs being true enough on their own. This is far more costly than it seems like it should be for more than one reason - the obvious one being that people draw this line waaaaaaay too early, and very often are wrong about things where they stop tracking the distinction between “I believe X” and “X is true”.

And yes, there are alternative ways of going about it that don't require you to pretend that "all opinions are equally valid" or that you don't think it would be better if more people agreed with you or any of that nonsense.

Does this make sense?

Replies from: gjm
comment by gjm · 2017-01-26T00:14:37.119Z · LW(p) · GW(p)

Oh. well in that case, if they’re saying “teaching you to not think bad is too hard, we’ll just make sure you don’t believe the wrong things, as determined by us”, then I kinda thought Lumifer’s criticism would have been too obvious to bother asking about.

Those awful geography teachers, making sure their pupils don't believe the wrong things (as determined by them) about what city is the capital of Australia! Those horrible people at snopes.com, making sure people don't believe the wrong things (as determined by them) about whether Procter & Gamble is run by satanists!

What makes Lumifer's criticism not "too obvious to bother about" is not doubt about whether the people he's criticizing are aiming to influence other people's opinions. It's whether there's something improper about that.

yeah, that's not true at all.

In your opinion, is anti-creationist misinformation as serious a problem as creationist misinformation? (10% as serious?)

This is what it feels like from the inside when you try to enforce your beliefs on people.

Yes, it is. But it's also what it feels like from the inside in plenty of other situations that don't involve enforcing anything, and it's also what it feels like from the inside when the beliefs in question are so firmly established that no reasonable person could object to calling them "facts" as well as "beliefs". (That doesn't stop them being beliefs, of course.)

(The argument "You are saying X. X is what you would say if you were doing Y. Therefore, you are doing Y." is not a sound one.)

it is far more accurate to say “make sure they hold the same beliefs as me”

The trouble is that the argument you have offered for this is so general that it applies e.g. to teaching people about arithmetic. I don't disagree that it's possible, and not outright false, to portray what an elementary school teacher is doing as "make sure these five-year-olds hold the same beliefs about addition as me"; but I think it's misleading for two reasons. Firstly, because it suggests that their goal is "have the children agree with me" rather than "have the children be correct". (To distinguish, ask: Suppose it eventually turns out somehow that you're wrong about this, but you never find that out. Would it be better if the children end up with right beliefs that differ from yours, or wrong ones that match yours? Of course they will say they prefer the former. So, I expect, will most people trying to propagate opinions that are purely political; I am not claiming that answering this way is evidence of any extraordinary virtue. But I think it makes it wrong to suggest that what they want is to be agreed with.) Secondly, because it suggests (on Gricean grounds) that there actually is, or is quite likely to be, a divergence between "the beliefs I hold" and "the truth" in their case. When it comes to arithmetic, that isn't the case.

Now, the fact (if you agree with me that it's a fact; maybe you don't) that the argument leads to a bad place when applied to teaching arithmetic doesn't guarantee that it does so when it comes to global warming. But if not, there must be a relevant difference between the two. In that case, what do you think the relevant differences are?

If it was titled “I’m gonna prevent you from being wrong about the moon/sun size!”, then I’d see your intent a little bit differently.

All the talk of "preventing" and other coercion is stuff that you and Lumifer have made up. It's not real.

it sounded to me like they weren’t just saying “hey guys, look, scientists do actually agree!” and were rather saying “how can we convince people that scientists agree” and taking that agreement as presupposed.

You know, you could actually just read the paper. It's publicly available and it isn't very long. Anyway: there are two different audiences involved here, and it looks to me (not just from the fragment I just quoted, but from what you say later on) as if you are mixing them up a bit.

The paper is (implicitly) addressed to people who agree with its authors about global warming. It takes it as read that global warming is real, not as some sort of nasty coercive attempt to make its readers agree with that but because the particular sort of "inoculation" it's about will mostly be of interest to people who take that position. (And perhaps also because intelligent readers who disagree will readily see how one might apply its principles to other issues, or other sides of the same issue if it happens that the authors are wrong about global warming.)

The paper describes various kinds of interaction between (by assumption, global-warming-believing) scientists and the public. So:

Those interactions are addressed to people who do not necessarily agree with the paper's authors about global warming. In fact, the paper is mostly interested in people who have neither strong opinions nor expertise in the field. The paper doesn't advocate treating those people coercively; it doesn't advocate trying to make them feel shame if they are inclined to disagree with the authors; it doesn't advocate trying to impose social costs for disagreeing; it doesn't advocate saying or implying that anyone is an idiot.

So. Yes, the paper treats global warming as a settled issue. That would be kinda rude, and probably counterproductive, if it were addressed to an audience a nontrivial fraction of which disagrees; but it isn't. It would be an intellectual mistake if in fact the evidence for global warming weren't strong enough to make it a settled issue; but in fact it is. (In my opinion, which is what's relevant for whether I am troubled by their writing what they do.)

Yes, I thought it was obvious and common knowledge that Lumifer was speaking in hyperbole.

I don't (always) object to hyperbole. The trouble is that so far as I can tell, nothing that would make the associations of "crimethink" appropriate is true in this case. (By which, for the avoidance of doubt, I mean not only "they aren't advocating torturing and brain-raping people to make them believe in global warming", for instance, but "they aren't advocating any sort of coercive behaviour at all". And likewise for the other implications of "crimethink".) The problem isn't that it's hyperbole, it's that it's not even an exaggeration of something real.

It’s not “thinks Y is incorrect”, it’s “socially punishes those who disagree”

Except that this "social punishment" is not something in any way proposed or endorsed by the paper Lumifer responded to by complaining about "crimethink". He just made that up. (And you were apparently happy to go along with it despite having, by your own description, not actually read the paper.)

If, instead of arguing that they’re wrong you presuppose that they’re wrong and that the only thing up for discussion is how they could come to the wrong conclusion, they’re going to feel like they’re being treated like an idiot.

No doubt. But, once again, none of that is suggested or endorsed by the paper; neither does it make sense to complain that the paper is itself practising that behaviour, because it is not written for an audience of global-warming skeptics.

You might, of course, want to argue that I am doing that, right here in this thread. I don't think that would be an accurate account of things, as it happens, but in any case I am not here concerned to defend myself. Lumifer complained that the paper was treating global warming skepticism as "crimethink", and that's the accusation I was addressing. If you want to drop that subject and discuss whether my approach in this thread is a good one, I can't stop you, but it seems like a rather abrupt topic shift.

If I act to lower the status of people who believe the sky isn’t blue without even hearing out their reasons, then I am policing thoughts

OK, I guess, though "policing thoughts" seems to me excessively overheated language. But, again, this argument can be applied (as you yourself observe) to absolutely anything. In practice, we generally don't feel the need to avoid saying straightforwardly that the sky is blue, or that 150 million years ago there were dinosaurs. That does impose some social cost on people who think the sky is red or that life on earth began 6000 years ago; but the reason for not hedging all the time with "as some of us believe", etc., isn't (usually) a deliberate attempt to impose social costs; it's that it's clearer and easier and (for the usual Gricean reasons: if you hedge, many in your audience will draw the conclusion that there must be serious doubt about the matter) less liable to mislead people about the actual state of expert knowledge if we just say "the sky is blue" or "such-and-such dinosaurs were around 150 million years ago".

But, again, if we're discussing -- as I thought we were -- the paper linked upthread, this is all irrelevant for the reasons given above. (If, on the other hand, we've dropped that subject and are now discussing whether gjm is a nasty rude evil thought-policer, then I will just remark that I do in fact generally go out of my way to acknowledge that some people do not believe in anthropogenic climate change; but sometimes, as e.g. when Lumifer starts dropping ridiculous accusations about "crimethink", I am provoked into being a bit more outspoken than usual. And what I am imposing (infinitesimal) social costs for here is not "expressing skepticism about global warming"; it's "being dickish about global warming" and, in fact, "attempting to impose social costs for unapologetically endorsing the consensus view on global warming", the latter being what I think Lumifer has been trying to do in this thread.)

Does this make sense?

Replies from: jimmy
comment by jimmy · 2017-01-26T19:12:46.683Z · LW(p) · GW(p)

I'm not criticizing the article, nor am I criticizing you. I'm criticizing a certain way of approaching things like this. I purposely refrain from staking a claim on whether it applies to the article or to you because I'm not interested in convincing you that it does or even determining for sure whether it does. I get the impression that it does apply, but who knows - I haven't read the article and I can't read your mind. If it doesn't, then congrats, my criticism doesn't apply to you.

Your thinking is on a very similar track to mine when you suggest the test "assuming you're wrong, do you want them to agree or be right?". The difference is that I don't think that people saying "be right, of course" is meaningful at all. I think you gotta look at what actually happens when they're confronted with new evidence that they are in fact wrong. If, when you're sufficiently confident, you drop the distinction between your map and the territory, not just in loose speech but in internal representation, then you lose the ability to actually notice when you're wrong and your actions will not match your words. This happens all the time.

I've never had a geography or arithmetic class suffer from that failure mode, and most of the time I disagreed with my teachers they responded in a way that actually helped us figure out which of us was right. However, in geometry, power electronics, and philosophy, I have run into this failure mode where when I disagree all they can think of is "how do I convince him he's wrong" rather than "let me address his point and see where that leads" - but that's because those particular teachers sucked and not a fault of teaching in general. With respect to that paper, the title does seem to imply that they've dropped that distinction. It is very common on that topic for people to drop the distinction and refuse to pick it up, so I'm guessing that's what they're doing there. Who knows though, maybe they're saints. If so, good for them.

In practice, we generally don't feel the need to avoid saying straightforwardly that the sky is blue, or that 150 million years ago there were dinosaurs. That does impose some social cost on people who think the sky is red or that life on earth began 6000 years ago;

Agreed.

I can straightforwardly say to you that there were dinosaurs millions of years ago because I expect that you'll be with me on that and I don't particularly care about alienating some observer who might disagree with us on that and is sensitive to that kind of thing. The important point is that the moment I find out that I'm actually interacting with someone who disagrees about what I presupposed, I stop presupposing that, apologize, and get curious - no matter how "wrong" they are, from my own viewpoint. It doesn't matter if the topic is creationism or global warming or whether they should drive home blackout drunk because they're upset.

A small minority of the times I won't, and instead I'll inform them that I'm not interested in interacting with them because they're an idiot. That's a valid response too, in the right circumstance. This is imposing social costs for beliefs, and I'm actually totally fine with it. I just want to be really sure that I am aware of what I'm doing, why I'm doing it, and that I have a keen eye out for the signs that I was missing something.

What I don't ever want to do is base my interactions with someone on the presupposition that they're wrong and/or unreasonable. If I'm going to choose to interact with them, I'm going to try to meet them where they're at. This is true even when I can put on a convincing face and hide how I really see them. This is true even when I'm talking to some third party about how I plan to interact with someone else. If I'm planning on interacting with someone, I'm not presupposing they're wrong/unreasonable. Because doing that would make me less persuasive in the cases I'm right and less likely to notice in the cases I'm not. There's literally no upside and there's downside whether or not I am, in fact, right.

In your opinion, is anti-creationist misinformation as serious a problem as creationist misinformation? (10% as serious?)

I wouldn't ask this question in the first place.

Does this make sense?

Yes. It doesn't surprise me that you believe that.

Replies from: gjm
comment by gjm · 2017-01-26T20:03:23.110Z · LW(p) · GW(p)

I'm not criticizing the article, nor am I criticizing you. I'm criticizing a certain way of approaching things like this.

That seems like the sort of thing that really needs stating up front. It's that Gricean implicature thing again: If someone writes something about goldfish and you respond with "It's really stupid to think that goldfish live in salt water", it's reasonable (unless there's some other compelling explanation for why you bothered to say that) to infer that you think they think goldfish live in salt water.

(And this sort of assumption of relevance is a good thing. It makes discussions more concise.)

The difference is that I don't think that people saying "be right, of course" is meaningful at all. I think you gotta look at what actually happens when they're confronted with new evidence that they are in fact wrong.

For sure that's far more informative. But, like it or not, that's not information you usually have available.

If [...] you drop the distinction between your map and the territory [...]

Yup, it's a thing that happens, and it's a problem (how severe a problem depends on how well being "sufficiently confident" correlates, for the person in question, with actually being right).

With respect to that paper, the title does seem to imply that they've dropped that distinction.

As you say, there's an important difference between dropping it externally and dropping it internally. I don't know of any reliable way to tell when the former indicates the latter and when it doesn't. Nor do I have a good way to tell whether the authors have strong enough evidence that dropping the distinction internally is "safe", that they're sufficiently unlikely to turn out to be wrong on the object level.

My own guess is that (1) it's probably pretty safe to drop it when it comes to the high-level question "is climate change real?", (2) the question w.r.t. which the authors actually show good evidence of having dropped the distinction is actually not that but "is there a strong expert consensus that climate change is real?", and (3) it's probably very safe to drop the distinction on that one; if climate change turns out not to be real then the failure mode is "all the experts got it wrong", not "there was a fake expert consensus". So I don't know whether the authors are "saints" but I don't see good reason to think they're doing anything that's likely to come back to bite them.

the moment I find out that I'm actually interacting with someone who disagrees about what I presupposed, I stop presupposing that [...]

I think this is usually the correct strategy, and it is generally mine too. Not 100% always, however. Example: Suppose that for some reason you are engaged in a public debate about the existence of God, and at some point the person you're debating with supports one of his arguments with some remark to the effect that of course scientists mostly agree that so-called dinosaur fossils are really the bones of the biblical Leviathan, laid down on seabeds and on the land during Noah's flood. The correct response to this is much more likely to be "No, sorry, that's just flatly wrong" than "gosh, that's interesting, do tell me more so I can correct my misconceptions".

I wouldn't ask this question in the first place.

That's OK, you don't need to; I already did. I was hoping you might answer it.

Yes. It doesn't surprise me that you believe that.

So, given what you were saying earlier about "imposing social costs", about not presupposing people are unreasonable, about interacting with people respectfully if at all ... You do know how that "It doesn't surprise me ..." remark comes across, and intend it that way, right?

(In case the answer is no: It comes across as very, very patronizing; as suggesting that you have understood how I, poor fool that I am, have come to believe the stupid things I believe; but that they aren't worth actually engaging with in any way. Also, it is very far from clear what "that" actually refers to.)

Replies from: jimmy
comment by jimmy · 2017-01-27T21:48:18.338Z · LW(p) · GW(p)

That seems like the sort of thing that really needs stating up front. It's that Gricean implicature thing again: If someone writes something about goldfish and you respond with "It's really stupid to think that goldfish live in salt water", it's reasonable (unless there's some other compelling explanation for why you bothered to say that) to infer that you think they think goldfish live in salt water.

(And this sort of assumption of relevance is a good thing. It makes discussions more concise.)

If someone writes "it's stupid to think that goldfish live in saltwater" there's probably a reason they say this, and it's generally not a bad guess that they think you think they can live in salt water. However, it is still a guess and to respond as if they are affirmatively claiming that you believe this is putting words in their mouth that they did not say and can really mess with conversations, as it has here.

For sure that's far more informative. But, like it or not, that's not information you usually have available.

Agree to disagree.

My own guess is that (1) it's probably pretty safe

A big part of my argument is that it doesn't matter if Omega comes down and tells you that you're right. It's still a bad idea.

Another big part is that even when people guess that they're probably pretty safe, they end up being wrong a really significant portion of the time, and that from the outside view it is a bad idea to drop the distinction simply because you feel it is "probably pretty safe" - especially when there is absolutely no reason to do it and still reason not to even if you're correct on the matter. (also, people are still often wrong even when they say "yeah, but that's different. They're overconfident, I'm actually safe")

I don't see good reason to think they're doing anything that's likely to come back to bite them.

I note that you don't. I do.

That's OK, you don't need to; I already did. I was hoping you might answer it.

The point is that I don't see it as worth thinking about. I don't know what I would do with the answer. It's not like I have a genie that is offering me the chance to eliminate the problems caused by one side or the other, but that I have to pick.

There are a lot of nuances in things like this, and making people locally more correct is not even always a good thing. I haven't seen any evidence that you appreciate this point, and until I do I can only assume that this is because you don't. It doesn't seem that we agree on what the answer to that question would mean, and until we're on the same page there it doesn't make any sense to try to answer it.

So, given what you were saying earlier about "imposing social costs", about not presupposing people are unreasonable, about interacting with people respectfully if at all ... You do know how that "It doesn't surprise me ..." remark comes across, and intend it that way, right?

(In case the answer is no: It comes across as very, very patronizing; as suggesting that you have understood how I, poor fool that I am, have come to believe the stupid things I believe; but that they aren't worth actually engaging with in any way. Also, it is very far from clear what "that" actually refers to.)

I am very careful with what I presuppose, and what I said does not actually presuppose what you say it does. It's not presupposing that you are wrong or not worth engaging with. It does imply that, as it looks to me - and I do keep this distinction in mind when saying this - it was not worth it for me to engage with you on that level at the time I said it. Notice that I am engaging with you and doing my best to get to the source of our actual disagreement - it's just not on the level you were responding on. Before engaging with why you think my argument is wrong, I want to have some indication that you actually understand what my argument is, that's all, and I haven't seen it. This seems far less insulting to me than "poor fool who I am willing to presuppose believes stupid things and is not worth engaging with in any way". Either way though, as a neutral matter of fact, I wasn't surprised by anything you said so take it how you would like.

I'm not presupposing that you're not worth engaging with on that level, but I am refusing to accept your presupposition that you are worth engaging with on that level. That's up for debate, as far as I'm concerned, and I'm open to you being right here. My stance is that it is never a good idea to presuppose things that you can predict your conversation partner will disagree with unless you don't mind them writing you off as an arrogant fool and disengaging, but that you never have to accept their presuppositions out of politeness. Do you see why this distinction is important to me?

I was aware that what I said was likely to provoke offense, and I would like to avoid that if possible. It's just that, if you are going to read into what I say and treat it as if I am actively claiming things when you just have shaky reason to suspect that I privately believe them, then you're making me choose between "doing a lot of legwork to prevent gjm from unfairly interpreting me" and "letting gjm unfairly interpret me and get offended by things I didn't say". I have tried to make it clear that I'm only saying what I'm saying, and that the typical inferences aren't going to hold true, and at some point I gotta just let you interpret things how you will and then let you know that again, I didn't claim anything other than what I claimed.

Replies from: gjm
comment by gjm · 2017-01-28T00:07:42.009Z · LW(p) · GW(p)

However, it is still a guess and to respond as if they are affirmatively claiming that you believe this is putting words in their mouth that they did not say and can really mess with conversations, as it has here.

In my experience, when it messes with conversations it is usually because one party is engaging in what I would characterize as bad-faith conversational manoeuvres.

I haven't seen any evidence that you appreciate this point

I'm not sure there's anything I could say or do that you would take as such evidence. (General remark: throughout this discussion you appear to have been assuming I fail to understand things that I do in fact understand. I do not expect you to believe me when I say that. More specific remark: I do in fact appreciate that point, but I don't expect you to believe me about that either.)

I want to have some indication that you actually understand what my argument is, that's all, and I haven't seen it.

I am generally unenthusiastic about this sort of attempt to seize the intellectual high ground by fiat, not least because it is unanswerable if you choose to make it so; I remark that there are two ways for one person's argument not to be well understood by another; and it seems to me that the underlying problem here is that from the outset you have proceeded on the assumption that I am beneath your intellectual level and need educating rather than engaging. However, I will on this occasion attempt to state your position and see whether you consider my attempt adequate. (If not, I suggest you write me off as too stupid to bother discussing with and we can stop.) I will be hampered here and there by the fact that in many places you have left important bits of your argument implicit, chosen not to oblige when I've asked you questions aimed at clarifying them, and objected when I have made guesses.

So. Suppose we have people A and B. A believes a proposition P (for application to the present discussion, take P to be something like "the earth's climate has warmed dramatically over the last 50 years, largely because of human activity, and is likely to continue doing so unless we change what we're doing") and is very confident that P is correct. B, for all A knows, may be confident of not-P, or much less confident of P than A is, or not have any opinion on the topic just yet. The first question at issue is: How should A speak of P, in discussion with B (or with C, with B in the audience)? And, underlying it: How should A think of P, internally?

"Internally" A's main options are (1) to treat P as something still potentially up for grabs or (2) to treat it as something so firmly established that A need no longer bother paying attention to how evidence and arguments for and against P stack up. With unlimited computational resources and perfect reasoning skills, #1 would be unambiguously better in all cases (with possible exceptions only for things, if any there be, so fundamental that A literally has no way of continuing to think if they turn out wrong); in practice, #2 is sometimes defensible for the sake of efficiency or (perhaps) if there's a serious danger of being manipulated by a super-clever arguer who wants A to be wrong. The first of those reasons is far, far more common; I don't know whether the second is ever really sufficient grounds for treating something as unquestionable. (But e.g. this sort of concern is one reason why some religious people take that attitude to the dogmas of their faith: they literally think there is a vastly superhuman being actively trying to get them to hold wrong beliefs.)

"Externally" A's main options are (1) to talk of P as a disputable matter, to be careful to say things like "since I think P" rather than "since P", etc., when talking to B; and (2) to talk as if A and B can both take it for granted that P is correct. There is some scope for intermediate behaviours, such as mostly talking as if P can be taken for granted but ocasionally making remarks like "I do understand that P is disputed in some quarters" or "Of course I know you don't agree about this, but it's so much less cumbersome not to shoehorn qualifications into every single sentence". There is also a "strong" form of #2 where A says or implies that no reasonable person would reject P, that P-rejecters are stupid or dishonest or crazy or whatever.

Your principal point is, in these terms, that "internally" #2 is very dangerous, even in cases where A is extremely confident that contrary evidence is not going to come along, and that "externally" #2 is something of a hostile act if in fact B doesn't share A's opinion because it means that B has to choose between acquiescing while A talks as if everyone knows that P, or else making a fuss and disagreeing and quite possibly being seen as rude. (And also because this sort of pressure may produce an actual inclination on B's part to accept P, without any actual argument or evidence having been presented.) Introducing this sort of social pressure can make collective truthseeking less effective because it pushes B's thinking around in (I think you might say, though for my part I would want to add some nuance) ways basically uncorrelated with truth. (There's another opposite one, just as uncorrelated with truth or more so, which I don't recall you mentioning: B may perceive A as hostile and therefore refuse to listen even if A has very strong evidence or arguments to offer.) And you make the secondary point that internal #2 and external #2 tend to spill over into one another, so that each also brings along the other's dangers.

We are agreed that internal #2 is risky and external #2 is potentially (for want of a better term) rude, "strong" external #2 especially so. We may disagree on just how high the bar should be for choosing either internal or external #1, and we probably do disagree more specifically on how high it should be in the case where P is the proposition about global warming mentioned above.

(We may also disagree about whether it is likely that the authors of the paper we were discussing are guilty of choosing, or advocating that their readers choose, some variety of #2 when actually #1 would be better; about whether it is likely that I am; about whether it makes sense to apply terms like "crimethink" when someone adopts external #2; and/or about how good the evidence for that global-warming proposition actually is. But I take it none of that is what you wish to be considered "your argument" in the present context.)

In support of the claim that internal #2 is dangerous far more often than A might suppose, you observe (in addition to what I've already said above) that people are very frequently very overconfident about their beliefs; that viewed externally, A's supreme confidence in P doesn't actually make it terribly unlikely that A is wrong about P. Accordingly, you suggest, A is making a mistake in adopting internal #2 even if it seems to A that the evidence and arguments for P are so overwhelming that no one sane could really disagree -- especially if there are in fact lots of people, in all other respects apparently sane, who do disagree. I am not sure whether you hold that internal #2 is always an error; I think everything you've said is compatible with that position but you haven't explicitly claimed it and I can think of good reasons for not holding it.

In support of the claim that external #2 is worse than A might suppose, you observe that (as mentioned above) doing it imposes social costs on dissenters, thereby making it harder for them to think independently and also making it more likely that they will just go away and deprive A's community of whatever insight they might offer. And (if I am interpreting correctly one not-perfectly-clear thing you said) that doing this amounts to deciding not to care about contrary evidence and arguments, in other words to implicitly adopting internal #2 with all its dangers. You've made it explicit that you're not claiming that external #2 is always a bad idea; on the face of it you've suggested that external #2 is fine provided A clearly understands that it involves (so to speak) throwing B to the wolves; my guess is that in fact you consider it usually not fine to do that; but you haven't made it clear (at least to me) what you consider a good way to decide whether it is. It is, of course, clear that you don't consider that great confidence about P on A's part is in itself sufficient justification. (For the avoidance of doubt, nor do I; but I think I am willing to give it more weight than you are.)

That'll do for now. I have not attempted to summarize everything you've said, and perhaps I haven't correctly identified what subset you consider "your argument" for present purposes. (In particular, I have ignored everything that appears to me to be directed at specific (known or conjectured) intellectual or moral failings of the authors of the paper, or of me, and attended to the more general point.)

Replies from: Lumifer, jimmy
comment by Lumifer · 2017-01-29T01:17:24.138Z · LW(p) · GW(p)

something like "the earth's climate has warmed dramatically over the last 50 years, largely because of human activity, and is likely to continue doing so unless we change what we're doing"

Without restarting the discussion, let me point out what I see to be the source of many difficulties. You proposed a single statement to which you, presumably, want to attach some single truth value. However your statement consists of multiple claims from radically different categories.

"the earth's climate has warmed dramatically over the last 50 years" is a claim of an empirical fact. It's relatively easy to discuss it and figure out whether it's true.

"largely because of human activity" is a causal theory claim. This is much MUCH more complex than the preceding claim, especially given the understanding (existing on LW) that conclusions about causation do not necessarily fall out of descriptive models.

"and is likely to continue doing so" is a forecast. Forecasts, of course, cannot be proved or disproved in the present. We can talk about our confidence in a particular forecast which is also not exactly a trivial topic.

Jamming three very different claims together and treating them as a single statement doesn't look helpful to me.

Replies from: gjm, Good_Burning_Plastic
comment by gjm · 2017-01-29T01:48:10.542Z · LW(p) · GW(p)

a single statement to which you, presumably, want to attach some single truth value

It would be a probability, actually, and it would need a lot of tightening up before it would make any sense even to try to attach any definite probability to it. (Though I might be happy to say things like "any reasonable tightening-up will yield a statement to which I assign p>=0.9 or so".)
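(To make the arithmetic concrete: a single probability can be attached to such a conjunction, and it is bounded by the probabilities of its parts whatever the dependence between them. The sketch below uses the standard Fréchet bounds; the component probabilities are purely hypothetical numbers chosen for illustration, not anyone's stated credences.)

```python
# Hypothetical probabilities for the three sub-claims; illustrative only.
p_warmed = 0.99    # "the climate has warmed over the last 50 years"
p_human = 0.95     # "largely because of human activity"
p_continue = 0.90  # "likely to continue unless we change what we're doing"

# A conjunction can never be more probable than its weakest conjunct.
upper = min(p_warmed, p_human, p_continue)

# Frechet/Bonferroni lower bound: subtract the probability mass each
# conjunct could lose, whatever the dependence between the claims.
lower = max(0.0, 1 - (1 - p_warmed) - (1 - p_human) - (1 - p_continue))

print(f"P(conjunction) lies in [{lower:.2f}, {upper:.2f}]")  # [0.84, 0.90]
```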

your statement consists of multiple claims from radically different categories

Yes, it does.

For the avoidance of doubt, in writing down a conjunction of three simpler propositions I was not making any sort of claim that they are of the same sort, or that they are equally probable, or that they are equivalent to one another, or that it would not often be best to treat individual ones (or indeed further-broken-down ones) separately.

Jamming three very different claims together and treating them as a single statement doesn't look helpful to me.

It seems perfectly reasonable to me. It would be unhelpful to insist that the subsidiary claims can't be considered separately (though each of them is somewhat dependent on its predecessors; it doesn't make sense to ask why the climate has been warming if in fact it hasn't, and it's risky at best to forecast something whose causes and mechanisms are a mystery to you) but, I repeat, I am not in any way doing that. It would be unhelpful to conflate the evidence for one sub-claim with that for another; that's another thing I am not (so far as I know) doing. But ... unhelpful simply to write down a conjunction of three closely related claims? Really?

Replies from: Lumifer
comment by Lumifer · 2017-01-29T02:05:18.399Z · LW(p) · GW(p)

But ... unhelpful simply to write down a conjunction of three closely related claims? Really?

You can, of course, write down anything you want. But I believe that treating that conjunction as a single "unit" is unhelpful, yes.

Replies from: gjm
comment by gjm · 2017-01-29T02:39:06.710Z · LW(p) · GW(p)

In what sense (other than writing it down, and suggesting that it summarizes what is generally meant by "global warming" when people say they do or don't believe it) am I treating it as a single unit?

Replies from: Lumifer
comment by Lumifer · 2017-01-29T03:25:33.617Z · LW(p) · GW(p)

As I mentioned, I don't want to restart the discussion. Feel free to discard my observation if you don't find it useful.

comment by Good_Burning_Plastic · 2017-01-29T02:10:20.188Z · LW(p) · GW(p)

"the earth's climate has warmed dramatically over the last 50 years" is a claim of an empirical fact.

"The earth's climate has warmed by about x °C over the last 50 years" is a claim of an empirical fact. "It is dramatic for a planet to warm by about x °C in 50 years" is an expression of the speaker's sense of drama.

Replies from: Lumifer
comment by Lumifer · 2017-01-29T03:23:47.393Z · LW(p) · GW(p)

Yeah, sure, but I'm skipping over the drama. If we ever find ourselves debating this, I'm sure that x will get established pretty quickly.

comment by jimmy · 2017-01-28T20:02:47.856Z · LW(p) · GW(p)

I'm not sure there's anything I could say or do that you would take as such evidence.

What you say below (“I do in fact appreciate that point”) is all it takes for this.

(General remark: throughout this discussion you appear to have been assuming I fail to understand things that I do in fact understand. I do not expect you to believe me when I say that.

For what it’s worth, I feel the same way about this. From my perspective, it looks like you are assuming that I don’t get things that I do get, assuming I’m assuming things I am not assuming, saying I’m saying things I’m not saying, not addressing my important points, being patronizing yourself, “gish galloping”, and generally arguing in bad faith. I just had not made a big stink about it because I didn’t anticipate that you wanted my perspective on this or that it would cause you to rethink anything.

Being wrong about what one understands is common too (illusion of transparency, and all that), but I absolutely do take this as very significant evidence as it does differentiate you from a hypothetical person who is so wrapped up in ego defense that they don’t want to address this question.

I am generally unenthusiastic about this sort of attempt to seize the intellectual high ground by fiat, not least because it is unanswerable if you choose to make it so;

Can you explain what you mean by “attempt to seize the intellectual high ground” and “it is unanswerable”, as it applies here? I don’t think I follow. I don’t think I’m “attempting to seize” anything, and have no idea what the question that “unanswerable” applies to is.

it seems to me that the underlying problem here is that from the outset you have proceeded on the assumption that I am beneath your intellectual level and need educating rather than engaging.

Is this “educating” as in scare quotes “educating”/“correcting your foolish wrong thoughts”, or as in the “I thought you might be interested in hearing what I have to say about the topic, so I shared” kind of educating? I’ll agree that it’s the latter, but I wouldn’t put “beneath [my] intellectual level” on it. You asked a question, I had an answer, I thought you wanted it. Asking questions does not make people inferior or “beneath” anyone else, in my opinion.

However, if you mean “you don’t seem interested in my rebuttal”, then you’re right, I was not. I have put a ton of thought into the ethics of persuasion over the last several years, and there aren’t really any questions here that I don’t feel like I have a pretty darn solid answer to. Additionally, if you don’t already think about these problems the way that I do, it’s actually really difficult to convey my perspective, even if communication is flowing smoothly. And it often doesn’t, because it’s also really really easy to think I’m talking about something else, leading to the illusion that my point has been understood. This combination makes run-of-the-mill disagreement quite uninteresting, and I only engaged because I mistook your original question for “I would like to learn how to differentiate between teaching and thought-policing”, not “I would like to argue that they aren’t thought policing and that you’re wrong to think they are”.

And again, I do not think it warrants accusations of “patronizing you poor, poor fool” for privately holding the current best guess that this disagreement is more likely to be about you misunderstanding my point than about me hallucinating something in their title. Am I allowed to believe I’m probably right, or do I have to believe that you’re probably right and that I’m probably wrong? Are you allowed to believe that you’re probably right?

However, I will on this occasion attempt to state your position and see whether you consider my attempt adequate.

It is far enough off that I can’t endorse it as “getting” where I’m coming from. For example, “being seen as rude”, itself, is so not what it’s about. There are often two very different ways of looking at things that can produce superficially similar prescriptions for fundamentally different reasons. It looks like you understand the one I do not hold, but do not realize that there is another, completely different, reason to not want to do #2 externally.

However, I do appreciate it as an intellectually honest attempt to check your understanding of my views and it does capture the weight of the main points themselves well enough that I’m curious to hear where you disagree (or if you don’t disagree with it as stated).

Somewhat relatedly but somewhat separately, I’m interested to hear how you think it applies to how you’ve approached things here. From my perspective, you’re doing a whole lot of the external #2 at me. Do you agree and think it’s justified? If so, how? Do you not see yourself as doing external #2 here? If so, do you understand how it looks that way to me?

Given this summary of my view, I do think I see why you don’t see it as suggesting that the researchers were making any mistake. The reason I do think they’re making a mistake is not present in your description of my views.

I will be hampered here and there by the fact that in many places you have [...] chosen not to oblige when I've asked you questions aimed at clarifying them, and objected when I have made guesses.

Hold on.

I gotta stop you there because that’s extremely unfair. I haven’t answered every question you’ve asked, but I have addressed most, if not all of them (and if there’s one I missed that you would like me to address, ask and I will). I also specifically addressed the fact that I don’t have a problem with you making guesses but that I don’t see it as very charitable or intellectually honest when you go above and beyond and respond as if I had actively claimed those things.

You've made it explicit that you're not claiming that external #2 is always a bad idea; on the face of it you've suggested that external #2 is fine provided

This is a very understandable reading of what I said, but no. I do not agree that what you call “external #2” is ever a good thing to do either. I also would not frame it that way in the first place.

Replies from: gjm
comment by gjm · 2017-01-28T21:53:25.263Z · LW(p) · GW(p)

"gish galloping"

I did not accuse you of that. I don't think you've done that. I said that Lumifer did it because, well, he did: I said "no one is proposing X", he said "what about A and B", I pointed out that A and B were not in fact proposing X, and he posted another seven instances of ... people not proposing X. A long sequence of bad arguments, made quickly but slower to answer: that is exactly what a Gish gallop is. I don't think you've been doing that, I don't think Lumifer usually does it, but on this occasion he did.

I am generally unenthusiastic about this sort of attempt to seize the intellectual high ground by fiat, not least because it is unanswerable if you choose to make it so;

Can you explain what you mean by “attempt to seize the intellectual high ground” and “it is unanswerable”, as it applies here?

"Attempting to seize the intellectual high ground" = "attempting to frame the situation as one in which you are saying clever sensible things that the other guy is too stupid or blinkered or whatever to understand. "Unanswerable if you choose to make it so" because when you say "I don't think you have grasped my argument", any response I make can be answered with "No, sorry, I was right: you didn't understand my argument" -- regardless of what I actually have understood or not understood. (I suppose one indication of good or bad faith on your part, in that case, would be whether you then explain what it is that I allegedly didn't understand.)

Am I allowed to believe that I'm probably right [...]?

I am greatly saddened, and somewhat puzzled, that you apparently think I might think the answer is no. (Actually, I don't think you think I might think the answer is no; I think you are grandstanding.) Anyway, for the avoidance of doubt, I have not the slightest interest in telling anyone else what they are allowed to believe, and if (e.g.) what I have said upthread about that paper about global warming has led you to think otherwise then either I have written unclearly or you have read uncharitably or both.

For example, “being seen as rude”, itself, is so not what it’s about.

The problem here is unclarity on my part or obtuseness on yours, rather than obtuseness on my part or unclarity on yours :-). The bit about "being seen as rude" was not intended as a statement of your views or of your argument; it was part of my initial sketch of the class of situations to which those views and that argument apply. The point at which I start sketching what I think you were saying is where I say "Your principal point is, in these terms, ...".

The reason I do think they’re making a mistake is not present in your description of my views.

Well, I was (deliberately) attempting to describe what I took to be your position on the general issue, rather than on what the authors of the article might or might not have done. (I am not all that interested in what you think they have done, since you've said you haven't actually looked at the article.) But it's entirely possible that I've failed to notice some key part of your argument, or forgotten to mention it even though if I'd been cleverer I would have. I don't suppose you'd like to explain what it is that I've missed?

This is a very understandable reading of what I said, but no. I do not agree that what you call "external #2" is ever a good thing to do either.

Just in case anyone other than us is reading this, I would like to suggest that those hypothetical readers might like to look back at what I actually wrote and how you quoted it, and notice in particular that I explicitly said that I think your position probably isn't the one that "on the face of it you've suggested". (Though it was not previously clear to me that you think "external #2" is literally never a good idea. One reason is that it looks to me -- and still does after going back and rereading -- as if you explicitly said that you sometimes do it and consider it reasonable. See here and search for "A small minority".)


As to the other things you've said (e.g., asking whether and where and why I disagree with your position), I would prefer to let that wait until you have helped me fix whatever errors you have discerned in my understanding of your position and your argument. Having gone to the trouble of laying it out, it seems like it would be a waste not to do that, don't you think?

You've made specific mention of two errors. One (see above) wasn't ever meant to be describing your position, so that's OK. The other is that my description doesn't mention "the reason I do think they're making a mistake" (they = authors of that article whose title you've read); I don't know whether that's an error on my part, or merely something I didn't think warranted mentioning, but the easiest way to find out would be for you to say what that reason is.

Your other comments give the impression that there are other deficiencies (e.g., "It is far enough off that I can’t endorse it as “getting” where I’m coming from." and "It looks like you understand the one I do not hold, but do not realize that there is another, completely different, reason to not want to do #2 externally.") and I don't think it makes any sense to proceed without fixing this. (Where "this" is probably a lack of understanding on my part, but might also turn out to be that for one reason or another I didn't mention it, or that I wasn't clear enough in my description of what I took to be your position.) If we can't get to a point where we are both satisfied that I understand you adequately, we should give up.

Replies from: satt, jimmy, jimmy, Lumifer
comment by satt · 2017-01-29T17:49:13.185Z · LW(p) · GW(p)

Just in case anyone other than us is reading this,

For whatever little it's worth, I read the first few plies of these subthreads, and skimmed the last few.

From my partial reading, it's unclear to me that Lumifer is/was actually lying (being deliberately deceptive). More likely, in my view, is/was that Lumifer sincerely thinks spurious your distinction between (1) criminalizing disbelief in global warming, and (2) criminalizing the promulgation of assertions that global warming isn't real in order to gain an unfair competitive advantage in a marketplace. I think Lumifer is being wrong & silly about that, but sincerely wrong & silly. On the "crimethink" accusation as applied to the paper specifically, Lumifer plainly made a cheap shot, and you were right to question it.

As for your disagreement with jimmy, I'm inclined to say you have the better of the argument, but I might be being overly influenced by (1) my dim view of jimmy's philosophy/sociology of argument, at least as laid out above, (2) my incomplete reading of the discussion, and (3) my knowledge of your track record as someone who is relatively often correct, and open to dissecting disagreement with others, often to a painstaking extent.

Replies from: gjm, jimmy
comment by gjm · 2017-01-30T02:12:01.197Z · LW(p) · GW(p)

This is helpful; thanks.

comment by jimmy · 2017-01-30T23:12:18.936Z · LW(p) · GW(p)

I, also, appreciate this comment.

I would like to quibble here that I'm not trying to argue anything, and that if gjm had said "I don't think the authors are doing anything nearly equivalent to crimethink and would like to see you argue that they are", I wouldn't have engaged because I'm not interested in asserting that they are.

I'd call it more "[...] of deliberately avoiding argument in favor of "sharing honestly held beliefs for what they're taken to be worth", to those that are interested". If they're taken (by you, gjm, whoever) to be worth zero and there's no interest in hearing them and updating on them, that's totally cool by me.

comment by jimmy · 2017-01-30T22:43:32.310Z · LW(p) · GW(p)

(comment split because it got too long)

I am greatly saddened, and somewhat puzzled, that you apparently think I might think the answer is no. (Actually, I don't think you think I might think the answer is no; I think you are grandstanding.)

It’s neither. I have a hard time imagining that you could say no. I was just making sure to cover all the bases because I also have a hard time imagining that you could still say that I’m actively trying to claim anything after I’ve addressed that a couple times.

I bring it up because at this point, I’m not sure how you can simultaneously hold the views “he can believe whatever he wants” and “he hasn’t done anything in addition that suggests judgement” (which I get that you haven’t yet agreed to, but you haven’t addressed my arguments that I haven’t done any such thing either), and then accuse me of trying to claim the intellectual high ground, all without cognitive dissonance. I’m giving you a chance to either teach me something new (i.e. “how gjm can simultaneously hold these views congruently”), or, in the case that you can’t, the chance for you to realize it.

The bit about "being seen as rude" was not intended as a statement of your views or of your argument; it was part of my initial sketch of the class of situations to which those views and that argument apply. The point at which I start sketching what I think you were saying is where I say "Your principal point is, in these terms, ...".

Quoting you, “Your principal point is, in these terms, that [...] and that "externally" #2 is something of a hostile act if in fact B doesn't share A's opinion because it means that B has to choose between acquiescing while A talks as if everyone knows that P, or else making a fuss and disagreeing and quite possibly being seen as rude.” (emphasis mine)

That looks like it’s intended to be a description of my views to me, given that it directly follows the point where you start sketching out what my views are, following a “because”, and before the first period.

Even if it’s not, though, and you’re saying it only as part of a sketch of the situation, it’s a sketch that anyone who sees things the way I do can tell I won’t consider relevant, and the fact that you mention it anyway - even if just as part of that sketch - indicates either that you’re missing this, or that you’re knowingly giving a sketch I don’t agree with, as if my disagreement were irrelevant.

Well, I was (deliberately) attempting to describe what I took to be your position on the general issue, rather than on what the authors of the article might or might not have done.

Right. I think it is the correct approach to describe my position in general. However, the piece of my general position that would come into play in this specific instance was not present, so if you apply those views as stated, of course you wouldn’t have a problem with what the authors have done in this specific instance.

(I am not all that interested in what you think they have done, since you've said you haven't actually looked at the article.)

I am also not interested in what (I think) they have done in the article. I have said this already, but I’ll agree again if you’d like. You’re right to not be interested in this.

I don't suppose you'd like to explain what it is that I've missed?

Honestly, I would love to. I don’t think I’m capable of explaining it to you as of where we stand right now. Other people, yes. Once we get to the bottom of our disagreement, yes. Not until then though.

This conversation has been fascinating to me, but it has also been a bit fatiguing to make the same points and not see them addressed. I’m not sure we’ll make it that far, but it’d be awesome if we do.

notice in particular that I explicitly said that I think your position probably isn't the one that "on the face of it you've suggested".

Yes, I noticed that qualification and agree. On the face of it, it certainly does look that way. That’s what I meant by “a very understandable reading”.

However, the preceding line is “You've made it explicit that you're not claiming that external #2 is always a bad idea”, and that is not true. I said “A small minority of the times I won't [...]”, and what follows is not explicitly “external #2”. I can see how you would group what follows with “external #2”, but I do not. This is what I mean when I say that I predict you will assume that you’re understanding what I’m saying when you do not.

As to the other things you've said (e.g., asking whether and where and why I disagree with your position), I would prefer to let that wait until you have helped me fix whatever errors you have discerned in my understanding of your position and your argument.

This seems backwards to me. Again, with the double cruxing, you have to agree on F before you can agree on E before you can agree on D before you can even think about agreeing on the original topic. This reads to me like you’re saying you want me to explain why we disagree on B before you address C.
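(For concreteness, here is a minimal sketch of the crux-chain ordering being described; the claim names and the chain itself are illustrative placeholders, not anyone's actual cruxes.)

```python
# Resolve a disagreement by first resolving the deeper disagreement
# it hinges on, all the way down the chain of cruxes.
def resolve(claim, crux_of):
    deeper = crux_of.get(claim)
    if deeper is not None:
        resolve(deeper, crux_of)  # agree on F before E, E before D, ...
    print(f"now able to discuss: {claim}")

# The chain from the comment: the original topic hinges on D,
# which hinges on E, which hinges on F.
resolve("original topic", {"original topic": "D", "D": "E", "E": "F"})
# prints F, E, D, then the original topic, in that order
```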

Having gone to the trouble of laying it out, it seems like it would be a waste not to do that, don't you think?

Not necessarily. I think it’s perfectly fine to be uninterested in helping you fix the errors I discern in the understanding of my argument, unless I had already gone out of my way to give you reason to believe I would if you laid out your understanding for me. Especially if I don’t think you’ll be completely charitable.

I haven’t gone out of my way to give you reason to believe I would, since I wasn’t sure at the time, but I’ll state my stance explicitly now. This conversation has been fascinating to me. It has also been a bit fatiguing, and I’m unsure of how long I want to continue this. To the extent that it actually seems we can come to the bottom of our disagreement, I am interested in continuing. If we get to the point where you’re interested in hearing it and I think it will be fruitful, I will try to explain the difference between my view and your attempt to describe them.

As I see it now, we can’t get there until I understand why you treat what I see as “privately holding my beliefs, and not working to hide them from (possibly fallacious) inference” as if it is “actively presupposing that my beliefs are correct, and judging anyone who disagrees as ‘below me’”. I also don’t think we can get there until we can agree on a few other things that I’ve brought up and haven’t seen addressed.

Either way, thanks for the in depth engagement. I do appreciate it.

Replies from: gjm
comment by gjm · 2017-01-31T03:21:53.100Z · LW(p) · GW(p)

On "being seen as rude": I beg your pardon, I was misremembering exactly what I had written at each point. However, I still can't escape the feeling that you are either misunderstanding or (less likely) being deliberately obscure, because what you actually say about this seems to me to assume that I was presenting "being seen as rude" as a drawback of doing what I called "external #2", whereas what I was actually saying is that one problem with "external #2" is that it forces someone who disagrees to do something that could be seen as rude; that's one mechanism by which the social pressure you mentioned earlier is applied.

To the extent that it actually seems we can come to the bottom of our disagreement, I am interested in continuing.

Except that what you are actually doing is repeatedly telling me that I have not understood you correctly, and not lifting a finger to indicate what a correct understanding might be and how it might differ from mine. You keep talking about inferential distances that might prevent me understanding you, but seem to make no effort even to begin closing the alleged gap.

In support of this, in the other half of your reply you say I "seem to be acting as if it’s impossible to be on step two honestly and that I must be trying to hide from engagement if I am not yet ready to move on to step three"; well, if you say that's how it seems to you then I dare say it's true, but I am pretty sure I haven't said it's "impossible to be on step two honestly" because I don't believe that, and I'm pretty sure I haven't said that you "must be trying to hide from engagement" because my actual position is that you seem to be behaving in a way consistent with that but of course there are other possibilities. And you say that I "should probably make room for both possibilities" (i.e., that you do, or that you don't, see things I don't); which is odd because I do in fact agree that both are possibilities.

So. Are you interested in actually making progress on any of this stuff, or not?

Replies from: jimmy
comment by jimmy · 2017-01-31T19:18:54.182Z · LW(p) · GW(p)

In support of this, in the other half of your reply you say I "seem to be acting as if it’s impossible to be on step two honestly and that I must be trying to hide from engagement if I am not yet ready to move on to step three"; well, if you say that's how it seems to you then I dare say it's true, but I am pretty sure I haven't said it's "impossible to be on step two honestly" because I don't believe that, and I'm pretty sure I haven't said that you "must be trying to hide from engagement" because my actual position is that you seem to be behaving in a way consistent with that but of course there are other possibilities. And you say that I "should probably make room for both possibilities" (i.e., that you do, or that you don't, see things I don't); which is odd because I do in fact agree that both are possibilities.

Right. I’m not accusing you of doing it. You didn’t say it outright, I don’t expect you to endorse that description, and I don’t see any reason even to start to form an opinion on whether it accurately describes your behavior or not. I was saying it as more of a “hey, here’s what you look like to me. I know (suspect?) this isn’t what you look like to you, so how do you see it and how do I square this with that?”. I just honestly don’t know how to square these things.

If, hypothetically, I’m on step two because I honestly believe that if I tried to explain my views you would likely prematurely assume that you get it and that it makes more sense to address this meta level first, and if, hypothetically, I’m even right and have good reasons to believe I’m right… what’s your prescription? What should I do, if that were the case? What could I do to make it clear that I am arguing in good faith, if that were the case?

So. Are you interested in actually making progress on any of this stuff, or not?

If you can tell me where to start that doesn’t presuppose that my beliefs are wrong or that I’ve been arguing in bad faith, I would love to. Where would you have me start?

Replies from: gjm
comment by gjm · 2017-01-31T23:30:24.176Z · LW(p) · GW(p)

I just honestly don't know how to square these things.

Whereas I honestly don't know how to help you square them, because I don't see anything in what I wrote that seems like it would make a reasonable person conclude that I think it's impossible to be on your "step 2" honestly, or that I think you "must be trying to hide from engagement" (as opposed to might be, which I do think).

If [...] I honestly believe that [...] you would likely prematurely assume that you get it [...] what's your prescription? [...] What could I do to make clear that I am arguing in good faith [...]?

My general prescription for this sort of situation (and I remark that not only do I hope I would apply it with roles reversed, but that's pretty much what I am doing in this discussion) is: proceed on the working assumption that the other guy isn't too stupid/blinkered/crazy/whatever to appreciate your points, and get on with it; or, if you can't honestly give that assumption high enough probability to make it worth trying, drop the discussion altogether.

(This is also, I think, the best thing you could do to make it clear, or at any rate markedly more probable to doubtful onlookers, that you're arguing in good faith.)

If you can tell me where to start that doesn't presuppose that my beliefs are wrong or that I've been arguing in bad faith, I would love to. Where would you have me start?

The same place as I've been asking you to start for a while: you say I haven't understood some important parts of your position, so clarify those parts of your position for me. Adopt the working assumption that I'm not crazy, evil or stupid but that I've missed or misunderstood something, and take it from there. Sure, it might not work: I might just be too obtuse to get it; in that case that fact will become apparent (at least to you) and you can stop wasting your time. Or it might turn out -- as, outside view, it very frequently does when someone smart has partially understood something and you explain to them the things you think they've missed -- that I will understand; or -- as, outside view, is also not so rare -- that actually I understood OK already and there was some sort of miscommunication. In either of those cases we can get on with addressing whatever actual substantive disagreements we turn out to have, and maybe at least one of us will learn something.

(In addition to the pessimistic option of just giving up, and the intermediate option of making the working assumption that I've not understood your position perfectly but am correctible, there is also the optimistic option of making the working assumption that actually I've understood it better than you think, and proceeding accordingly. I wouldn't recommend that option given my impression of your impression of my epistemic state, but there are broadly-similar situations in which I would so I thought I should mention it.)

Replies from: jimmy
comment by jimmy · 2017-02-01T17:40:50.916Z · LW(p) · GW(p)

My general prescription for this sort of situation [...] is: proceed on the working assumption that the other guy isn't too stupid/blinkered/crazy/whatever to appreciate your points, and get on with it; or, if you can't honestly give that assumption high enough probability to make it worth trying, drop the discussion altogether.

All of the options you explicitly list imply disrespect. If I saw all other options as implying disrespect as well, I would agree that “if you can't honestly give that assumption high enough probability to make it worth trying, [it’s best to] drop the discussion altogether”.

However, I see it as possible to have both mutual respect and predictably counterproductive object level discussion. Because of this, I see potential for fruitful avenues other than “plow on the object level and hope it works out, or bail”. I have had many conversations with people whom I respect (and who by all means seem to feel respected by me) where we have done this with good results - and I’ve been on the other side too, again, without feeling like I was being disrespected.

Your responses have all been consistent with acting like I must be framing you as stupid/blinkered/crazy/otherwise-unworthy-of-respect if I don’t think object level discussion is the best next step. Is there a reason you haven’t addressed the possibility that I’m being sincere and that my disinterest in “just explaining my view” at this point isn’t predicated on me concluding that you’re stupid/blinkered/crazy/otherwise-unworthy-of-respect? Even to say that you hear me but conclude that I must be lying/crazy since that’s obviously too unlikely to be worth considering?

The same place as I've been asking you to start for a while: [...] clarify those parts of your position for me. Adopt the working assumption that I'm not crazy, evil or stupid but that I've missed or misunderstood something, and take it from there. Sure, it might not work: I might just be too obtuse to get it; in that case that fact will become apparent (at least to you) and you can stop wasting your time.

The thing is, that does presuppose that my belief that “in this case, as with many others with large inferential distance, trying to simply clarify my position will result in more misunderstanding than understanding, on expectation, and therefore is not a good idea - even if the other person isn’t stupid/blinkered/crazy/otherwise-undeserving-of-respect” is wrong. Er.. unless you’re saying “sure, you might be right, and maybe it could work your way and couldn’t work my way, but I’m still unwilling to take that seriously enough to even consider doing things your way. My way or it ain’t happenin’.”

If it’s the latter case, and if, as you seem to imply, this is a general rule you live by, I’m not sure what your plan is for dealing with the possibility of object level blind spots - but I guess I don’t have to. Either way, it’s a fair response here, if that’s the decision you want to make - we can agree to disagree here too.

Anyway, if you’re writing all these words because you actually want to know how the heck I see it, then I’ll see what I can do. It might take a while because I expect it to take a decent amount of work and probably end up long, but I promise I will work at it. If, on the other hand, you’re just trying to do an extremely thorough job at making it clear that you’re not closed to my arguments, then I’d be happy to leave it as “you’re unwilling to consider doing things my way”+”I’m unwilling to do things your way until we can agree that your way is the better choice”, if that is indeed a fair description of your stance.

(Sorta separately, I’m sure I’d have a bunch of questions on how you see things, if you’d have any interest in explaining your perspective)

Replies from: gjm
comment by gjm · 2017-02-01T18:33:37.830Z · LW(p) · GW(p)

All the options you explicitly list imply disrespect

Well, the one I'm actually proposing doesn't, but I guess you mean the others do. I'm not sure they exactly do, though I certainly didn't make any effort to frame them in tactfully respect-maximizing terms; in any case, it's certainly not far off to say they all imply disrespect. I agree that there are situations in which you can't explain something without preparation and no disrespect to the other guy is implied; but this isn't one of them, because what happened was

  • jimmy says some things
  • gjm responds
  • jimmy starts saying things like "Before engaging with why you think my argument is wrong, I want to have some indication that you actually understand what my argument is, that's all, and I haven't seen it."

rather than, say,

  • jimmy says "so I have a rather complicated and subtle argument to make, so I'm going to have to begin with some preliminaries".

When what happens is that you begin by making your argument and then start saying: nope, you didn't understand it -- and when your reaction to a good-faith attempt at dealing with the alleged misunderstanding is anything other than "oh, OK, let me try to explain more clearly" -- I think it does imply something like disrespect; at least, as much like disrespect as those options I listed above. Because what you're saying is: you had something to say that you thought was appropriate for your audience, and not the sort of thing that needed advance warning that it was extra-subtle; but now you've found that I don't understand it and (you at least suspect) I'm not likely to understand it even if you explain it.

That is, it means that something about me renders me unlikely -- even when this is locally the sole goal of the discussion, and I have made it clear that I am prepared to go to substantial lengths to seek mutual understanding -- to be able to understand this thing that you want to say, and that you earlier thought was a reasonable thing to say without laying a load of preparatory groundwork.

Is there a reason you haven't addressed the possibility that [...] my disinterest [...] isn't predicated on me concluding that you're stupid/blinkered/crazy/otherwise-unworthy-of-respect?

See above for why I haven't considered it likely; the reason I haven't (given that) addressed it is that there's never time to address everything.

If there is a specific hypothesis in this class that you would like us to entertain, perhaps you should consider saying what it is.

The thing is, that does presuppose that my belief that [...] is wrong.

No, it presupposes that it could be wrong. (I would say it carries less presumption that it's wrong than your last several comments in this thread carry presumption that it's right.) The idea is: It could be wrong, in which case giving it a go will bring immediate benefit; it could be right but we could be (mutually) reasonable enough to see that it's right when we give it a go and that doesn't work, in which case giving it a go will get us past the meta-level stuff about whether I'm likely to be unable to understand. Or, of course, it could go the other way.

I'm not sure what your plan is for dealing with the possibility of object-level blind spots

When one is suspected, look at it up close and see whether it really is one. Which, y'know, is what I'm suggesting here.

if you're writing all these words because you actually want to know how the heck I see it [...] I expect it to take a decent amount of work

What I was hoping to know, in the first instance, is what I have allegedly misunderstood in what you wrote before. You know, where you said things of the form "your description doesn't even contain my actual reason for saying X" -- which I took, for reasons that still look solid to me, to indicate that you had already given your actual reason.

If the only way for you to explain all my serious misunderstandings of what you wrote is for you to write an effortful lengthy essay about your general view ... well, I expect it would be interesting. But on the face of it that seems like more effort than it should actually take. And if the reason why it should take all that effort is that, in essence, I have (at least in your opinion) understood so little of your position that there's no point trying to correct me rather than trying again from scratch at much greater length, then I honestly don't know why you're still in this discussion.

I'm sure I'd have a bunch of questions on how you see things, if you'd have any interest in explaining your perspective

I am happy to answer questions. I've had it pretty much up to here (you'll have to imagine a suitable gesture) with meta-level discussion about what either of us may or may not be capable of understanding, though, so if the questions you want to ask are about what you think of me or what I think of you or what I think you think I think you think I am capable of understanding, then let's give that a miss.

Replies from: jimmy
comment by jimmy · 2017-02-01T23:09:22.371Z · LW(p) · GW(p)

rather than, say, jimmy says "so I have a rather complicated and subtle argument to make, so I'm going to have to begin with some preliminaries".

I suppose I could have said “so I have a rather complicated and subtle argument to make. I would have to begin with some preliminaries and it would end up being kinda long and take a lot of work, so I’m not sure it’s worth it unless you really want to hear it”, and in a lot of ways I expect that would have gone better. I probably will end up doing this next time.

However, in a couple of key ways it wouldn’t have, which is why I didn’t take that approach this time. And that itself is a complicated and subtle argument to make.

EDIT: I should clarify. I don't necessarily think I made the right choice here, and it is something I'm still thinking about. However, it was an explicit choice and I had reasons.

When what happens is that you begin by making your argument and then start saying: nope, you didn't understand it -- and when your reaction to a good-faith attempt at dealing with the alleged misunderstanding is anything other than "oh, OK, let me try to explain more clearly" -- I think it does imply something like disrespect; at least, as much like disrespect as those options I listed above.

Right, and I think this is our fundamental disagreement right here. I don’t think it implies any disrespect at all, but I’m happy to leave it here if you want.

Because what you're saying is: [...] That is, it means that something about me renders me unlikely [...] to be able to understand this thing that you want to say, and that you earlier thought was a reasonable thing to say without laying a load of preparatory groundwork.

I see where you’re coming from, but I don’t think arguments with subtle backing always need that warning, nor do they always need to be intended to be fully understood in order to be worth saying. This means that “I can’t give you an explanation you’ll understand without a ton of work” doesn’t single you out nearly as much as you’d otherwise think.

I can get into this if you’d like, but it’d just be more meta shit, and at this point my solution is starting to converge with yours: “do the damn write up or shut up, jimmy”

See above for why I haven't considered it likely; the reason I haven't (given that) addressed it is that there's never time to address everything.

I agree that you can’t address everything (nor have I), but this one stands out as the one big one I keep getting back to - and one where if you addressed it, this whole thing would resolve pretty much right away.

It seems like now that you have, we’re probably gonna end up at something more or less along the lines of “we disagree whether “mutual respect” and “knowably unable to progress on the object level” go together to a non-negligible amount, at least as it applies here, and gjm is uninterested in resolving this disagreement”. That’s an acceptable ending for me, so long as you know that it is a genuine belief of mine and that I’m not just trying to weasel around denying that I've been showing disrespect and shit.

No, it presupposes that it could be wrong.

I thought I addressed that possibility with the "err, or this" bit.

When one is suspected, look at it up close and see whether it really is one. Which, y'know, is what I'm suggesting here.

I was talking about the ones where that won’t work, which I see as a real thing though you might not.

If the only way for you to explain all my serious misunderstandings of what you wrote is for you to write an effortful lengthy essay about your general view ... well, I expect it would be interesting.

If I ever end up writing it up, I’ll let you know.

But on the face of it that seems like more effort than it should actually take. And if the reason why it should take all that effort is that, in essence, I have (at least in your opinion) understood so little of your position that there's no point trying to correct me rather than trying again from scratch at much greater length then I honestly don't know why you're still in this discussion.

:)

That’d probably have to be a part of the write up, as it calls on all the same concepts.

comment by jimmy · 2017-01-30T22:43:09.308Z · LW(p) · GW(p)

"Attempting to seize the intellectual high ground" = [...] any response I make can be answered with "No, sorry, I was right: you didn't understand my argument" -- regardless of what I actually have understood or not understood.

The first part I feel like I’ve already addressed and haven’t seen a response to (the difference between actively staking claims and merely speaking from a position from which you choose to draw (perhaps fallacious) inferences that you then treat as active claims).

The second part is interesting though. It’s pretty darn answerable to me! I didn’t realize that you thought that I might hear an answer that perfectly paces my views and then just outright lie “nope, that’s not it!”. If that’s something you think I could even conceivably do, I’m baffled as to why you’d be putting energy into interacting with me!

But yes, it does place the responsibility on me of deciding whether you understand my pov and reporting honestly on the matter. And yes, not all people will want to be completely honest on the matter. And yes, I realize that you don’t have reason to be convinced that I will be, and that’s okay.

However, it would be very stupid of me not to be. I can hide away in my head for as long as I want: no matter how hard you try, and no matter how obvious the signs become, if I’m willing to ignore them all, I can believe my believies for as long as I want and pretend that I’m some sort of wise guru on the mountaintop, and that everyone else just lacks my wisdom. You’re right: if I want to hide from the truth and never give you the opportunity to convince me that I’m wrong, I can. And that would be bad.

But I don’t see what solution you have to this: if the inferential distance is larger than you realize, then your method of “then explain what it is that I allegedly didn't understand” can’t work, because if you’re still expecting a short inferential distance you will have to conclude either that I’m speaking gibberish or that I’m wrong - even if I’m not.

It’s like the “double crux” thing. We’re working our way down the letters, and you’re saying “if you think I don’t understand your pov you should explain where I’m wrong!” and I’m saying “if I thought that you would be able to judge what I’m saying without other hidden disagreements predictably leading to faulty judgements, then I would agree that is a good idea”. I can’t just believe it’s a good idea when I don’t, and yes, that looks the same as “I’m unwilling to stick my neck out because I secretly know I’m wrong”. However, it’s a necessary thing whenever the inferential distance is larger than one party expects, or when one party believes it to be so (and if you don’t believe that I believe that it is… I guess I’d be interested in hearing why). We can’t shortcut the process by pointing at it being “unanswerable”. It is what it is.

It’d be nice if this weren’t ever an issue, but ultimately I think it’s fine because there’s no free lunch. If I feel cognitive dissonance and don’t admit that you have a point, it tends to show, and that would make me look bad. If it doesn’t show somehow, I still fail to convince anyone of anything. I still fail to give anyone any reason to believe I’m some wise guru on the mountaintop even if I really really want them to believe that. It’s not going to work, because I’m not doing anything to distinguish myself from that poser that has nothing interesting to say.

If I want to actually be able to claim status, and not retreat to some hut muttering at how all the meanies won’t give me the status that I deserve, I have to actually stick my neck out and say something useful and falsifiable at some point. I get that - which is why I keep making the distinction between actively staking claims and refusing to accept false presuppositions.

The thing is, my first priority is actually being right. My second priority is making sure that I don’t give people a reason to falsely conclude that I’m wrong and that I am unaware of and/or unable to deal with the fact that they think that. My third priority is that I actually get to speak on the object level and be useful. I’m on step two now. You seem to be acting as if it’s impossible to be on step two honestly and that I must be trying to hide from engagement if I am not yet ready to move on to step three with you. I don’t know what else to tell you. I don’t agree.

If you don’t want to automatically accept that I see things you don’t (and that these things are hard to clearly communicate to someone with your views), then that’s fine. I certainly don’t insist that you accept that I do. Heck, I encourage skepticism. However, I’m not really sure how you can know that I don’t, and it seems like you should probably make room for both possibilities if you want to have a productive conversation with me (and it’s fine if you don’t).

The main test that I use in distinguishing between wise old men on mountaintops and charlatans is whether my doubt in them provokes signs of cognitive dissonance - but there are both false positives and false negatives there. A second test I use is to see whether this guy has any real-world results that impress me. A third is to see whether I can get him to say anything useful to me. A fourth is whether there are in fact times that I end up eventually seeing things his way on my own.

It’s not always easy, and I’ve realized again and again that even foolish people are wiser than I give them credit for, so at this point I’m really hesitant to rule that out just so that I can actively deny their implicit claim to status. I prefer to just not actively grant it, and say something like “yes, you might be really wise, but I can’t see that you’re not a clown, and until I do I’m going to have to assign a higher probability to the latter. If you can give me some indication that you’re not a clown, I would appreciate that, and I understand that if you don’t it is no proof that you are”.

comment by Lumifer · 2017-01-29T01:04:18.869Z · LW(p) · GW(p)

A long sequence of bad arguments, made quickly but slower to answer: that is exactly what a Gish gallop is.

I think you're much confused between arguments and evidence in support of a single argument.

Replies from: gjm
comment by gjm · 2017-01-29T01:20:22.325Z · LW(p) · GW(p)

If you go back through my comments on LW (note: I am not actually suggesting you do this; there are a lot of them, as you know) you will find that in this sort of context I almost always say explicitly something like "evidence and arguments", precisely because I am not confused about the difference between the two. Sometimes I am lazy. This was one of those times.

Bad arguments and bad evidence can serve equally well in a Gish gallop.

comment by gjm · 2017-01-31T17:42:08.763Z · LW(p) · GW(p)

Several state attorneys general have initiated them.

Could you give some examples? I'm failing to find any instances where any such action has actually been brought.

What I can find is an investigation by several state AGs into ExxonMobil, which appears to be focusing on what EM's management knew about climate change; there's some suggestion that they're now digging into possible misrepresentations of how big oil reserves are, presumably with a view to arguing that they misled investors. Note that investigating what Exxon management knew about climate change is exactly what we should not expect if this were really about criminalizing skepticism about global warming; the whole point is that allegedly Exxon management tried to spread global warming skepticism while knowing it was probably wrong.

I think in practice it's likely to be [...]

Well, obviously neither your guesses about the future nor mine are much evidence here. The laws prosecutions might use, so far as I know, require evidence of actual dishonesty.

I wasn't aware courts had access to sincerity brain-scanners.

It is absolutely commonplace for legal guilt to depend on state of mind, even though courts don't have brain-scanners, telepaths, etc.

comment by satt · 2017-01-29T23:01:35.323Z · LW(p) · GW(p)

OK, and how is this distinction supposed to manifest in practice?

One distinction is that someone accused under (2) could defend themselves by showing that they genuinely didn't believe anyone was paying attention to their expression of disbelief in global warming, whereas that defence presumably wouldn't be open to them under (1).

[..] in any case when (2) happens who exactly will be forbidden to assert that global warming isn't real? Does it matter if [...]?

Since it suffices to give one operationalizable difference between (1) & (2) for gjm's claim of a distinction to go through, it's not necessary to answer these questions about how a specific practical implementation would work.

Note that the people doing the prosecution haven't presented any evidence of [...]

To which specific people are you referring? To which specific prosecution are you referring?

Thus it is clear that (2) is little more than a fairly transparent excuse to do (1).

It sure isn't clear to me. Your basis for saying so is your (implied) failure to think of a difference between (1) & (2) that could manifest in practice; that you don't know specifically how (2) would be implemented in practice; and an unverifiably generic assertion that unspecified people haven't presented evidence of "promulgation of assertions [etc.]" "beyond the fact that the people in question are asserting that global warming isn't real". Weak stuff.

Given that gjm has just demonstrated that (3) is false,

You do nothing to substantiate that, and in light of your own relatively poor track record, I'm not going to take it on trust. And so...

I'm inclined to believe the real reason for your bias is that you belong to a tribe where agreeing with gjm's conclusion is high status.

...is not an accusation I take seriously, especially since you're a sockpuppeteer with a history of downvote rampages, including against gjm. Incline all you want, mate.

comment by gjm · 2017-01-25T10:55:10.371Z · LW(p) · GW(p)

Catholic theologians are experts in what the Roman Catholic Church believes. If you claim that the RCC isn't really trinitarian, then "bullshit, look at what all the Catholic theologians say" is a perfectly good response.

They claim (or at least let's suppose arguendo that they do) to be experts on the actual facts about God. It turns out they're wrong about that. So ... is their situation nicely parallel to that of climate scientists?

Why, no. Look at all the people in the world who claim to be God-experts and have studied long and hard, got fancy credentials, etc. That includes the Catholic theologians. It also includes the Protestant theologians (who are almost all also trinitarian but disagree about a bunch of other important things). And it includes Islamic scholars, who are very decidedly not trinitarians. It includes Hindu religious experts, whose views are more different still. By any reasonable criterion it also includes philosophers specializing in philosophy of religion, whose views are very diverse.

This is very much not the same as the situation with climate science. (And not only because the term "climate science" has been somehow coopted; there aren't university departments doing heretical climate science and using a different name for it, so far as I can tell.)

Eric Raymond's list of "warning signs" is pretty bullshitty. One warning sign with his warning signs: he prefaces it with a brief list of past alleged junk-science panics, and at least some of the things he lists are definitely not junk science; he only has the luxury of pretending they are because governments listened to the scientists and did something about the things they were warning about. It's amusing that he lists "Past purveyors of junk science do not change their spots" among his signs, incidentally, because to a great extent the organizations (and in some cases the people) supporting global warming skepticism are the same ones that argued that smoking doesn't cause cancer. Why? Because their opinions are on sale to the highest corporate bidder.

comment by niceguyanon · 2017-01-24T18:21:30.736Z · LW(p) · GW(p)

Suggestion to sticky the welcome thread. Stickying the welcome thread to the sidebar would encourage participation/comments/content. And perhaps in the future add emphasis on communication norms to the thread, specifically that negative reception and/or lack of reception is more obvious on LessWrong – so have thick skin and do not take it personally. I'd imagine that quality control will be what it has always been: critical comments.

comment by Viliam · 2017-01-26T17:27:29.155Z · LW(p) · GW(p)

I have just read a debate about whether high-IQ kids should be allowed to attend special schools, and the debate was predictable. So I used this as an opportunity to summarize the arguments against "smart segregation". (The arguments in favor of it seem quite straightforward: better education, less bullying, social and professional company of equals.) Here are the results; please tell me if some frequently-made argument is missing.

Note: different arguments here contradict each other, which is okay, because they are typically not made by the same people.

1 -- There is no such thing as "smart children", because...

1.A -- ...everyone who believes themselves to be smart is actually just a conceited fool. Parents who believe that their children are smart are just typical parents uncritical about their children. (Insert anecdotal evidence about a kid from your elementary school who believed himself to be super smart, and so did his parents, but he was obviously a moron.)

1.B -- ...you cannot measure smartness on a single scale. There are many kinds of intelligence; everyone is special in a different way. Someone is better at math, but someone else may be better at dancing or spirituality. Belief in g-factor is debunked pseudoscience; it is racist and sexist and shouldn't be given a platform. (Quote S.J.Gould and/or insert example of Hitler believing some people were better than others.)

1.C -- ...you cannot measure smartness fairly. If a child is tested as smart, it only means they have rich parents who were able to buy them tutors, made them cram for the tests, and maybe even bribed the test evaluators. Also, it is known that tests provide unfair advantage to white cishet male children.

1.D -- A weaker version of the previous statement is that if you make programs for smart children, the children from poor or minority families will not be able to participate in them, for various reasons. This would leave them in a worse situation than they are in now, because if it becomes common knowledge that such programs exist, the fact that a child didn't participate in one would be taken as evidence against being smart. That is, an average smart child would actually be harmed by such a policy.

2 -- Having smart children together with dumb ones is better for the smart children, because...

2.A -- ...it will improve the smart children's social skills. The most important social skill is being able to interact with average people, because they make up a majority of the population, so you will interact with them most frequently as an adult. (This assumes that adult people actually interact with a random sample of the population, as opposed to living in a bubble of their profession or socioeconomic level, in both their professional and private lives.)

2.B -- ...it will allow the smart children to learn important things from the dumb ones, other than the academic skills. (This usually assumes some kind of cosmic justice, where smaller intelligence is balanced by greater emotionality or spirituality, so the dumb children can provide value that the smart children would not be able to provide to each other.)

2.C -- ...it will allow the smart children to have contacts outside of their bubble.

2.D -- ...the smart children can tutor the dumb ones, which will be an enriching experience for both sides. Explaining stuff to other people deepens your own understanding of the topic.

3 -- Having smart children together with dumb ones is better for the dumb children, because...

3.A -- ...having the smart children in the classroom will provide inspiration for the rest of the class.

3.B -- ...the smart children can tutor the dumb ones.

3.C -- ...it will allow the dumb children to have contacts outside of their bubble.

3.D -- ...the smart children in the classroom will motivate the teachers; having motivated teachers at school will benefit all students.

3.E -- ...the parents of the smart children (presumably themselves smart and rich) will care about improving the quality of education in their child's school, which will benefit all students.

4 -- We should actually not optimize for the smart children, even if it would be a net benefit, because...

4.A -- ...the whole "problem" is made up anyway, and a truly smart child will thrive in any environment. Optimizing for smart children should be such low priority that you should be ashamed for even mentioning the topic. (Insert anecdotal evidence about a smart kid who studied at average school, and became successful later.) Even the argument about bullying is invalid, because bullying happens among smart children, too.

4.B -- ...smart children usually have rich parents. Creating better educational opportunities for smart children therefore on average increases income inequality, which is bad.

Replies from: gjm
comment by gjm · 2017-01-26T18:39:46.848Z · LW(p) · GW(p)

I haven't seen a lot of arguments about this issue. Here are some other anti-segregation arguments that occur to me; I make no guarantee that they are common. I do not necessarily endorse them any more than Viliam endorses the ones he mentions. I do not necessarily endorse the conclusion they (in isolation) point towards any more than Viliam does.

I'm going along with Viliam's smart/dumb terminology and dichotomous treatment for simplicity; I am well aware, and I'm sure Viliam is too, that actually it doesn't make much sense to classify every pupil as "smart" or "dumb".

2.E -- ...the smart children will grow up with more awareness that not everyone is like them, and a better idea of what other people outside their bubbles are like. (Not the same as 2.C; it applies to some extent even if smart and dumb never even speak to one another.)

2.F -- ...a certain fraction of dumb children is necessary for various sorts of extracurricular activity mostly but not exclusively liked by dumb children to be sustainable, so without the dumb ones the smart ones who would have benefited from those activities will be left out.

3.F -- ...if they are segregated, better teachers will likely want to avoid the "dumb" schools, so the "dumb" children will get a worse education.

3.G -- (same as 2.E with signs reversed)

3.H -- ...the mostly smart and rich people in government will care about improving the quality of education in all schools, not just the ones attended by the children of People Like Them. (Closely related to 3.E but not the same.)

3.I -- ...the smart children tend to be better behaved too, and a school consisting entirely of dumb children will have serious behaviour problems. (Whether this is better overall depends on how behaviour varies with fraction of smart/dumb children, among other things.)

3.J -- ...a certain fraction of smart children is necessary for various sorts of extracurricular activity mostly but not exclusively liked by smart children to be sustainable, so without the smart ones the dumb ones who would have benefited from those activities will be left out.

5 -- Having smart children together with dumb ones is better for everyone, because ...

5.A -- ...segregation means that on average schools will be further from homes (because children will less often just be going to the nearest school), which means more travel; hence more time wasted, more pollution, more congestion on the roads, etc.

5.B -- ...segregation in schools will lead to segregation of communities as parents who expect their children to go to the "smart" schools move nearer them, and likewise for the "dumb" schools, and more-segregated communities means people more completely in bubbles, less empathy for one another, etc., destabilizing society. (Mumble mumble Trump Brexit mumble out-of-touch elites mumble mumble.)

5.C -- ...parents whose children go to the same school will interact at least a bit for school-related reasons, so non-segregated schools improve social cohesion and cross-bubble awareness by making smart and dumb parents talk to one another from time to time.

5.D -- ...children near the smart/dumb borderline (wherever it's drawn) may do worse, because e.g. if they're generally smart but somewhat worse at one particular thing, there won't be a class of dumbish people for them to do it in, and likewise if they're generally dumb but somewhat better at one particular thing; particularly sad will be the case of a child who develops late or has a bad year for some reason and then finds they're in a school that just doesn't have any lessons that suit the level they're at.

Replies from: Viliam
comment by Viliam · 2017-01-27T09:54:00.491Z · LW(p) · GW(p)

Thanks! What are your central examples of the activities in 2.F? Sport? Craft? Something else?

I think I never actually met anyone using 5.B. Probably because using this argument requires assuming that there are enough genuinely smart people to create a community when given a chance; and most people around me seem to believe that high IQ doesn't really matter, and on the "unspecified but certainly very high" level where it does, those people are too few, not enough to create a functional bubble. Alternatively, other people believe that every above-average high school or every university is already de facto a school for high-IQ kids, and the IQ levels above this line don't actually make a difference, so all such bubbles already exist. -- No one seems to believe that there could be a meaningful line drawn at IQ maybe 150, where the people are too few to create a (non-professional) community spontaneously, but sufficiently different from the rest of the population that they might enjoy actually living in such community if given a chance.

Replies from: gjm
comment by gjm · 2017-01-27T10:45:44.277Z · LW(p) · GW(p)

For 2.F I was indeed thinking sport, but actually I have very little idea whether such activities really exist and if so what they actually are. Plenty of smart kids like sport.

requires assuming that there are enough genuinely smart people to create a community

We're already assuming that there are enough smart-in-whatever-sense people to have their own school. Depending on where the borderline between "smart" and "dumb" is drawn, there may be more or fewer "smart" schools, but each one will have to be of reasonable size.

Replies from: Viliam
comment by Viliam · 2017-01-27T15:41:15.578Z · LW(p) · GW(p)

Well, specific IQ levels are usually not mentioned in the debates I have seen. Which of course only makes the debates more confused. :(

When I think about it quantitatively, if we use Mensa as a Schelling point for "high IQ", then 2% of the population have IQ over 130, which qualifies them as Mensa-level smart. Two percent may not seem like much, but for example in a city with a population of half a million (such as where I live), that gives 10 000 people. To better visualize this number: if a 7-floor apartment building holds 20 apartments, with on average 2.5 people per apartment, that is 50 people per building, which gives us 200 buildings.

Of course this assumes, unrealistically, that Mensa could somehow convince all people in the city to take the test, and convince those who pass to move in together. But 200 buildings of Mensa-level people sounds impressive. (Well, if Mensa sounds impressive, which on LW it probably does not.)

Speaking of schools, let's say that people live about 70 years, but there are more young people than old people, so let's assume that for young people a year of age corresponds to 1/50 of the population. If there are 10 000 Mensa-level people in the half-million city, that makes 200 children for each grade. That's about 7 classrooms for each grade, depending on size. That's like two or three schools. Again, this depends on the assumption that Mensa could find those kids, and convince the parents to put them all into Mensa schools. (Which, under the completely unrealistic assumptions, could be built in the "Mensa district" of the city.)
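
(A quick back-of-envelope check of these numbers, as a minimal Python sketch; every input is one of the assumptions stated above -- the 2% cutoff, 20 apartments per building, the 1/50 age cohort -- plus an assumed classroom size of about 28. None of it is real data.)

```python
# Back-of-envelope check of the numbers above.
# All inputs are the comment's own assumptions, not real data.
city_population = 500_000
mensa_fraction = 0.02            # IQ > 130, i.e. the top 2%

mensa_people = city_population * mensa_fraction
print(mensa_people)              # 10000.0

people_per_apartment = 2.5
apartments_per_building = 20     # the 7-floor building assumed above
buildings = mensa_people / (people_per_apartment * apartments_per_building)
print(buildings)                 # 200.0

cohort_fraction = 1 / 50         # each year of age ~ 1/50 of the population
kids_per_grade = mensa_people * cohort_fraction
print(kids_per_grade)            # 200.0

classroom_size = 28              # assumed; "about 7 classrooms" implies ~28
print(kids_per_grade / classroom_size)   # ~7.1 classrooms per grade
```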

To make this happen would require a small miracle, but it's not completely impossible. Just making all people in one city interested in Mensa, making them take the test, and making them interested in moving to the "Mensa district" would require a lot of advertising. (And even then, there would be a lot of resistance.) But, hypothetically, if you had a millionaire who would build the new district, hire a few celebrities to popularize the idea, and perhaps sell or rent the apartments to high-IQ people only at a discount... it could happen. If 10% of the target population were convinced enough to move, you could get a "Mensa block" with 20 houses, 1 very small elementary school, and 1 very small high school. -- I am afraid this is the best possible outcome, and it already assumes a millionaire sponsor.

If you imagine a higher IQ level, such as IQ 150, even this is impossible.

So, while some people may fear that if we make smart people network with each other, they could take over the whole society, to me it feels like saying that as soon as a dozen LessWrong fans start regularly attending a meetup, they will become an army of beisutsukai and take over the world. Nice story, but not gonna happen. If this is the reason against schools for smart kids, the normies are safe.

Replies from: Lumifer, gjm
comment by Lumifer · 2017-01-27T17:26:26.816Z · LW(p) · GW(p)

specific IQ levels

You might find this interesting: The 7 Tribes of Intellect.

Replies from: Viliam
comment by Viliam · 2017-01-30T13:14:35.634Z · LW(p) · GW(p)

Well, I approximately agree, but that's just a classification of people into IQ layers. I'd like to go much further than that.

For example -- sorry, today I am too lazy to do a google hunt -- I think there was some research, probably by Terman, about why some high-IQ people succeed in life while others fail, often spectacularly. His conclusion was that it mostly depends on how well connected with other high-IQ people they are; most importantly, whether they come from a generally high-IQ family. (My hypothesis is that the effect is twofold: first, the high-IQ family itself is a small high-IQ society; second, the older family members have already been solving the problem of "how to find other high-IQ people" and can share their strategies and contacts with the younger members.)

If this is true (which I have no reason to doubt), then not allowing high-IQ children to associate with other high-IQ children is child abuse. It is like sending someone on a train that is predictably going to crash. I will charitably assume that most people participating in this form of child abuse are actually not aware of what they are doing, so I don't blame them morally... at least until the moment when someone tries to explain to them what the actual consequences of their actions are, and they just stick fingers in their ears and start singing "la la la, I don't hear you, elitism is always bad".

But perhaps a more important thing is this: the usual "compromise" solution of accepting that some children indeed are smarter than others, but solving it by simply having them skip a grade (that is, instead of the company of children with similar age and IQ, you give them the company of older children with average IQ, so that the "mental age" is kinda balanced) is just a short-term fix that fails in the long term. Yes, you are providing the children an appropriately mentally challenging environment, which is good. But what you don't give them is the opportunity to learn the coping skills that seem necessary for high-IQ people. So when they reach the stage where there is simply no value X such that an average X-year-old has the same mental level as the gifted person does now, the whole strategy collapses. (But by that time the gifted person is usually an adult, so everyone just shrugs and says "whatever, it's their fault". Ignoring the fact that society spent the whole previous time teaching them a coping strategy that predictably fails afterwards.)

So, I believe that for a high-IQ person there is simply no substitute for the company of intellectual peers; even older children will not do, because that is a strategy that predictably fails when the person reaches adulthood. Some kids are lucky, because their parents are high-IQ, because their parents have high-IQ friends, who probably have high-IQ children, so through this social network they can connect with intellectual peers. But high-IQ kids growing up without this kind of network... need a lot of luck.

Replies from: Lumifer
comment by Lumifer · 2017-01-30T16:24:37.410Z · LW(p) · GW(p)

If this is true (which I have no reason to doubt), then not allowing high-IQ children to associate with other high-IQ children is child abuse.

You do understand that "true" here means "we built a model where the coefficient for a factor we describe as 'connectedness' is statistically significant", right? I don't think throwing around terms like "child abuse" is helpful.

Also, do you think the existence of the internet helps with the problem you describe?

Replies from: Viliam
comment by Viliam · 2017-01-30T17:51:39.984Z · LW(p) · GW(p)

I don't think throwing around terms like "child abuse" is helpful.

Yeah, it's probably a strategic mistake to tell people plainly that they are doing a horrible thing. It will most likely trigger a chain of "I am a good person... therefore I couldn't possibly do a horrible thing... therefore what this person is telling me must be wrong", which prevents or delays the solution of the problem. Whether you discuss social deprivation of high-IQ children, or circumcision, or religious education, or whatever, you have to remember that people mostly want to maintain the model of the world where they are the good ones who never did anything wrong, even if it means ignoring all evidence to the contrary. Especially if their opinion happens to be a majority opinion.

It's just that on LW I try to tell it how I see it, ignoring the strategic concerns. As an estimate, let's say that if normal child development is 0 units of abuse, and feral children are 100 units of abuse, then depriving a high-IQ child of contact with other high-IQ children is about 1 unit of abuse. (I specifically chose feral children, because I believe it is an abuse of a similar kind, just of much smaller magnitude.) Sure, compared with many horrors that children sometimes suffer, this is relatively minor. However, people who systematically harm thousands of children in this way are guilty of child abuse. I mean those who campaign against the existence of high-IQ schools, or even make laws against them. (As an estimate, I would guess that at least one in a hundred such children will commit suicide, or at least seriously consider it, as a result of the social deprivation.)

Also, do you think the existence of the internet helps with the problem you describe?

I think it helps, but not sufficiently. I may be generalizing from one example here, but an internet connection simply doesn't have the same quality as personal interaction. That's why e.g. some people attend LW meetups instead of merely posting here. And generally, why people still bother meeting in person, despite almost everyone (in developed countries) having an internet connection. -- As a high-IQ child, you may find other high-IQ children on the internet, but unless there is a specialized school for such children, you still have to spend most of your time without them.

Another problem is that the topic is mostly taboo, so even with the internet you may simply not know what to look for. To explain, I will use homosexuality as an analogy (obviously, it is not the same situation, I just find this one aspect similar) -- if you know that homosexuality is a thing, and if you know that you happen to be gay, then you can just open google and look for the nearest gay club, and if you are lucky, then something like that exists in your town. But imagine that you happen to be gay, but you never heard about homosexuality as a concept. No one ever mentioned it; you are not aware that it exists, or that other people may be gay, too. All you know is that other boys of your age seem to be interested in girls, and you don't quite understand why. So, you open google and... what? "Why am I not so interested in girls?" But in a society where this topic is taboo, the first 100 pages of google results will probably contain information that for different boys attraction to girls develops at different ages, or that maybe this is a sign from God to become a celibate priest, or that you should lower your unrealistic standards for female beauty, or that you need to learn more social skills and go out more and meet more girls until you find one that attracts you... some people will say there is something wrong with you, some people will say everything is perfectly okay and all problems will solve themselves in time, but no one will even hint that maybe your lack of attraction to girls is because you are gay, which is a fact of life, and that a solution for such people is to find the company of other gay people.

Analogously, if you have a very high IQ, and your problem is that there are no people with sufficiently high IQ around you, but you are not aware of the fact that a very high IQ is a thing, what will you write in google? "How not to feel alone?" "How to communicate with people?" "How to feel understood?" And you will get advice like "when interacting with people, you should talk less, and listen more", which is all perfectly true and useful, but all it does is help you connect to average people on their level, which is not the thing you are starving for. (It's like a recipe for how a gay man can maintain an erection while having sex with a girl. It may be technically perfect advice, it may even work; it's just not the thing he truly desires. Similarly, the high-IQ person may learn to be able to maintain a conversation with an average person, talking on the average person's terms; it's just not fulfilling their deep intellectual desires.) Some of the advice will tell you the problem is in you: you don't have enough social skills, you are too proud, you have unrealistic expectations of human interaction; other advice will tell you to calm down because everything is going to magically become okay as soon as you stop worrying. But if you happen to be a high-IQ person, the advice you need is probably "you feel different from other people because you are different, but don't worry, there is a 'high IQ club' down the street, you may find similar people there". (Which is what Mensa tries to be. Except that Mensa is for people with IQ 130, so if you happen to have IQ 160, you will feel just as lonely in Mensa as an average Mensa member feels among the normies.)

So, analogously to gays, we need to make it generally known that "having a high IQ" is a thing, and that "meeting other people with a similar level of IQ" is the only solution that actually works. And then people will know what to type into google. And then having the internet will solve this problem. But most people still have beliefs that are analogous to "homosexuality is a sin, it is unnatural, it shouldn't be encouraged, it corrupts the youth, it will make God send floods on us, you just have to pray and become hetero"; except that they say "IQ is a myth, it is an unhealthy elitism, there are multiple intelligences and everyone has one, IQ doesn't mean anything because EQ is everything and you can increase your EQ by reading Dale Carnegie, and if you believe in IQ you will develop a fixed mindset and fail at life". And you may start believing it, until you happen to stumble upon e.g. a LW meetup and experience the best intellectual orgasm of your life, and you suddenly easily click with people and develop fulfilling relationships.

(Where the analogy with gays fails is that people usually don't create fake gay clubs full of hetero people, but there are groups of people who believe themselves to be smart even when they are not. So a person googling for something like a high IQ club may still be disappointed with the results.)

Replies from: Lumifer
comment by Lumifer · 2017-01-30T19:36:11.425Z · LW(p) · GW(p)

ignoring the strategic concerns

I don't think the problem is strategic concerns, I think the problem is connotations.

The connotations for child abuse are "call the cops and/or child protective services which will take the child away from the parents and place him/her into foster care" and "put the parents in jail and throw away the key". Child abuse is not merely bad parenting.

you may simply not know what to look for

What do you mean? Finding your tribe / peer group isn't a matter of plopping the right search terms into Google. I think it mostly works through following the connections from things and people you find on the 'net. If you consistently look for "smarter" and follow the paths to "more smarter" X-), you'll end up in the right area.

internet connection simply doesn't have the same quality as personal interaction

Well, of course. But imagine things before the internet :-/

Replies from: Viliam
comment by Viliam · 2017-01-31T10:02:38.944Z · LW(p) · GW(p)

But imagine things before the internet :-/

Yeah, the Dark Ages before the 1980s were a cruel place to live.

I think it mostly works through following the connections from things and people you find on the 'net. If you consistently look for "smarter" and follow the paths to "more smarter" X-), you'll end up in the right area.

I tried this, and maybe I was doing it wrong or maybe just not persistently enough, but essentially my findings were of two kinds:

1) Instead of "rational smart" I found "clever smart" people. The kind that have huge raw brainpower, but use it to study theology or conspiracy theories. Sometimes it seemed to me that the higher people's IQ, the crazier they get. I mean, most of the conspiracy theories I knew, I knew because someone shared them on a Mensa forum, and not jokingly. Or the people who memorized half of the Bible, and could tell you all the connections and rationalizations for anything. They were able to win any debate, they had high status in their community, and they didn't have a reason to change this.

Essentially, before I found LW, I was using the wrong keyword to google. Instead of "highly intelligent" I actually wanted "highly intelligent and rational". But I didn't know how to express that additional constraint; it just felt like "highly intelligent without being highly stupid at the same time", but of course no one would understand what I meant by saying that. (Did I mean "even better knowledge of the Bible"? Nope. Oh, so "knowledge of how the illuminati and jews rule the world"? Eh, just forget it.)

2) A few (i.e. fewer than ten) individuals who were highly intelligent and rational, but they were all isolated individuals. Sometimes lonely and lost, just like me. Sometimes doing their own thing and being quite successful at it, but admitting that most people seem stupid or crazy (they would put it more diplomatically, of course) and that it is very rare to find an exception. The latter were usually very busy and seemed to prefer being left alone, so when I suggested something like connecting smart and rational people together, they were like "nah, I don't have time for that, I have already found my own thing that I am good at, it makes me happy, and that's the only rational way for a smart person to have a happy life". But I suspect it was simply "better to hope for nothing than to be disappointed".

Replies from: Lumifer
comment by Lumifer · 2017-01-31T18:42:41.487Z · LW(p) · GW(p)

This thread started with talking about establishing schools/communities/etc. of high-IQ people. Note: all high-IQ people. Now you are pointing out that IQ by itself is not sufficient -- you want people with both high IQ and appropriate culture/upbringing/interests.

Replies from: Viliam
comment by Viliam · 2017-02-01T00:22:40.915Z · LW(p) · GW(p)

I wonder... but yeah, this is extremely speculative... whether the reason why high-IQ people don't have more rationality could be analogous to why feral children don't have better grammar skills.

That is, whether putting high-IQ people together, for a few generations, would increase the fraction of rationalists among them.

Disconnected people don't create culture. High IQ is biological, but rationality is probably cultural.

But yeah... changing the topic, and the thread is too long already.

Replies from: Lumifer
comment by Lumifer · 2017-02-01T15:40:28.951Z · LW(p) · GW(p)

I don't know about that. Social skills and culture are not rationality; they are orthogonal to rationality.

Epistemic rationality is not cultural -- it's basically science, and science is based on matching actual reality (aka "what works"). There was a comment here recently about Newtonian physics being spread by the sword (the context was a discussion about how Christianity spread) which pointed out that physics might well have spread by cannons -- people who "believe" in Newtonian physics tend to have much better cannons than those who don't.

Instrumental rationality is not such a clear-cut case because culture plays a great role in determining acceptable ways of achieving goals. And real-life goal pursuit is usually more complicated than how it's portrayed on LW.

Replies from: Viliam
comment by Viliam · 2017-02-01T16:08:00.274Z · LW(p) · GW(p)

people who "believe" in Newtonian physics tend to have much better cannons than those who don't.

Yeah, I should have said "subculture" instead of "culture". Because as long as people at the key places in the country believe in physics, they can also bring victory for their physics-ignorant neighbors.

Epistemic rationality is not cultural -- it's basically science and science is based on matching actual reality

But you still learn science at school, and some people still decide that they e.g. don't believe in evolution, or believe in homeopathy. So although science means "matching the territory", the opinion that "matching the territory could be somehow important" is just an opinion that some people share and others don't, or some people use in some aspects of life and not in others, free-riding on the research of others.

Replies from: Lumifer
comment by Lumifer · 2017-02-01T16:51:20.581Z · LW(p) · GW(p)

you still learn science at school

No, you don't. You learn to regurgitate back a set of facts and you learn some templates into which you put some numbers and get some other (presumably correct) numbers as output. This is not science.

the opinion that "matching the territory could be somehow important" is just an opinion that some people share and others don't

That, um, depends. Most people believe that "matching the territory" with respect to gravity is important -- in particular, they don't attempt to fly off tall buildings. The issues arise in situations where "matching the territory" is difficult and non-obvious. Take a young-Earth creationist -- will any of his actions mismatch reality? I don't expect so. If he's a regular guy leading a regular life in some town, there is no territory around him which will or will not be matched by his young-Earth creationist beliefs. It just doesn't matter to him in practice.

comment by gjm · 2017-01-27T16:06:44.394Z · LW(p) · GW(p)

I don't think the concern would be that "they could take over the whole society". It would be more that smart people (more accurately: people in various groups that correlate with smartness, and perhaps more strongly with schools' estimates of pupil-smartness) already have some tendency to interact only with one another, and segregating schools would increase that tendency, and that might be bad for social cohesion and even stability (because e.g. those smart people will include most of the politicians, and the less aware they are of what Ordinary People want the more likely they are to seem out of touch and lead to populist smash-everything moves).

Replies from: Lumifer, Viliam
comment by Lumifer · 2017-01-27T17:25:15.617Z · LW(p) · GW(p)

that might be bad for social cohesion and even stability

This is a complicated argument. Are you basically saying that it's "good" (we'll leave aside figuring out what it means for a second) for people to be tribal at the nation-state level but bad for them to be tribal at more granular levels?

For most cohesion you want a very homogeneous population (see e.g. Iceland). Technically speaking, any diversity reduces social cohesion, and diversity in IQ is just one example of that. If you're worried about cohesion and stability, any diversity is "bad" and you want to discourage tribes at sub-national levels.

The obvious counterpoint is that diversity has advantages. Homogeneity has well-known downsides, so you're in effect trading off diversity against stability. That topic, of course, gets us right into a political minefield :-/

Replies from: gjm
comment by gjm · 2017-01-27T18:06:52.256Z · LW(p) · GW(p)

Are you basically saying [...]

Just to clarify, I am describing rather than making arguments. As I said upthread, I am not claiming that they are actually good arguments nor endorsing the conclusion they (by construction) point towards. With that out of the way:

that it's "good" [...] for people to be tribal at the nation-state level but bad for them to be tribal at more granular levels?

The argument doesn't have anything to say about what should happen at the nation-state level. I guess most people do endorse tribalism at the nation-state level, though.

For most cohesion you want a very homogeneous population [...] any diversity reduces social cohesion

If you have a more or less fixed national population (in fact, what we have that's relevant here is a more or less fixed population at a level somewhere below the national; whatever scale our postulated school segregation happens at) then you don't get to choose the diversity at that scale. At smaller scales you can make less-diverse and therefore possibly more-cohesive subpopulations, at the likely cost of increased tension between the groups.

(I think we are more or less saying the same thing here.)

The obvious counterpoint is that diversity has advantages.

Yes. (We were asked for arguments against segregation by ability, so I listed some. Many of them have more or less obvious counterarguments.)

Replies from: Lumifer
comment by Lumifer · 2017-01-27T20:23:55.067Z · LW(p) · GW(p)

The argument doesn't have anything to say about what should happen at the nation-state level.

Concerns about social cohesion and stability are mostly relevant at the nation-state level. This is so because at sub-state levels the exit option is generally available and is viable. At the state level, not so much.

In plain words, it's much easier to move out if your town loses cohesion and stability than if your country does.

you don't get to choose the diversity

You don't get to choose the diversity, but you can incentivise or disincentivise the differentiation with long-term consequences. For an example, look at what happened to, say, people who immigrated to the US in the first half of the XX century. They started with a lot of diversity but because the general trend was towards homogenisation, that diversity lessened considerably.

comment by Viliam · 2017-01-27T17:10:00.445Z · LW(p) · GW(p)

This again depends a lot on the specific IQ values. There are probably many politicians around the Mensa level, but I would suspect that there are not so many above circa IQ 150, simply because of the low base rate... and maybe even because they might have a communication problem when talking to an average voter, so if they want to influence politics, it would make more sense for them to start a think tank, or become advisors, so they don't have to compete for the average Joe's vote directly.

comment by tut · 2017-01-24T22:00:33.341Z · LW(p) · GW(p)

Has the password changed on the username2 account?

Replies from: username2
comment by username2 · 2017-01-25T15:49:40.771Z · LW(p) · GW(p)

No

comment by [deleted] · 2017-01-23T22:30:58.053Z · LW(p) · GW(p)

Thoughts on punching nazis? I can't really wrap my head around why there are so many people who think it's 100% ok to punch nazis. Not sure if discussion about this has happened elsewhere (if so please direct me!) . For the purposes of this discussion let's ignore whether or not the alt-right counts as Nazism and speak only about a hypothetical Nazi ideological group.

I understand to some extent the argument that reasonable discussion with Nazis is almost certainly futile and that they are perhaps a danger to others. However, my main concerns with punching Nazis are: 1) It promotes violence as an acceptable means of dealing with disagreement. 2) It doesn't accomplish much. (Though the hypothetical Nazi in question has said that he is more afraid of going outside, so I suppose it has at least accomplished fear, which may be a pro or a con depending on your point of view. Besides that, however, I don't think it has hindered Nazis very much, and it has only worsened the image of the anti-Nazis.)

Replies from: drethelin, Viliam, satt, username2, g_pepper
comment by drethelin · 2017-01-23T23:29:14.589Z · LW(p) · GW(p)

I think a lot of people's intuitive moral framework relies on the idea of the Outlaw. Traditionally an Outlaw is someone who has zero rights or legal protection accorded them by society: it's legal to steal from them, beat them, or kill them. This was used as punishment in a variety of older societies, but has mostly fallen out of favor. However, a lot of people still seem to think of transgressors as moral non-patients, and are happy to see them receive any amount of punishment. Similar to how people think criminals deserve to be raped in prison, people think Nazis deserve whatever happens to them. This is counter to our judicial system and the happy functioning of civilization, but I don't think most people are susceptible to reasoned arguments when they're in a heightened emotional state.

comment by Viliam · 2017-01-24T15:18:54.317Z · LW(p) · GW(p)

Thoughts on punching nazis?

Step 1: Make a good argument for why punching Nazis is okay.
Step 2: Call everyone you don't like a Nazi.
Step 3: Keep punching.

The steelman version of "punching Nazis is okay" is that one should not bring verbal arguments into a punching fight. That is, we assume that the Nazis are there to punch you, and if you prepare for verbal fight, well, don't expect to return home unharmed.

But this makes an assumption about your opponent, and typically, mindkilled people make a lot of wrong assumptions, especially about their opponents.

Replies from: Good_Burning_Plastic
comment by Good_Burning_Plastic · 2017-01-24T16:25:39.520Z · LW(p) · GW(p)

The steelman version of "punching Nazis is okay" is that one should not bring verbal arguments into a punching fight. That is, we assume that the Nazis are there to punch you, and if you prepare for verbal fight, well, don't expect to return home unharmed.

But that guy didn't just "not bring verbal arguments into a punching fight", he brought a punch into a verbal argument.

Replies from: Viliam
comment by Viliam · 2017-01-24T16:27:26.186Z · LW(p) · GW(p)

I am not familiar with the specific case; my answer was meant in general.

EDIT: I think it was historically the situation that when Nazis (the real ones, i.e. the NSDAP Nazis, not the "people who disagree with me" Nazis) were losing a political debate, they often changed the rules of the game, and attacked their opponents physically ("pigeon chess"). Which is why everyone else developed a rule: "if Nazis invite you to a debate, you don't participate (or you come ready to throw punches, if necessary)". No idea whether this is a fact or a myth.

comment by satt · 2017-01-29T17:20:42.137Z · LW(p) · GW(p)

Not sure if discussion about this has happened elsewhere (if so please direct me!)

https://www.google.com/search?q=site:twitter.com+is+it+ok+to+punch+nazis

comment by username2 · 2017-01-23T23:50:53.438Z · LW(p) · GW(p)

I think that people punching other people is the default behavior, and it takes conscious effort to control yourself when you are angry at someone. E.g. drunk people who have lost their inhibitions often get involved in fights. And people who are angry rejoice at any opportunity to let their inner animal out, feel the rush of adrenaline that comes with losing their inhibitions, and not have to think about consequences or social condemnation.

2) It doesn't accomplish much. (Though the hypothetical Nazi in question has said that he is more afraid of going outside, so I suppose it has at least accomplished fear, which may be a pro or a con depending on your point of view. Besides that, however, I don't think it has hindered Nazis very much, and it has only worsened the image of the anti-Nazis.)

People like the strong and dislike the weak. If Nazis got punched all the time, they would be perceived as weak and nobody would join them. Even if people didn't like the punching, most likely they would simply be bystanders.

Replies from: waveman, plethora
comment by waveman · 2017-01-24T00:32:33.703Z · LW(p) · GW(p)

drunk people who have lost their inhibitions often get involved in fights

Even here there may be a cultural element. I noticed that in Japan, when I was there, men would be totally drunk without a hint of violence. In some cultures being drunk provides permission to be violent, similar perhaps to the way that men are 'permitted' to hug one another after scoring a goal on the playing field.

comment by plethora · 2017-01-24T16:54:19.985Z · LW(p) · GW(p)

If Nazis got punched all the time, they would be perceived as weak and nobody would join them.

Two thousand years ago, some guy in the Roman Empire got nailed to a piece of wood and left to die. How did that turn out?

Replies from: Lumifer
comment by Lumifer · 2017-01-24T17:11:03.452Z · LW(p) · GW(p)

Quod licet Iovi, non licet bovi. ("What is permitted to Jupiter is not permitted to an ox.")

comment by g_pepper · 2017-01-24T01:22:35.854Z · LW(p) · GW(p)

FWIW, Reason magazine condemned the punching.

comment by Lumifer · 2017-02-02T16:24:01.724Z · LW(p) · GW(p)

some clothing, e.g., high heels, is rather impractical

I beg to disagree. To speak of practicality you need to have a specific goal in mind. High heels are very impractical for running, but they are quite practical for attracting the attention of a potential mate.

comment by satt · 2017-01-31T20:21:07.956Z · LW(p) · GW(p)

Do continue trying to put words into my mouth. That's absolutely going to convince me that it's worth responding to you with good arguments.

comment by gjm · 2017-01-31T03:40:08.271Z · LW(p) · GW(p)

Note that the people doing the prosecution haven't presented any evidence of "promulgation of assertions that global warming isn't real in order to gain an unfair competitive advantage in a marketplace" beyond the fact that the people in question are asserting that global warming isn't real.

Are there in fact any such prosecutions yet? (I don't think there are, but maybe there are and I missed them.)

Does it matter if they believe it is in fact not real, does it matter if they have evidence?

Yes, because the proposed prosecutions are under laws that require deliberate falsehood. I think there may be some scope for claiming not "they said X and knew it was false" but merely "they said X despite knowing it might well be false, and didn't know it was false only because they deliberately didn't check" (i.e., X was bullshit rather than lies). But I'm pretty sure that "I looked carefully at the available evidence and honestly came to the conclusion that X" is, in principle, a valid defence in such cases.

Thus it is clear that (2) is little more than a fairly transparent excuse to do (1).

This must be some new meaning of the word "clear" of which I was not previously aware. Suppose we stipulate that you're right that in such prosecutions it wouldn't matter whether the accused sincerely believed that global warming is unreal (or very slight, or beneficial, or whatever); and suppose we stipulate that the people proposing such prosecutions have presented no evidence of misconduct besides asserting that global warming isn't real. How would that make it clear that #2 is a transparent excuse for #1? In particular, how would you distinguish it from a different conspiracy theory: that actually they couldn't care less who believes what about global warming, and what they actually want to do is stick it to the fossil fuel companies?

(For the avoidance of doubt, I do not agree that in such prosecutions it wouldn't matter what the accused believed; I don't know what evidence the people proposing them have offered, but the time when they actually need to offer such evidence is when actually prosecuting; and I don't actually think that different conspiracy theory is terribly likely.)

Given that gjm has just demonstrated that (3) is false

Er, no.

comment by jimmy · 2017-01-31T00:36:19.859Z · LW(p) · GW(p)

I think that "tribal bias" is the norm, not the exception, and accusing someone of having their reasoning messed with, to some extent, by tribal biases is a little like accusing them of having shit that stinks. I'd much rather hold off and only criticize people when they deal with visible bias poorly, and it's legitimately hard enough to see your own tribal biases and how they affect your thinking that I'm a little hesitant to accuse someone of being blatantly dishonest because they don't see and correct for what looks like a bias to me. Especially since sometimes what looks like a bias is actually a valid heuristic that you don't understand because you're not part of their tribe.

That said, it's clear that satt wasn't offering Lumifer the amount of charity that I think Lumifer deserves, and was more focused on finding holes in Lumifer's relatively (albeit intentionally, and not overly, in my opinion) imprecise arguments than on finding the merits of those arguments; looking for the merits, I'd argue, is in general a much better way of going about things.

comment by jimmy · 2017-01-30T22:59:34.682Z · LW(p) · GW(p)

I upvoted you because I think your explanation of Lumifer's point there is correct and needed to be said.

However, I'd like to comment on this bit:

Given that gjm has just demonstrated that (3) is false, I'm inclined to believe the real reason for your bias is that you belong to a tribe where agreeing with gjm's conclusion is high status.

I don't think it is fair to take away gjm's entire reputation based on one disagreement or even one confirmed counterexample.

I also think it's premature to conclude that satt is biased here due to tribal beliefs, because I think the comment satt made is perfectly consistent with a low to nonexistent amount of tribal bias, as well as consistent with a good ability to acknowledge and correct for tribal biases when pointed out. It's consistent with the alternative too, of course, but I'd want to see some distinguishing evidence before making a point of it.

Replies from: gjm
comment by gjm · 2017-01-31T03:24:16.478Z · LW(p) · GW(p)

I don't think it is fair to take away gjm's entire reputation based on one disagreement or even one confirmed counterexample.

I would just like to mention that I see what you did there.

In any case, I am not greatly worried that snark from Yet Another Eugine Sockpuppet is going to "take away gjm's entire reputation".

Replies from: jimmy
comment by jimmy · 2017-01-31T03:36:19.034Z · LW(p) · GW(p)

I'm guessing that you think I'm passive-aggressively hinting that this is more of a confirmed counterexample than an honest disagreement? I promise you that is not my intent. My intent is that it applies even if it were confirmed, since I suspect that user:math might see it that way, while saying nothing about how I see it. To clarify: no, I see it as a disagreement.

I was also not aware that it was Eugine. (And of course, even if it wasn't, that wouldn't remove your reputation in anyone else's eyes; I was talking about it as an internal move.)

comment by satt · 2017-01-30T19:19:43.746Z · LW(p) · GW(p)

I see downvotes are still disabled, so I'll just [throws back head and horse laughs].

comment by Lumifer · 2017-01-29T20:13:22.103Z · LW(p) · GW(p)

We have satellite temperature data since the late 70s. Before that, yes, there is opportunity for shenanigans.

comment by ChristianKl · 2017-01-28T09:54:10.192Z · LW(p) · GW(p)

Economic growth basically means that workers get more productive: fewer hours of work yield more output. GDP growth is not really possible without making workers more efficient.

It's interesting how in recent years the old Luddite arguments have been revived. The idea that automation means there won't be any jobs anymore gets more and more popular.

Replies from: Good_Burning_Plastic, MrMind
comment by Good_Burning_Plastic · 2017-01-30T09:17:40.251Z · LW(p) · GW(p)

Economic growth basically means that workers get more productive: fewer hours of work yield more output. GDP growth is not really possible without making workers more efficient.

In principle it is possible for GDP to grow even if productivity per hour stays constant, provided the number of hours worked goes up. I've heard that's an important factor to consider when comparing the GDPs of France and the US, so it's not that unlikely that it also matters when comparing the GDP of a country in year X and that of the same country in year X+10. (But of course such a thing couldn't go on arbitrarily far, because there are only so many hours in a day.)
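
(To make the hours-versus-productivity decomposition concrete, here is a minimal Python sketch; the function name and all the numbers are made up purely for illustration, not real statistics for any country.)

```python
# GDP decomposed as workers * hours per worker * output per hour.
# Both routes below produce the same 10% GDP growth.
def gdp(workers, hours_per_worker, output_per_hour):
    return workers * hours_per_worker * output_per_hour

base = gdp(1000, 1600, 50.0)

more_hours = gdp(1000, 1760, 50.0)       # +10% hours, same productivity
more_productive = gdp(1000, 1600, 55.0)  # same hours, +10% productivity

print(more_hours / base)        # ~1.1
print(more_productive / base)   # ~1.1
```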

Replies from: Viliam
comment by Viliam · 2017-01-30T13:39:33.580Z · LW(p) · GW(p)

When comparing across countries, I wouldn't be surprised if different countries had different methodologies for calculating GDP. The differences don't have to be obvious at first sight. For example, both countries may agree that GDP = X + Y + Z, but there may be a huge difference in how exactly they calculate X, Y, and Z. Also, the gray economy may or may not be included, and may be estimated incorrectly.

(Sometimes such changes are made for obvious political reasons; for example, in my country a government once reduced unemployment by simply changing the definition of how unemployment is calculated. Another example of how the same word can correspond to different things is how countries count tourism: in some countries a "tourist" is any foreigner who comes for a non-work visit, in other countries only those who stay at a hotel are counted.)

comment by MrMind · 2017-01-30T08:37:37.706Z · LW(p) · GW(p)

Economic growth basically means that workers get more productive.

Is that the best way to slice the problem? It doesn't seem to cover well the cases where new resources are discovered, new services are offered, or production processes are improved to deliver higher added value.

The idea that automation means there won't be any jobs anymore gets more and more popular.

Well, I think the main worry is that there won't be any more jobs for humans.

Replies from: ChristianKl, Good_Burning_Plastic
comment by ChristianKl · 2017-01-30T12:18:42.491Z · LW(p) · GW(p)

Well, I think the main worry is that there won't be any more jobs for humans.

There are plenty of people who want more stuff. I don't think the constraint on building more stuff or providing more services is that we don't have enough raw materials.

Replies from: MrMind
comment by MrMind · 2017-01-30T16:06:14.170Z · LW(p) · GW(p)

I'm not sure I'm following the analogy. If robots replace humans, we will have an increase in things to buy due to increased efficiency, but a lot more people will become poorer due to lack of employment. If no other factor is involved, what you'll see is at least an increase in the inequality of the distribution of wealth between those who have been replaced and those who own the replacements, proportional to the level of sophistication of said AI.

Replies from: ChristianKl
comment by ChristianKl · 2017-01-31T15:32:36.889Z · LW(p) · GW(p)

People get employed when their work lets an employer create more value for customers than their wages cost.

Robots need to be designed, built, trained and repaired.

When it comes to wealth inequality, that's partly true. Automation has the potential to create a lot of inequality, because skill differences lead to stronger outcome differences.

Replies from: MrMind
comment by MrMind · 2017-02-01T09:20:36.013Z · LW(p) · GW(p)

The robotic revolution, and possibly the next AI revolution, mean that the source of labor can be shifted from people to robots.
Within the usual production model, output = f(capital) x g(labor), labor means exclusively human labor. But in the near future labor may mean robot labor, which can be acquired and owned, thus becoming part of the means of production accessible to capital. In a sense, if AI takes hold in industry, labor will be a function of capital, and the equation will be transformed into output = h(capital). Depending on h, of course, you will have more or less convenience (humans require training and repairing too).
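
A minimal sketch of that shift, with purely illustrative functional forms and numbers (nothing here is a real economic model):

```python
# Illustrative sketch of labor becoming a function of capital.
# The Cobb-Douglas-style form and the 50/50 split are assumptions
# chosen only to show the shape of the argument.

def output_with_human_labor(capital: float, labor: float) -> float:
    # output = f(capital) * g(labor)
    return (capital ** 0.5) * (labor ** 0.5)

def output_with_robot_labor(capital: float) -> float:
    # If robot labor can simply be bought, labor is itself a function
    # of capital: spend half the capital on robots, keep the rest as
    # machines, so output collapses to h(capital).
    robots = 0.5 * capital
    machines = 0.5 * capital
    return output_with_human_labor(machines, robots)

print(output_with_human_labor(100.0, 25.0))  # 50.0, capital plus human labor
print(output_with_robot_labor(100.0))        # 50.0, from capital alone
```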

Replies from: ChristianKl
comment by ChristianKl · 2017-02-01T15:22:53.624Z · LW(p) · GW(p)

Before AGI there are many tasks that humans can do but that robots/AI can't. It's possible to build a lot of robots if robots are useful.

That kind of work is likely the constraint on producing more stuff. I don't think the constraint will be resources. The number of robots is also unlikely to be the constraint, as you can easily build more robots.

comment by Good_Burning_Plastic · 2017-01-30T09:21:13.093Z · LW(p) · GW(p)

or production processes improved to deliver a higher added value.

That does count as workers getting more productive by the standard definition of the term as usually used e.g. in discussions of Baumol's cost disease.

Replies from: MrMind
comment by MrMind · 2017-01-30T10:53:23.790Z · LW(p) · GW(p)

I'm confused.
If productivity is units / labor, then switching to another production line which delivers the same quantity of items, but which are sold for a higher price, should increase GDP without increasing productivity.
Reading a couple of papers about Baumol's disease seems to agree with the definition of productivity as output per unit of labor: the labor cost increases while production stays the same, so prices rise without an increase in efficiency.
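
To pin down my confusion with a made-up example (all numbers invented):

```python
# Worked example of the two notions of "productivity" (numbers invented).
# Real (unit) productivity: items produced per hour of labor.
# Value productivity: items * price per hour, which is what feeds GDP.

units = 1000        # items produced
hours = 100         # labor hours used
old_price = 2.0     # dollars per item on the old product line
new_price = 3.0     # dollars per item on the higher-value line

unit_productivity = units / hours                    # 10 items/hour, unchanged
old_value_productivity = units * old_price / hours   # $20/hour
new_value_productivity = units * new_price / hours   # $30/hour

# Same physical output per hour, yet measured value productivity and GDP
# both rise -- which is why the definition of "productivity" matters here.
print(unit_productivity, old_value_productivity, new_value_productivity)
```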

comment by Gram_Stone · 2017-01-24T02:46:05.422Z · LW(p) · GW(p)

Does anyone have an electronic copy of the Oxford Handbook of Metamemory that they're willing to share?

comment by whpearson · 2017-01-23T14:59:00.216Z · LW(p) · GW(p)

Are there any forums explicitly about how to think about and act to best make humanity survive its future?

Replies from: Gunnar_Zarncke, WalterL
comment by Gunnar_Zarncke · 2017-01-24T14:56:32.250Z · LW(p) · GW(p)

There are quite a few places you can go; e.g., google "existential risk".

comment by WalterL · 2017-01-23T17:03:37.597Z · LW(p) · GW(p)

Our consensus is pretty unalterably "Build an AI God".

Replies from: Lumifer, WhySpace_duplicate0.9261692129075527, username2, whpearson
comment by Lumifer · 2017-01-23T17:36:27.987Z · LW(p) · GW(p)

Our consensus is pretty unalterably "Build an AI God".

Kinda. LW's position is "We will make a God; how do we make sure He likes us?"

Replies from: WalterL
comment by WalterL · 2017-01-23T19:33:08.640Z · LW(p) · GW(p)

I lounge corrected. What Lum said is right.

comment by WhySpace_duplicate0.9261692129075527 · 2017-01-23T17:16:54.104Z · LW(p) · GW(p)

I checked their karma before replying, so I could tailor my answer to them if they were new. They have 1350 karma though, so I assume they are already familiar with us.

Same likely goes for the existential risk segment of EA. These are the only such discussion forums I'm aware of, but neither is x-risk only.

Replies from: whpearson
comment by whpearson · 2017-01-24T00:43:58.911Z · LW(p) · GW(p)

I'm a cryonaut from a few years back. I had deep philosophical differences with most of the arguments for AI Gods, which you may be able to determine from some of my recent discussions. I still think it's not completely crazy to try to create a beneficial AI God (taking into consideration my fallible hardware and all), but I put a lot more weight on futures where the future of intelligence is very important, but not as potent as a god.

Thanks for your pointers towards the EA segment; I wasn't aware that there was one.

Replies from: WhySpace_duplicate0.9261692129075527
comment by WhySpace_duplicate0.9261692129075527 · 2017-01-24T18:54:02.163Z · LW(p) · GW(p)

In that case, let me give a quick summary of what I know of that segment of effective altruism.

For context, there are basically 4 clusters. While many/most people concentrate on traditional human charities, some people think animal suffering matters more than 1/100th as much as human suffering, and so conclude that animal charities are more cost-effective. Those are the first 2 clusters of ideas.

Then you have people who think that movement growth is more important, since organizations like Raising for Effective Giving have so far been able to move like $3/year (I forget) to effective charities for each dollar donated to them that year. Other organizations may have an even higher multiplier, but this is fairly controversial, because it’s difficult to measure future impact empirically, and it risks turning EA into a self-promoting machine which achieves nothing.

The 4th category is basically weird future stuff. Mostly this is for people who think humans going extinct would be significantly worse than a mere 7 billion deaths would be. However, it's not exclusively focused on existential risk. Unfortunately, we have no good ways of even evaluating how effective various anti-nuclear efforts are at actually reducing existential risk, and it's even worse for efforts against prospective future technologies like AI. The best we can do is measure indirect effects. So the entire category is fairly controversial.

I would further divide the "weird future stuff" category into Global Catastrophic Risk/x-risk and non-GCR/x-risk stuff. For example, Brian Tomasik has coined the term s-risk for risks of astronomical future suffering. He makes a strong case for wild animals experiencing more net suffering than happiness, and so thinks that even without human extinction the next billion years are likely to be filled with astronomical amounts of animal suffering.

Within the GCR/x-risk half of the "weird future stuff" category, there appear to be maybe 4 or 5 causes I'm aware of. Nuclear war is the obvious one, along with climate change. I think most EAs tend to think climate change is important, but just not tractable enough to be a cost-effective use of resources. The risk of another 1918 Flu pandemic, or of an engineered pandemic, comes up occasionally, especially in relation to the new CRISPR gene editing technology. AI is a big concern too, but more controversial, since it is more speculative. I'm not sure I've ever seen work on asteroid impacts or nanotechnology floated as a cost-effective means of reducing x-risk, but I don't follow that closely, so perhaps there is some good discussion I've missed.

Much or most of the effort I've seen is to better understand the risks, so that we can better allocate resources in the future. Here are some organizations I know of which study existential risk, or are working to reduce it:

  • The Future of Humanity Institute is at Oxford and is led by Nick Bostrom. They primarily do scholarly research, and focus a good chunk of their attention on AI. There are now more academic papers published on human extinction than on dung beetles, largely due to their efforts to lead the charge.

  • Center for the Study of Existential Risk is out of Cambridge. I don't know much about them, but they seem to be quite similar to FHI.

  • Future of Life Institute was founded by a bunch of people from MIT, but I don't believe there is any official tie. They fund research too, but they seem to have a larger body of work directed at the general public. They give grants to researchers, and publish articles on a range of existential risks.

Perhaps there are discussion forums associated with these groups, but I'm unaware of them. There are a bunch of EA facebook groups, but they are mostly regional groups as far as I know. However, the EA forum and here are the closest things I know to what you're after.

Replies from: morganism, whpearson
comment by morganism · 2017-01-27T22:29:02.216Z · LW(p) · GW(p)

B612 Foundation is working on impact risks by trying to get some IR cameras out to L2 and L3 at least, and hopefully at S5. And Planetary Resources says that objects found with their IR cameras for mining will go into the PDSS database.

comment by whpearson · 2017-01-24T19:48:46.469Z · LW(p) · GW(p)

Thanks! I'll get in touch with the EA community in a bit. I've got practical work to finish and I find forums too engaging.

comment by username2 · 2017-01-23T17:16:39.988Z · LW(p) · GW(p)

That is a contentious view.

Replies from: None
comment by [deleted] · 2017-01-24T23:47:06.100Z · LW(p) · GW(p)

To say the least.

comment by whpearson · 2017-01-23T17:08:12.542Z · LW(p) · GW(p)

unalterably

Not very empiricist/bayesian of you? ;)

What's the backup plan, for if that doesn't work?