Against Premature Abstraction of Political Issues

post by Wei Dai (Wei_Dai) · 2019-12-18T20:19:53.909Z · LW · GW · 22 comments

A few days ago romeostevensit wrote [LW(p) · GW(p)] in response to me asking about downvotes on a post:

I didn't downvote, but I do think that conversations like this attract people who aren't interested in arguing in good faith. I prefer that such discussions occur at one abstraction level up so that they don't need to mention any object level beliefs like social justice in order to talk about the pattern that the author wants to talk about.

And I replied:

This seems like a reasonable worry. Maybe one way to address it would be to make posts tagged as "politics" (by either the author or a moderator) visible only to logged in users above a certain karma threshold or specifically approved by moderators. Talking at the meta-level is also good, but I think at some point x-risk people have to start discussing object-level politics and we need some place to practice that.

Since writing that, I've had the thought (because of this conversation [LW(p) · GW(p)]) that only talking about political issues at a meta level has another downside: premature abstraction. That is, it takes work to find the right abstraction for any issue or problem, and forcing people to move to the meta level right away means that we can't all participate in doing that work, and any errors or suboptimal choices in the abstraction can't be detected and fixed by the community, leading to avoidable frustrations and wasted efforts down the line.

As an example, consider a big political debate on LW back in 2009, when "a portion of comments here were found to be offensive by some members of this community, while others denied their offensive nature or professed to be puzzled by why they are considered offensive." By the time I took my shot [LW · GW] at finding the right abstraction for thinking about this problem, three other veteran LWers had already tried to do the same thing. Now imagine if the object level issue was hidden from everyone except a few people. How would we have been able to make the intellectual progress necessary to settle upon the right abstraction in that case?

One problem that exacerbates premature abstraction is that people are often motivated to talk about a political issue because they have a strong intuitive position on it, and when they find what they think is the right abstraction for thinking about it, they'll rationalize an argument for their position within that abstraction, such that accepting the abstract argument implies accepting or moving towards their object-level position. When the object level issue is hidden, it becomes much harder for others to detect such a rationalization. If the abstraction they created is actually wrong or incomplete (i.e., doesn't capture some important element of the object-level issue), their explicit abstract argument is even more likely to have little or nothing to do with what actually drives their intuition.

Making any kind of progress that would help resolve the underlying object-level issue becomes extremely difficult or impossible in those circumstances: the meta discussion is likely to become bogged down and frustrating for everyone involved, as one side tries to defend an argument that they feel strongly about (because they have a strong intuition about the object-level issue and think their abstract argument explains that intuition) but which may actually be quite weak because the abstraction itself is wrong. And this can happen even if their object-level position is actually correct!

To put it more simply, common sense says hidden agendas are bad, but by having a norm for only discussing political issues at a meta level, we're directly encouraging that.

(I think for this and other reasons, it may be time to relax the norm against discussing object-level political issues around here. There are definitely risks and costs involved in doing that, but I think we can come up with various safeguards to minimize the risks and costs, and if things do go badly wrong anyway, we can be prepared to reinstitute the norm. I won't fully defend that here, as I mainly want to talk about "premature abstraction" in this post, but feel free to voice your objections to the proposal in the comments if you wish to do so.)

22 comments


comment by johnswentworth · 2019-12-18T20:49:25.678Z · LW(p) · GW(p)

+1 for pointing out an important problem, -1 for relaxing the norm against politics on LW. This sounds like a case of "we should try it, but try it somewhere far away where we won't accidentally light anything important on fire".

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2019-12-19T00:02:19.955Z · LW(p) · GW(p)

I'm not very familiar with the rationalist diaspora, but I wonder whether there were or are spaces within it where political discussions are allowed or welcome, how things turned out, and what lessons we can learn from their history to inform future experiments.

I do know about the weekly culture war threads on TheMotte and the "EA discuss politics" Facebook group, but I haven't observed them long enough to draw any strong conclusions. Also, for my tastes, they seem a little too far removed from LW, both culturally and in terms of overlapping membership, since they both spawned from LW-adjacent groups rather than from LW itself.

comment by evhub · 2019-12-18T20:42:28.103Z · LW(p) · GW(p)

I think for this and other reasons, it may be time to relax the norm against discussing object-level political issues around here. There are definitely risks and costs involved in doing that, but I think we can come up with various safeguards to minimize the risks and costs, and if things do go badly wrong anyway, we can be prepared to reinstitute the norm. I won't fully defend that here, as I mainly want to talk about "premature abstraction" in this post, but feel free to voice your objections to the proposal in the comments if you wish to do so.

Apologies in advance for only engaging with the part of this post you said you least wanted to defend, but I just wanted to register strong disagreement here. Personally, I would like LessWrong to be a place where I can talk about AI safety and existential risk without being implicitly associated with lots of other political content that I may or may not agree with. If LessWrong becomes a place for lots of political discussion, people will form such associations regardless of whether or not such associations are accurate. Given that that's the world we live in—and the importance imo of having a space for AI safety and existential risk discussions—I think having a strong norm against political discussions is quite a good thing.

Replies from: Wei_Dai, Zack_M_Davis
comment by Wei Dai (Wei_Dai) · 2019-12-18T21:22:21.652Z · LW(p) · GW(p)

Personally, I would like LessWrong to be a place where I can talk about AI safety and existential risk without being implicitly associated with lots of other political content that I may or may not agree with.

Good point, I agree this is probably a dealbreaker for a lot of people (maybe even me) unless we can think of some way to avoid it. I can't help but think that we have to find a solution besides "just don't talk about politics" though, because x-risk is inherently political and as the movement gets bigger it's going to inevitably come into conflict with other people's politics. (See here for an example of it starting to happen already.) If by the time that happens in full force, we're still mostly political naifs with little understanding of how politics works in general or what drives particular political ideas, how is that going to work out well? (ETA: This is not an entirely rhetorical question, BTW. If anyone can see how things work out well in the end despite LW never getting rid of the "don't talk about politics" norm, I really want to hear that so I can maybe work in that direction instead.)

Replies from: evhub, charlotte-s
comment by evhub · 2019-12-18T21:42:38.355Z · LW(p) · GW(p)

I can't help but think that we have to find a solution besides "just don't talk about politics" though, because x-risk is inherently political and as the movement gets bigger it's going to inevitably come into conflict with other people's politics.

My preferred solution to this problem continues to be [LW(p) · GW(p)] just taking political discussions offline. I recognize that this is difficult for people not situated somewhere like the Bay Area, where there are lots of other rationalist/effective altruist people around to talk to, but nevertheless I still think it's the best solution.

EDITS:

See here for an example of it starting to happen already.

I also agree with Weyl's point here that another very effective thing to do is to talk loudly and publicly about racism, sexism, etc.—though obviously as Eliezer points out that's not always possible, as not every important subject necessarily has such a component.

This is not an entirely rhetorical question, BTW. If anyone can see how things work out well in the end despite LW never getting rid of the "don't talk about politics" norm, I really want to hear that so I can maybe work in that direction instead.

My answer would be that we figure out how to engage with politics, but we do it offline rather than using a public forum like LW.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2019-12-18T21:49:53.538Z · LW(p) · GW(p)

How much of an efficiency hit do you think taking all discussion of a subject offline ("in-person") involves? For example if all discussions about AI safety could only be done in person (no forums, journals, conferences, blogs, etc.), how much would that slow down progress?

Replies from: evhub
comment by evhub · 2019-12-18T21:58:49.564Z · LW(p) · GW(p)

How much of an efficiency hit do you think taking all discussion of a subject offline ("in-person") involves?

Probably a good deal for anything academic (like AI safety), but not at all for politics. I think discussions focused on persuasion/debate/argument/etc. are pretty universally bad (e.g. not truth-tracking), and that online discussion lends itself particularly well to falling into such discussions. It is sometimes possible to avoid this failure mode, but imo basically only if the conversations are kept highly academic and avoiding of any hot-button issues (e.g. as in some online AI safety discussions, though not all). I think this is basically impossible for politics, so I suspect that not having the ability to talk about politics online won't be much of a problem (and might even be quite helpful, since I suspect it would overall raise the level of political discourse).

Replies from: Zack_M_Davis, Wei_Dai
comment by Zack_M_Davis · 2019-12-18T23:29:09.392Z · LW(p) · GW(p)

anything academic (like AI safety), but not at all for politics [...] avoiding of any hot-button issues

"Politics" isn't a separate magisterium, though; what counts as a "hot-button issue" is a function of the particular socio-psychological forces operative in the culture of a particular place and time [LW · GW]. Groups of humans (including such groups as "corportations" or "governments") are real things in the real physical universe and it should be possible to build predictive models of their behavior using the same general laws [LW · GW] of cognition that apply to everything else [LW · GW].

To this one might reply, "Oh, sure, I'm not objecting to the study of sociology, social psychology, economics, history, &c., just politics." This sort of works if you define "political" as "of or concerning any topic that seems likely to trigger motivated reasoning and coalition-formation among the given participants." But I don't see how you can make that kind of clean separation in a principled way, and that matters if you care about getting the right answer to questions that have been infused with "political" connotations in the local culture of the particular place and time in which you happen to live.

Put it this way: astronomy is not a "political" topic in Berkeley 2019. In Rome 1632, it was. The individual cognitive algorithms [LW · GW] and collective "discourse algorithms" that can't just get the right answer to questions that seem "political" in Berkeley 2019, would have also failed to get the right answer on heliocentrism in Rome 1632—and I really doubt they're adequate to solve AGI alignment in Berkeley 2039.

Replies from: 9eB1, mr-hire
comment by 9eB1 · 2019-12-19T15:18:56.897Z · LW(p) · GW(p)

This sort of existence argument is reasonable for hypothetical superhuman AIs, but real-world human cognition is extremely sensitive to the structure we can find or make up in the world. Sure, just saying "politics" does not provide a clear reference class, so it would be helpful to understand what you want to avoid about politics and engineer around it. My hunch is that avoiding your highly technical definition of bad discourse (the one you're using to replace "politics") just leads to a lot of time spent on that analysis, with approximately the same topics avoided as under a very simple rule of thumb.

I stopped associating with or mentioning LW in real life largely because of the political (and perhaps partly cultural) baggage of several years ago. Not even because I had any particular problem with the debate on the site or the opinions of everyone in aggregate, but because there was just too much stuff to cherry-pick from in our world of guilt by association. Too many mixed signals for people to judge me by.

comment by Matt Goldenberg (mr-hire) · 2019-12-19T01:36:28.234Z · LW(p) · GW(p)

—and I really doubt they're adequate to solve AGI alignment in Berkeley 2039.

Is this because you think technical alignment work will be a political issue in 2039?

comment by Wei Dai (Wei_Dai) · 2019-12-19T02:42:42.285Z · LW(p) · GW(p)

It is sometimes possible to avoid this failure mode, but imo basically only if the conversations are kept highly academic and avoiding of any hot-button issues (e.g. as in some online AI safety discussions, though not all). I think this is basically impossible for politics

I disagree, and think LW can actually do ok, and probably even better with some additional safeguards around political discussions. You weren't around yet when we had the big 2009 political debate [LW · GW] that I referenced in the OP, but I think that one worked out pretty well in the end. And I note that (at least from my perspective) a lot of progress in that debate was made online as opposed to in person, even though presumably many parallel offline discussions were also happening.

so I suspect that not having the ability to talk about politics online won’t be much of a problem (and might even be quite helpful, since I suspect it would overall raise the level of political discourse).

Do you think just talking about politics in person is good enough for making enough intellectual progress and disseminating that widely enough to eventually solve the political problems around AI safety and x-risks? Even if I didn't think there's an efficiency hit relative to current ways of discussing politics online, I would be quite worried about that and trying to find ways to move beyond just talking in person...

Replies from: evhub, Pattern
comment by evhub · 2019-12-20T19:00:02.094Z · LW(p) · GW(p)

I disagree, and think LW can actually do ok, and probably even better with some additional safeguards around political discussions. You weren't around yet when we had the big 2009 political debate that I referenced in the OP, but I think that one worked out pretty well in the end.

Do you think having that debate online was something that needed to happen for AI safety/x-risk? Do you think it benefited AI safety at all? I'm genuinely curious. My bet would be the opposite—that it caused AI safety to be more associated with political drama that helped further taint it.

Replies from: Wei_Dai, Zack_M_Davis
comment by Wei Dai (Wei_Dai) · 2019-12-20T23:36:59.079Z · LW(p) · GW(p)

I think it was bad in the short term (it was at least a distraction, and maybe tainted AI safety by association although I don't have any personal knowledge of that), but probably good in the long run, because it gave people a good understanding of one political phenomenon (i.e., the giving and taking of offense) which let them better navigate similar situations in the future. In other words, if the debate hadn't happened online and the resulting understanding widely propagated through this community, there probably would have been more political drama over time because people wouldn't have had a good understanding of the how and why of avoiding offense.

But I do agree that "taint by association" is a big problem going forward, and I'm not sure what to do about that yet. By mentioning the 2009 debate I was mainly trying to establish that if that problem could be solved or ameliorated to a large degree, then online political discussions seem to be worth having because they can be pretty productive.

comment by Pattern · 2019-12-20T19:06:00.888Z · LW(p) · GW(p)

What safeguards?

comment by charlotte S (charlotte-s) · 2019-12-19T18:28:30.564Z · LW(p) · GW(p)

You said: "If by the time that happens in full force, we're still mostly political naifs with little understanding of how politics works in general or what drives particular political ideas, how is that going to work out well?"

I think the debate here might rely on an unnecessary dichotomy: either I discuss politics on LW/in the rationalist community, or I will have little (or no) understanding.

Another solution would be to think of spaces to discuss politics which one can join.

I believe that we won't get a better understanding of politics by discussing it here, as it's more of a form of empirical knowledge you acquire:

Some preliminary thoughts on how to learn it outside LW or the rationalist community:

  • join or work (even if just for a few months) for a political party or a member of parliament
  • attend debates held by different political groups
  • write about public policy solutions and disagreements
  • help with a national campaign; you will learn a lot about how people in politics reason
  • join other platforms to discuss politics (if interested in AI and in the EU: the EU AI Alliance)

Other forms of learning more about politics which wouldn't be political by the definition above:

  • learn and discuss (etc) political theory (by the definition above this is not "political")

Might add things later. I also have a few other ideas I can share in PM.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2019-12-21T04:51:04.255Z · LW(p) · GW(p)

Another solution would be to think of spaces to discuss politics which one can join.

There are spaces I can join (and have joined) to do politics or observe politics but not so much to discuss politics, because the people there lack the rationality skills or background knowledge (e.g., the basics of Bayesian epistemology, or an understanding of game theory in general and signaling in particular) to do so.

I believe that we won't get a better understanding of politics by discussing it here, as it's more of a form of empirical knowledge you acquire:

I think we need both, because after observing "politics in the wild", I need to systemize the patterns I observed, understand why things happened the way they did, predict whether the patterns/trends I saw are likely to continue, etc. And it's much easier to do that with other people's help than to do it alone.

comment by Zack_M_Davis · 2019-12-24T08:40:33.558Z · LW(p) · GW(p)

Given that that's the world we live in

It's not the world we live in—it's you! [LW · GW]

comment by Raemon · 2019-12-19T01:18:04.750Z · LW(p) · GW(p)

I have noticed myself updating towards this for Local Politics (i.e. when I do a bunch of thinking about an issue among nearby EA / X-risk / Rationality orgs or communities). 

In particular, I've noticed a fair amount of talking past each other when resolving some disagreement. Alice and Bob disagree, they talk a bit. Alice concludes it's because Bob doesn't understand Principle X. Alice writes an effortpost on Principle X.

And... well, the effortpost is usually pretty useful. Principle X is legitimately important and it's good to have it written up somewhere if it wasn't already.

But, Principle X usually wasn't the crux between Alice and Bob. 

(Hmm, I notice that this comment is doing literally the thing we're talking about here. I don't feel like digging up the details but will note that people I've seen doing this include Duncan, Ben Hoffman, maybe Jessica Taylor, maybe Ruby and Me?)

((It's less obvious to me when I do this because of illusion of transparency. I suppose it's possible my entire doublecrux sequence is an instance of this, although in that case I don't think I was expecting it to be the missing piece so much as "I wanted to make sure there was common knowledge of some foundational stuff."))

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-12-19T01:30:42.255Z · LW(p) · GW(p)

I actually think these posts often update my thinking towards that person's point of view even though it's not the crux. You think rock is important, I think hard place is important. You make a post about rock, and it updates me that rock was even more important than I thought.

Replies from: Raemon
comment by Raemon · 2019-12-19T01:38:42.986Z · LW(p) · GW(p)

I think I've found that for Ben/Jessica/Zack posts, but not for Duncan posts (where instead I'm like "hmm. umm, so, that's like literally the same post I would have written to support my point.")

comment by FactorialCode · 2019-12-18T23:54:29.602Z · LW(p) · GW(p)

I agree that this is a real limitation of exclusively meta level political discussion.

However, I'm going to voice my strong opposition to any sort of object-level political discussion on LW. The main reason is that my model of the present political climate is that it consumes everything valuable it comes into contact with. Having any sort of object-level discussion of politics could attract the attention of actors with a substantial amount of power who have an interest in controlling the conversation.

I would even go so far as to say that the combination of "politics is the mindkiller", EY's terrible PR, and the fact that "lesswrong cult" is still the second result after typing "lesswrong" into Google has done us a huge favor. Together, it's ensured that this site has never had any strategic importance whatsoever to anyone trying to advance their political agenda.

That being said, I think it would be a good idea to have a rat-adjacent space for discussing these topics. For now, the closest thing I can think of is r/TheMotte on Reddit. If we set up a space for this, then it should be on a separate website with a separate domain and separate usernames that can't be easily traced back to us on LW. That way, we can break all ties with it/nuke it from orbit if things go south.