Creating better infrastructure for controversial discourse
post by Rudi C (rudi-c) · 2020-06-16T15:17:13.204Z · LW · GW · 11 comments
Currently there are three active forks of LessWrong: itself, the Alignment Forum, and the EA Forum. Could adding a new fork, one more focused on good discourse on controversial/taboo topics, be a good idea?
I am an Iranian, and I have had my fair share of experience with bad epistemic conditions. Disengagement from politics is always a retreat, though it can be strategic at times. As time goes on, the politics of a memetic elite will increasingly infringe on your rights and freedoms. Aside from this, as your community's status grows, your epistemic freedoms get a lot worse. The community itself will be increasingly infiltrated by the aggressive memetic structure, as the elite control education, the media, and the distribution of status and resources.
The fact (IMHO) that the current object-level neo-religion(s) is also vastly superior in its dogma is also bad news for us. The better a religion is, the harder it is to fight. I myself believe a bit in the neo-religion.
The rationalist/EA community has not invested much in creating a medium for controversial discourse, so there should be quite a bit of low-hanging fruit there. Different incentive mechanisms can be tried to see what works. I think anonymity will help a lot. (As a datapoint, Quora had anonymous posting in the past; I don't know whether it still does.) Perhaps hiding all karma scores might help as well. (An anonymized, coarse version of a user's LessWrong karma could still be shown to distinguish "the old guard.") Without a karma system, outsiders won't have as much of an incentive to engage, and posts that are, e.g., just ripping on the outgroup won't be as attractive.
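As a rough illustration of the anonymized-karma idea (the tier names and thresholds below are arbitrary assumptions, not anything proposed here), a forum could map exact karma to a coarse badge, so established users are recognizable without being easily de-anonymized:

```python
def karma_badge(lw_karma: int) -> str:
    """Map an exact karma score to a coarse tier that is hard to trace back to a user."""
    if lw_karma >= 1000:  # thresholds chosen arbitrarily for illustration
        return "old guard"
    if lw_karma >= 100:
        return "established"
    return "newcomer"

print(karma_badge(4821))  # -> "old guard"; the exact score is never displayed
```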
A big issue is keeping the controversial forum from tainting LessWrong's brand. I don't have good ideas for this; the problem is that we need to somehow connect this forum to LessWrong's community.
Another direction for tackling the controversy problem is introducing jargon. Jargon repels newcomers, attracts academically minded people, gives protection against witch-hunters, and raises the community's status. (Its shortcomings should be obvious.) One way it protects against witch-hunters is that it signals that we are politically passive and merely "philosophizing." Another reason (possibly what gives the signal its credibility) is that we automatically lose most of the population by going down this route: we have introduced artificial inferential distance, and thus made ourselves exclusive.
I was going to post this as a reply to Wei Dai here [LW · GW], but I thought posting it as a top-level post might be wiser. This is less a post about my ideas and more a reminder that investing in defensive epistemic infrastructure is a worthwhile endeavor, and one this community has a comparative advantage in. This is true even if things don't get worse, and even if the bad epistemic conditions have historical precedent. If things do get worse, the importance of these defenses obviously shoots up. I am not well acquainted with the priorities the EA community is pursuing, but creating better epistemic conditions seems a good cause to me. People's needs aren't just material/hedonic. They also need freedom of expression and thought, and they deserve the infrastructure to deal critically with invasive memes. There is a power imbalance between people and memes; its wrongness can be compared to child brides. I often feel that most people who have not experienced being enthralled by aggressive memetic structures will underestimate its sheer grossness: the betrayal you feel after losing cherished beliefs that you held for years, beliefs that colored so much of your perception and identity, that shaped your social life so much. It's like you had lived as a slave to the meme.
11 comments
Comments sorted by top scores.
comment by Chris_Leong · 2020-06-17T01:04:26.620Z · LW(p) · GW(p)
I guess there is The Motte on Reddit, but I could see benefits of someone creating a separate community. One problem is that far more meta discussion needs to occur on how to have these conversations.
comment by gilch · 2020-06-17T23:42:21.220Z · LW(p) · GW(p)
Kialo was an interesting attempt at creating an infrastructure for discussing controversial topics. It's worth a look to understand what I'm talking about, but I can't recommend it; I tried it out and it doesn't work well.
But maybe something similar based on collaboratively building Bayesian networks instead of pro/con debate points could work better. Maybe each user could estimate upper- and lower-bound weights for each node, and the computer would run the calculations. It would make it easier to nail down exactly where disagreements are, in the spirit of Double Crux, but with more than two participants. This kind of thing also sounds useful for debugging one's own thinking on a topic.
I'm not calling this a complete solution (and I have no implementation); it's just a suggestion for thinking about the infrastructure.
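A minimal sketch of how the interval idea could work in code (this is only an illustration under assumed conventions: the conclusion is modeled as an independent conjunction of premises, and each user supplies lower/upper probability bounds; none of this comes from the comment above):

```python
from dataclasses import dataclass

@dataclass
class Interval:
    """A user's lower/upper probability bounds for one node."""
    lo: float
    hi: float

    def __mul__(self, other: "Interval") -> "Interval":
        # For probabilities in [0, 1], multiplying the bounds bounds the product.
        return Interval(self.lo * other.lo, self.hi * other.hi)

def conclusion_bounds(premises: dict[str, Interval]) -> Interval:
    """Bounds on P(conclusion) if it requires every premise (independence assumed)."""
    result = Interval(1.0, 1.0)
    for interval in premises.values():
        result = result * interval
    return result

def find_cruxes(a: dict[str, Interval], b: dict[str, Interval]) -> list[str]:
    """Premises where two users' intervals do not even overlap."""
    return [node for node in a if a[node].hi < b[node].lo or b[node].hi < a[node].lo]

# Purely illustrative inputs: two users estimating the same two premises.
alice = {"premise_1": Interval(0.7, 0.9), "premise_2": Interval(0.6, 0.8)}
bob = {"premise_1": Interval(0.75, 0.95), "premise_2": Interval(0.1, 0.3)}

print("Alice's conclusion bounds:", conclusion_bounds(alice))
print("Bob's conclusion bounds:  ", conclusion_bounds(bob))
print("Cruxes:", find_cruxes(alice, bob))  # -> ['premise_2']
```

Even this toy version produces the double-crux-style output: the premise whose intervals fail to overlap is where the disagreement actually lives.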
↑ comment by Ramiro P. (ramiro-p) · 2020-06-21T17:44:17.564Z · LW(p) · GW(p)
Kialo is totally underrated.
comment by Dagon · 2020-06-17T17:04:13.883Z · LW(p) · GW(p)
There is no general solution to high-quality large-membership deeply-unpopular discussion. Quality requires discussion filtering, and unpopular requires membership filtering, both of which require long-term identity (even if pseudonymous, the pseudonym is consistent and _will_ leak a bit into other domains). Important and unpopular topics will be attacked by mobs of semi-coordinated actors, sometimes (depending on topic and regime) supported by state-level agencies.
Rational discussion far outside the https://en.wikipedia.org/wiki/Overton_window is indistinguishable from conspiracy, and part of the right answer is to just keep such topics off the public, well-known, and somewhat respectable fora. "Politics is the mind-killer" may not be exactly right, but "politics is the site-killer" is a worse slogan while being more true.
We _do_ need places to discuss such things, and in fact they exist. But they're smaller, more diverse and distributed, harder to find, and generally somewhat lower quality (at least the ones that'll let me in). They are not open to everyone, fearing infiltration and disruption. They're not advertised on the more legit sites, for fear of reputational taint to the larger site. And they tend to drift toward actual craziness over time, because they don't have public anchors for the discourse, and also because there is a very real correlation between the ability to seriously consider outlandish ideas and the propensity to over-focus on the useless or improbable.
↑ comment by Rudi C (rudi-c) · 2020-06-17T21:28:49.548Z · LW(p) · GW(p)
I have always assumed long-term pseudonyms will be traceable, but I have not seen much analysis or many datapoints on this. Do you have some links on that?
On coordinated attacks: can't a "recursive" karma system that assigns more weight to higher-karma users' votes, combined with a good moderation team and possibly an invite-based registration system, work? I think you're too pessimistic. Have many competent people researched this problem at all?
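A minimal sketch of the "recursive" weighting idea (purely illustrative: the log-weighting, iteration count, and example data are assumptions of mine, not a worked-out proposal):

```python
import math
from collections import defaultdict

# votes[post_id] = list of (voter_id, +1 or -1); authors[post_id] = author_id.
votes = {
    "p1": [("alice", +1), ("bob", +1), ("troll1", -1), ("troll2", -1)],
    "p2": [("carol", +1), ("bob", +1)],
}
authors = {"p1": "carol", "p2": "alice"}

def run(iterations: int = 10):
    karma = defaultdict(lambda: 1.0)  # everyone starts with the same baseline karma
    scores = {}
    for _ in range(iterations):
        # Score each post; each vote is weighted by the voter's current karma.
        scores = {
            post: sum(sign * math.log1p(karma[voter]) for voter, sign in vs)
            for post, vs in votes.items()
        }
        # Recompute authors' karma from their posts' scores (floored at a minimum).
        karma = defaultdict(lambda: 1.0)
        for post, score in scores.items():
            karma[authors[post]] = max(0.1, 1.0 + score)
    return dict(karma), scores

karma, scores = run()
print("post scores:", scores)
print("author karma:", karma)
```

In this toy run, the two zero-history accounts downvoting "p1" stop cancelling out the other votes once alice's karma, earned from her own well-received post, raises the weight of her upvote, which is the property the weighting is meant to provide against brigading.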
Dr. Hsu is now being "cancelled." He is using a Google Doc to gather signatures in his defense. That document was very hard to sign, possibly because of high genuine traffic or a DDoS attack. It's clear that we have no machinery for coordinating against cancellation. I am no expert, but I can already imagine a website that gathers academics and uses anonymized ring signatures to let them support their peers against attacks.
Honestly, the only real accomplishment I have seen in this area is Sam Harris's. He understood the danger early and communicated it to his followers, subsequently building his platform on direct subscriptions, which is somewhat "cancel-proof."
comment by Gordon Seidoh Worley (gworley) · 2020-06-16T15:26:32.891Z · LW(p) · GW(p)
Currently there are three active forks of LessWrong: itself, the Alignment Forum, and the EA Forum. Could adding a new fork, one more focused on good discourse on controversial/taboo topics, be a good idea?
Minor correction: the alignment forum isn't really a fork. Instead AF is like a subforum within LW.
comment by lsusr · 2020-06-16T21:51:18.381Z · LW(p) · GW(p)
I think anonymity will help a lot.
Less Wrong does not require real names. What's wrong with just using a pseudonym here?
↑ comment by FactorialCode · 2020-06-17T02:45:30.834Z · LW(p) · GW(p)
I'll quote myself [LW(p) · GW(p)]:
Many of the users on LW have their real names and reputations attached to this website. If LW were to come under this kind of loosely coordinated memetic attack, many people would find themselves harassed and their reputations and careers could easily be put in danger. I don't want to sound overly dramatic, but the entire truth seeking and AI safety project could be hampered by association.
That's why, even though I remain anonymous, I think it's best if I refrain from discussing these topics at anything except the meta level on LW. Even having this discussion strikes me as risky. That doesn't mean we shouldn't discuss these topics at all. But it needs to be in a place like r/TheMotte where there is no attack vector. This includes using different usernames so we can't be traced back here. Even then, Reddit's AEO team and the admins are technically weak points.
↑ comment by lsusr · 2020-06-17T03:49:31.217Z · LW(p) · GW(p)
Ah. Many people on Less Wrong use real names or traceable pseudonyms. If Less Wrong becomes associated with [unspeakable], then anyone who uses [traceable name] on Less Wrong could, by association, be threatened by a mob, regardless of whether [traceable name] in particular endorses [unspeakable], because terrorist mobs are not known for their precise discrimination of targets.
You illustrate this with a real-world example.
comment by Ericf · 2020-06-18T01:15:51.890Z · LW(p) · GW(p)
Suggestion: make it a wiki, not a forum. That way no author is associable with any given idea, and bad ideas can be called out in place, e.g.: "The Illuminati (note: there is no credible evidence that the Illuminati exist <link>) support the Democrat party (note: 'Democrat party' is a disrespectful term used by opponents of the United States Democratic Party) via sales of pot-laced baked goods."
↑ comment by ChristianKl · 2020-06-23T11:14:12.292Z · LW(p) · GW(p)
That's not how most wikis work in practice. You usually have norms under which bad content on wiki pages gets removed, or, when it appears on talk pages, gets argued against.