Doing discourse better: Stuff I wish I knew

post by dynomight · 2020-09-29T14:34:55.913Z · LW · GW · 11 comments

This is a link post for https://dyno-might.github.io/2020/09/29/doing-discourse-better-stuff-i-wish-i-knew/

Contents

  Talking about sugar
  Truth vs fitness
  Cialdini
  Reverse Cialdini
  Skepticism
  Dimensions of forums
  Ideas for forums
  Journals as steampunk forums
  Failure modes
11 comments

Talking about sugar

Nick and Maria want to talk about sugar. Does it reduce lifespan? They disagree, but are honest and principled. They want to find the truth and agree on it.

How should they talk? Take turns? Try hard to be polite and respectful? Allow interruptions? Search for origins of disagreement? Areas of agreement? Make bets? Get a moderator?

Or maybe Nick and Maria want to talk about tax-exemption for churches. It might be hard to get consensus, since this is a matter of opinion. Still, they'd like to reduce their disagreement to different underlying values. What should they do? Repeat the other's argument? Tell personal stories? Run ideological Turing tests?

Do conversations have known best practices? How much do they improve the odds of landing on the truth?

Truth vs fitness

In East Africa 70,000 years ago, humans took their first steps toward language. This didn't happen for the sake of truth; it happened to increase reproductive fitness. Overly open-minded people were probably easy to manipulate. Scrupulously honest people were probably bad at manipulating. Many false beliefs still enhanced reproductive fitness (typically by improving cooperation).

So, one might argue, of course we are terrible at finding truth via conversation! Why are we surprised that our instincts are bad at something we never evolved to be good at?

Cialdini

Doing some research, Maria finds the work of Robert Cialdini, who says we are persuaded by reciprocity, scarcity, authority, consistency, liking, and consensus. But these don't seem useful to Maria. They are mostly "dark patterns" of persuasion that work on anything, regardless of its truth.

Instead, are there "non-dark" patterns of persuasion that work well for true stuff but not for false stuff?

Reverse Cialdini

Over a few months, Nick and Maria have a dozen of these conversations. The results are exactly what you'd expect: No one ever changes their mind about anything. Maria has a feeling she can't be right about everything. But try as she might, she can't find a case where Nick's arguments convince her.

Eventually, she has a strange idea. Maybe she should fight fire with fire, cognitive-bias-wise. She decides that in each discussion they should both intentionally subject themselves to as much dark-pattern manipulation as possible. Her theory is that there is an "energy barrier" that prevents them from getting into a mind-space where it's even possible to appreciate the other position fairly.

By temporarily brainwashing themselves a little bit, can they actually give the other position a fair chance?

Skepticism

It's reasonable to be skeptical about the above ideas. Humans have been talking for 70,000 years. If there were a trick to talking better, wouldn't we have found it already? I showed some of the above thoughts to a friend, who responded, "LOLOLOLOLOL, dynomight, you sweet wide-eyed gazelle. The rare Nicks and Marias of the world do great already. The problem is dishonest, unprincipled people."

So perhaps better conversation can't save the world. Still, I think a different type of discourse has much more room for improvement: discourse in online forums. It sounds hyperbolic, but I genuinely believe this could move the needle on the future of humanity.

Just think about it. The world is a complex place. Want to understand the effects of the minimum wage? The long-term trajectory of population growth? The effects of vitamin C? No one human brain can process all the relevant information for such questions. But shouldn't groups of brains, if we could "network" them properly, have much greater capabilities?

Dimensions of forums

Online forums have various design dimensions.

It's hard to understand the influence of most of these choices, since popular forums vary along many dimensions at the same time. Most forums today are optimized for the goal of "make forum owners rich." We don't know the decision-making process, or what tradeoffs would be made with different goals.

I don't think it's possible to sit around and figure out what effects a given forum design will have. Human beings and social behavior are too complex. We need to systematically test the different designs, and see what actually happens empirically. Is anyone doing that?
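To make "systematically test" slightly more concrete, here is a minimal sketch of the kind of randomized experiment I have in mind. Everything in it is hypothetical: the design dimensions, the assignment scheme, and especially the quality metric (actually measuring discussion quality is the genuinely hard part).

```python
import random
import statistics

# Hypothetical design dimensions; the real list would be much longer
# (anonymity, threading, voting, moderation rules, ...).
DESIGNS = [
    {"anonymity": True,  "threading": "flat",   "votes_visible": True},
    {"anonymity": False, "threading": "nested", "votes_visible": False},
]

def measure_thread_quality(design):
    """Stand-in for a real measurement, e.g. blinded rater scores of a thread.
    Here it's pure noise, because the whole point is that we don't know the
    true effects in advance."""
    return random.gauss(0, 1)

def run_experiment(threads_per_arm=500):
    """Randomly assign each new thread to a design variant, then compare
    the average measured quality per arm."""
    results = {i: [] for i in range(len(DESIGNS))}
    for _ in range(threads_per_arm * len(DESIGNS)):
        arm = random.randrange(len(DESIGNS))
        results[arm].append(measure_thread_quality(DESIGNS[arm]))
    for arm, scores in results.items():
        print(DESIGNS[arm], "mean quality:", round(statistics.mean(scores), 3))

run_experiment()
```

The hard parts are hidden inside `measure_thread_quality` and in getting enough comparable threads; the randomization itself is the easy bit.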

Ideas for forums

Here are some ideas:

These ideas are all probably terrible. I'm just trying to say that there are a lot of possibilities, and some of them are surely good.

Suppose we could look 30 years into the future. Forums will no doubt look very different. Probably some of the differences rely on exogenous technological innovation. But surely some differences could be "back-ported" to today, if only we knew what they were. What are they, and why is the pace of innovation in online forums so slow right now?

Journals as steampunk forums

If we want a historical example of "public forum with rules and norms carefully derived to find truth" I think the best we can do is the system of journal publications. For all their imperfections, these have done a decent job of uncovering truth for several hundred years. What lessons do these offer?

I think for our purposes, the three biggest differences vs. online forums today are:

First, journals have a crazy focus on credit attribution. There is a formalized system of citations. Reviewers check that the claimed new ideas in papers really are new, and that credit (citations) has been provided to all (most? some?) related work. One paper can reply to (cite) many other papers. (Journals share this with 4chan!)

Second, journals provide strong quality signals coupled with heavy "moderation". In most fields there is a fairly clear status hierarchy. The moderators (reviewers/editors) at top journals spend a lot of effort moderating, and are very picky about what they accept. This provides a strong quality signal for the papers that are accepted.

Third, there are extremely strong external incentives for people to appear in the top journals. (Ultimately, jobs and money are on the line.)

There are other differences (e.g. journals are slow and cost money), but I think the three above are the most significant. What would happen if we added these to online forums today? Unfortunately, it's a bit hard to say. These wouldn't be easy to copy. The first is laborious, and the others are as much properties of the society the journal is embedded in as of the journal system itself.

Failure modes

Maybe figuring out how to improve forums is hard. As a first step, maybe we can at least understand where things go wrong? Here are some proposed failure modes.

The user death spiral. Some cool people start a forum and have cool conversations. Random trolls show up sometimes, but they are easily banned. Eventually, some not-quite-as-cool people find their way to the forum. They aren't misbehaving and make some good points sometimes. It feels tyrannical to ban them, so no one does. Still, the coolest people become slightly more likely to drift away. New very-cool people become slightly less likely to join. Eventually the median shifts enough that barely-cool-at-all people are joining. Gradually, the average coolness of people in the forum decreases to zero.
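As a toy illustration of that selection dynamic (a sketch with made-up numbers, not a model of any real forum): give each user a "coolness" score, make cooler-than-average users slightly more likely to drift away, and draw newcomers from just below the current average. The average drifts steadily down.

```python
import random
import statistics

def simulate_death_spiral(years=30, seed=0):
    random.seed(seed)
    users = [random.gauss(1.0, 0.1) for _ in range(100)]  # the founding cohort
    for year in range(years):
        avg = statistics.mean(users)
        # Cooler-than-average users are a bit more likely to drift away.
        users = [u for u in users if random.random() > 0.2 * max(0.0, u - avg)]
        # Newcomers are drawn from just below the current average.
        users += [random.gauss(avg - 0.05, 0.1) for _ in range(10)]
        if year % 5 == 0:
            print(f"year {year:2d}: average coolness {statistics.mean(users):.2f}")

simulate_death_spiral()
```

No single year looks alarming, which is the point: each cohort is only slightly less cool than the last.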

The tyranny of the minority. Human experience is vast. There are people out there who truly, passionately believe that bestiality should be legal and accepted. Some are smart and compelling writers. Almost all forums block these people. You don't, figuring that you believe in free speech, these people are a tiny minority, and the truth will emerge as they argue with the majority. Suddenly, in every thread people are finding connections to the "injustice" of the current prohibition on bestiality. Why? Because every other high-status place on the internet prohibited these people, and they've all been funneled to you.
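The funneling effect is just arithmetic. A toy version with made-up numbers: if 0.1% of people hold the fringe view, and there are 100 comparable forums but yours is the only one that allows the topic, the fringe ends up wildly over-represented with you.

```python
def fringe_share(fringe_frac=0.001, n_forums=100):
    # Everyone with the fringe view lands on the one forum that allows it;
    # everyone else spreads evenly across all n_forums.
    mainstream_here = (1 - fringe_frac) / n_forums
    return fringe_frac / (fringe_frac + mainstream_here)

print(f"{fringe_share():.0%} of your forum")  # roughly 9%, from 0.1% of the population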

The village becomes anonymous. In small forums, you see the same people repeatedly. This has two advantages: A) You know people will remember you, and you want them to be nice to you, so you try not to act like a jerk. B) After seeing the same people for a while, you have some context for their comments. The conversation can actually evolve and grow over time. After a certain number of users, each comment must stand alone. The forum has "amnesia". Why be nice or clarify your argument when no one will remember what happened?
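A back-of-the-envelope way to see where the "amnesia" sets in (my framing, with made-up numbers): if each thread's commenters are drawn roughly at random from N active users, the chance that a random commenter in a new thread has ever shared a thread with you falls off quickly as N grows.

```python
def seen_you_before(n_users, your_threads=50, commenters_per_thread=10):
    # Probability that a random commenter in a new thread appeared in at least
    # one of your previous threads, assuming commenters are drawn uniformly.
    p_one_thread = commenters_per_thread / n_users
    return 1 - (1 - p_one_thread) ** your_threads

for n in (50, 500, 5000, 50000):
    print(f"{n:6d} users: {seen_you_before(n):.1%}")
```

At 50 users essentially everyone has context on you; at 50,000 essentially no one does.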

The dark shift. You run a nice happy forum, with a fixed set of users. One day, Maria is in a bad mood and writes a funny but slightly dismissive response. Since it's funny, everyone laughs. The target of her comment resents it and eventually finds a way to score points back on Maria. Others notice and start imitating this behavior. These comments get more common, and then meaner. People become very cautious when posting, making sure they don't leave an opening to attack. In fact, why make a constructive argument at all? Much easier to find someone on the other side and attack them! Nuanced discussion becomes impossible.

The Necessary Despot. You run a forum. It gets overrun with bestiality people. You ban them. The users start getting rude. You ban them. The banned users come back. You find sneaky ways to make them invisible and let them scream into the void instead. Some topics are popular but always lead to your users arguing. You ban those topics. The forum survives, but you work all the time enforcing your tyrannical-seeming constraints. Users grumble. You drink too much and worry the forum just isn't that interesting anymore.

The tidal wave. You're some kind of genius. No one knows how you do it, but your forum grows and grows while maintaining quality. Every comment is well-reasoned, polite, insightful, and hilarious. Still, things feel slightly out of control. Every thread has thousands of comments. The discussion naturally breaks up into different sub-topics. You notice that the same sub-topics emerge in different parts of a thread. These conversations often focus on different data and, ominously, arrive at different conclusions. You try to focus threads better, confining discussion to smaller sub-topics. But the comments just keep coming. Each sub-topic is re-proposed again and again. Each thread breaks into sub-sub-sub-sub topics. Some users valiantly try to link related discussions, but they can't keep up. You have growing doubts about the existence of objective truth and start having nightmares where hordes of comments spill over the walls of your bunker.

It's fun to speculate about how badly different forums suffer from each of these modes. Here are my totally subjective rankings of a few. (Here I'm picking on Marginal Revolution just as an example of "what always happens when an old-school blog with high quality content uses laissez-faire moderation.")

| Failure Mode | Marginal Revolution | Hacker News | /r/slatestarcodex | Twitter |
|---|---|---|---|---|
| User-quality death spiral | 💀💀💀 | 𐄂 | 💀 | 𐄂 |
| Tyranny of the minority | 💀 | 𐄂 | 𐄂 | 𐄂 |
| Village becomes anonymous | 𐄂 | 💀 | 💀 | 💀 |
| Dark shift | 💀💀 | 𐄂 | 𐄂 | 💀💀💀 |
| Necessary despot | 𐄂 | 💀💀 | 💀💀 | 𐄂 |
| Tidal wave | 𐄂 | 💀💀 | 💀 | 💀💀💀 |

Are these the most common failure modes? I don't know. And really, what are the answers to all the questions in this post? I don't know! But we should keep re-asking important questions until we have answers.

11 comments

Comments sorted by top scores.

comment by Dagon · 2020-09-29T18:30:38.266Z · LW(p) · GW(p)

Beware mixing up different kinds and purposes of communication.  Your friend's LOLOLOLOLOL is understating the complexity by a long way.  

For two-person conversations, where both (claim to be) seeking truth rather than signaling dominance or quality, and where both are reasonably intelligent and share a lot of cultural background, and where there's time and willingness to invest in the topic, https://www.lesswrong.com/tag/double-crux [? · GW] is an awesome technique.  Very often you won't resolve the answer, but you'll identify the un-resolvable differences in model or weight of utility you each have.  And you'll be able to (if you're lucky) identify portions of the topic where you can actually change your beliefs (and your partner may change some beliefs as well, but it's important for this not to be a goal or a contest - it doesn't matter who started out more wrong, if you can jointly be less wrong).

Where these conditions do not hold (more than two people, some participants less committed to truth-seeking, no face-to-face communication to help reinforce the purpose of this part of the relationship, not everyone with similar background models or capability of understanding the same level of discussion, etc.), the mix between truth-seeking and signaling changes, and there is a tipping point at which truth-seeking becomes obscured. Your failure mode list is not sufficient, even if we had working counters to them - there are unique modes for every site, and they blend together in different ways over time. To paraphrase Tolstoy: great communities are all alike; every bad community fails in its own way.

I recommend you also include temporal value in your analysis of success or failure of a site/community/forum.  Even if the things you list do succumb to death spirals, they were insanely valuable successes for a number of years, and much of that value remains long after they stop generating very much good new discussion.  

comment by dynomight · 2020-09-29T19:20:53.726Z · LW(p) · GW(p)

Totally agree that the different failure modes are in reality interrelated and dependent. In fact, one ("necessary despot") is a consequence of trying to counter some of the others. I do feel that there's enough similarity between some of the failure modes at different sites that it's worth trying to name them. The temporal dimension is also an interesting point. I actually went back and looked at some of the comments on Marginal Revolution posts from years ago. They are pretty terrible today, but years ago they were quite good.

comment by sen · 2020-09-30T03:09:50.656Z · LW(p) · GW(p)

Logic and reason indicate the robustness of a claim, but you can have lots of robust, mutually-contradictory claims. A robust claim is one that contradicts neither itself nor other claims it associates with. The other half is how well it resonates with people. Resonance indicates how attractive a claim is through authority, consensus, scarcity, poetry, or whatever else.

Survive and spread through robustness and resonance. That's what a strong claim does. You can state that you'll only let a claim spread into your mind if it's true, but the fact that it's so common for two such people to hold contradictory claims indicates that their real metric is much weaker than truth. I'll posit that the real metric in such scenarios is robustness.

Not all disagreements will separate cleanly into true/false categorizations. Gödel proved that one.

comment by ChristianKl · 2020-09-30T13:43:13.075Z · LW(p) · GW(p)

Sometimes people have useful ideas, but give them in a long, boring, hard-to-read form. Can we allow users to edit each other's content?

You need a lot of shared agreement for that to work well. Wikipedia technically allows users to edit each other's content in discussions, but has strong social norms against doing so just because someone expressed themselves in a way that's too long. On the other hand, StackOverflow does allow editing of questions and answers.

comment by Zachary Robertson (zachary-robertson) · 2020-09-30T13:51:20.322Z · LW(p) · GW(p)

It could still be useful to see different ‘versions’ of an article and then just vote on the ones that are best.

comment by ChristianKl · 2020-09-30T22:30:40.484Z · LW(p) · GW(p)

When it comes to a forum like this, it's important to incentivise people who write posts. Part of the incentive is that people control the posts they write to say what they want to say. A system that works like Google Docs where the author can choose to accept or deny requests for change would likely work better.

comment by Zachary Robertson (zachary-robertson) · 2020-10-01T12:48:18.789Z · LW(p) · GW(p)

Yes, but StackExchange has community posts that are editable, and I think this is nice. I believe edits for normal posts work like you say.

comment by Jalex Stark (jalex-stark-1) · 2020-09-29T20:05:55.274Z · LW(p) · GW(p)

Kialo is some kind of attempt to experiment with the forum dimension stuff.

(EDIT: I don't know how to make external links in the LW dialect of markdown.)

comment by habryka (habryka4) · 2020-09-29T20:15:54.580Z · LW(p) · GW(p)

Fixed your link. Just make sure to include the "http://" in your link.

comment by Pattern · 2020-09-30T14:58:46.069Z · LW(p) · GW(p)

These ideas are all probably terrible. I'm just trying to say that there are a lot of possibilities, and some of them are surely good.

I thought those ideas were good. (A design might not serve the intended purpose, but it might have other benefits/work well doing something else.)

 

journal publications. For all their imperfections, these have done a decent job of uncovering truth for several hundred years. What lessons do these offer?

It might be good to look at the other systems around that, which might have also led to progress (correspondence, coworkers, collaboration, conferences, not to mention education, by others or by oneself, just to name a few).

comment by zby · 2020-09-30T05:40:34.560Z · LW(p) · GW(p)
  1. There is also the universal Girardian mimetic failure mode. It is a spiral of ever-increasing desire for things and status, where we want things because someone else wants them. I once wrote an essay on that in the context of internet discussions: https://blog.p2pfoundation.net/online-conflict-in-the-light-of-mimetic-theory/2009/11/25

  2. Another failure mode: the replication crisis in science, where only new and surprising theses get published, but there is no mechanism for reinforcing existing theories. This also happens in social media: people always want to learn new things. And probably more generally all the other things from https://www.gwern.net/Littlewood

  3. https://www.lesswrong.com/posts/ZdtFBCtixqay5aoWF/design-thoughts-for-building-a-better-kind-of-social-space [LW · GW]