The silence is deafening – Devon Zuegel
post by Ben Pace (Benito)
This is a link post for https://devonzuegel.com/post/the-silence-is-deafening
Imagine you're at a dinner party, and you're getting into a heated argument. As you start yelling, the other people quickly hush their voices and start glaring at you. None of the onlookers have to take further action—it's clear from their facial expressions that you're being a jerk.
In digital conversations, giving feedback requires more conscious effort. Silence is the default. Participants only get feedback from people who join the fray. They receive no signal about how the silent onlookers perceive their dialogue. In fact, they don't receive much signal that onlookers observed the conversation at all.
As a result, the feedback you do receive in digital conversations is more polarized, because the only people who will engage are those who are willing to take that extra step and bear that cost of wading into a messy conversation.
It's a great post, and has a really solid UI idea in the footnotes.
One idea I'd really like to see platforms like Twitter or Reddit try is to provide a mechanism for low-friction, private, negative feedback. For example, you could imagine offering a button where you can downvote or thumbs-down content (i.e. the opposite of a Like), but the count is only visible to the OP and not to anyone else.
The LW team has been thinking about building private responses like this for a while, but in comment form. Buttons that give more constrained private info are very interesting...
Comments sorted by top scores.
comment by Jameson Quinn (jameson-quinn) ·
2020-07-04T02:54:49.287Z · LW(p) · GW(p)
What about trolls? What about pile-ons?
Trolls: some people are not upset by negative feedback or even actively seek it. I think this could be structured such that this negative feedback would not be rewarding to such people, but it merits consideration, because backfire is at least in principle possible.
Pile-ons: There are documented cases of organized downvote brigades on various platforms, who effectively suppress speech simply because they disagree with it. Now, I wouldn't object to a brigade of mathematicians on a proof wiki downvoting any pages they disagreed with and thereby censoring the pages or driving away their authors; but in most other cases, I think such brigades would be a problem. Again, you might be able to design a version that successfully discouraged such brigades (for instance: have "number of downvotes", "correlation with average downvoter", and "correlation with most-similar downvoter" all visible in someone's profile?), but it merits thought.
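[A sketch of how a profile stat like "correlation with most-similar downvoter" might be computed. This is purely illustrative: the function names and the use of Jaccard overlap (rather than a true correlation coefficient) as the similarity measure are assumptions, not anything the comment specifies.]

```python
def jaccard(a, b):
    """Overlap between two users' downvote histories (sets of post ids)."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def most_similar_downvoter(user, history):
    """A stand-in for 'correlation with most-similar downvoter':
    how closely does this user's downvoting track any other account's?
    A value near 1.0 would be a visible hint of brigade-like behavior."""
    others = (jaccard(history[user], votes)
              for name, votes in history.items() if name != user)
    return max(others, default=0.0)

# hypothetical downvote histories
history = {
    "alice": {1, 2, 3, 4},
    "bob":   {1, 2, 3},   # downvotes almost the same posts as alice
    "carol": {9},
}
print(most_similar_downvoter("alice", history))  # 0.75
```

Surfacing a number like this on the profile itself is what would make the deterrent work: the brigade's coordination becomes legible without exposing any individual vote.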
↑ comment by gwern ·
2020-07-04T17:26:17.108Z · LW(p) · GW(p)
I'm sure you could think of a dozen solutions to fill this out into a well-defined system if you spent 5 minutes thinking about it.
Zuegel's point is that you want some people to be able to express implicit or tacit disapproval in a less legible way than leaving a public criticism. To continue the dinner party analogy: you don't go to a dinner party with 10 people chosen at random from billions of people; they are your friends, relatives, coworkers, people you look up to, famous people etc. A look of disapproval or a conspicuous silence from them is very different from context collapse causing a bunch of Twitter bluechecks swarming your replies to crush dissent. So the question is who to choose.
You could, for example, just disable these implicit downvotes for anyone you do not 'follow', or anyone you have not 'liked' frequently. You could have explicit opt-in where you whitelist specific accounts to enable feedback. You could borrow from earlier schemes for soft-voting or weighting of votes like Avogadro: votes are weighted by the social graph, and the more disconnected someone is from you, the less their anonymous downvote counts (falling off rapidly with distance).
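[A minimal sketch of the last scheme: weighting an anonymous downvote by social-graph distance from the author, falling off rapidly with each hop. The follow-graph representation, the geometric decay factor, and the function names are all hypothetical choices made for illustration; gwern's comment only specifies the general shape.]

```python
from collections import deque

def downvote_weight(graph, voter, author, decay=0.5, max_depth=4):
    """Weight a downvote by the voter's distance from the author in the
    follow graph: one hop counts decay, two hops decay**2, and so on.
    Disconnected accounts (or the author themselves) contribute nothing."""
    if voter == author:
        return 0.0
    # breadth-first search outward from the author
    seen = {author}
    frontier = deque([(author, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist >= max_depth:
            continue
        for neighbor in graph.get(node, ()):
            if neighbor == voter:
                return decay ** (dist + 1)
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, dist + 1))
    return 0.0

# toy follow graph: the author follows a, and a follows b
graph = {"author": ["a"], "a": ["b"]}
print(downvote_weight(graph, "a", "author"))  # 0.5  (one hop away)
print(downvote_weight(graph, "b", "author"))  # 0.25 (two hops away)
print(downvote_weight(graph, "stranger", "author"))  # 0.0
```

The appeal of this shape is that a random brigade from outside your social graph sums to roughly zero, while a quiet downvote from someone you actually follow still registers.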
comment by ChristianKl ·
2020-07-05T07:08:22.302Z · LW(p) · GW(p)
It seems to me that LessWrong already has a downvote button, and that downvote button is effectively used to drive out content that the community doesn't want to see, in the way that's described.
↑ comment by rockthecasbah ·
2020-07-06T18:19:01.580Z · LW(p) · GW(p)
So far I have found the LW voting behavior instructive and reasonable. It seems like LWers vote on your epistemology rather than on the content of your post (whereas on Reddit, votes track the content). It's very cool.
↑ comment by Kenny ·
2020-07-07T18:53:28.779Z · LW(p) · GW(p)
I don't think the post was about LessWrong specifically (at all); think Twitter or Facebook or random blog comments.
Here on this site, yes, both downvotes and the absence of upvotes are strong, mostly-legible signals.
comment by Kaj_Sotala ·
2020-07-08T10:58:58.397Z · LW(p) · GW(p)
Interesting - I had previously been thinking about the problems that arise from there being so few approving looks on the Internet (upvotes, "likes" etc. are a step in that direction, but still not the same). It hadn't occurred to me to consider the reverse as well.