David Bravo's Shortform
post by David Bravo (davidbravocomas)
Comments sorted by top scores.
comment by David Bravo (davidbravocomas) ·
2023-04-06T09:14:37.198Z · LW(p) · GW(p)
I have been using the Narwhal app for the past few days, a social discussion platform "designed to make online conversations better" that is still at its prototype stage. It basically works like this: there are several topics of discussion posted by other users, formulated with an initial question (e.g. "How should we prioritise which endangered species to protect?" or "Should Silicon Valley be dismantled, reformed, or neither?") and a description, and you can comment on any of them or reply to others' comments. You can also suggest your own discussions.
Here are my initial impressions:
- Like writing a post or commenting on LessWrong, it is in a way a tool that helps you think. Expressing ideas helps to form them. Having a space for answering a specific question motivates you to think about it, in a depth you wouldn't have reached otherwise. It inspires you to expand your web of original ideas.
- So far there's a high level of respect in the conversations. This could be due to how the app is designed: discussion topics have to be approved by the Narwhal team, comments might be moderated, comments need to be 140+ characters long... Or it could be because it's just starting: there are possibly no more than 10 users actively commenting, one or several of the most prominent ones are part of the Narwhal team, most of the rest joined precisely because we were pursuing a platform for respectful discussions, and only around 2 discussions are posted each day. Or it could be a combination of both. It remains to be seen how this will change when more people, not all well-intentioned, join in, with multiple discussions happening at the same time and users unable to keep track of them all. So far the algorithm hasn't really been put to the test.
- Respect is promoted, but I haven't seen many disagreements or a collaborative practice of working to uncover cruxes and change one's beliefs. You can give 'Aha's to enlightening comments or rate them as 'provocative', 'clarifying' or 'new to me', but this is all positive feedback that doesn't encourage changing your mind. There are no negative options to signal disagreement or a conflict of points of view that needs to be resolved.
- Relatedly, my main concern is that there aren't strong enough incentives for rational replies that allow for real progress on key questions. The number of 'Aha's a user has ever received only appears once you click on their profile, not when they leave a comment, so having accumulated many 'Aha's doesn't mean much. Most importantly, you don't get penalised for making a bad or untruthful comment. It can presumably get deleted by the moderators if it is disrespectful, but if you fail to clarify a point, argue irrationally from your trench, or are overly biased, it might be irrelevant to your reputation. The incentive is for the comment to be good enough, not as good as it can be. Hence, I much prefer LessWrong's options to upvote or downvote comments and to state how much you agree or disagree with them, with positive and negative karma.
- There are several other features missing, such as direct messages to continue conversations in private, the ability to filter or search for topics, or to follow other people or topics. But I suppose the Narwhal team already has these in mind.
- Besides, the quality of responses on AI, especially AI alignment and safety, seems a bit low, at least compared to the LW community.
All in all, it feels like a quiet, peaceful plaza for discussions, an arena for people looking for a respite from the lousy environments of conventional social media like Twitter. But I have questions about whether it really motivates progress or whether it's mostly signalling and trying to sound convincing, and about how much can be extrapolated from this initial experience to how the app will fare as it scales up.