Open Problems in Archipelago
post by Raemon · 2019-04-16T22:57:07.704Z · LW · GW · 14 comments
Over a year ago, I wrote about Public Archipelago [LW · GW] and why it seemed important for LessWrong. Since then, nothing much has come of that. It seemed important to acknowledge that. There are more things I think are worth trying, but I've updated a bit that maybe the problem is intractable or my frame on it might be wrong.
The core problems Public Archipelago was aiming to solve are:
- By default, public spaces and discussions force conversation and progress to happen at the lowest common denominator.
- This results in a default of high-effort projects happening in private, where it is harder for others to learn from them.
- The people doing high-effort projects have lots of internal context, which is hard to communicate and get people up to speed on in a public setting. But internally, they can talk easily about it. So that ends up being what they do by default.
- Longterm, this kills the engine by which intellectual growth happens. It's what killed old LessWrong – all the interesting projects were happening in private, (usually in-person) spaces, and that meant that:
- newcomers couldn't latch onto them and learn about them incidentally
- at least some important concepts didn't enter the intellectual commons, where they could actually be critiqued or built upon
The solution was a world of spaces that were public, but with barriers to entry, and/or the ability to kick people out. So people could easily have the high-context conversations that they wanted, but newcomers could slowly orient around those conversations, and others could either critique those ideas in their own posts, or build off them.
Since last year, very few of my hopes have materialized.
(I think LessWrong in general has done okay, but not great, and Public-Archipelago-esque things in particular have not happened, and there's continued to be interesting discussion in private areas that not everyone is privy to [LW · GW])
I think the only thing that came close is some discussion on AI Alignment topics, which benefited from being technical enough to automatically have a barrier to entry, and created a discussion shaped in such a way that it was harder to drag it into Overton Window Fights.
The core problem is that maintaining a high-context space requires a collection of skills that few people have, and even if they do, it requires effort to maintain.
The moderation tools we built last year still require a lot of active effort on the part of individual users; that effort is kinda intrinsically aversive (telling people to go away is a hard skill and comes with social risks); and it also requires people to have ideas that are interesting enough in the first place to build a high-context conversation around.
The current implementation requires all three of those skills in a single person.
There are a few alternate implementations that could work, but each requires a fair amount of dev work, and meanwhile we have other projects that seem higher priority. Some examples:
- People have asked for subreddits for a while. Before we build that, we want to make sure that they're designed such that good ideas are expected to "bubble up" to the top of LessWrong, rather than stay in nested filters forever.
- Opt-in rather than opt-out moderation (i.e. people might have a list of collaborators, and only collaborators can comment on their posts, rather than a banned list). This is basically what FB and Google Docs do. (A rough sketch of what this could look like appears after this list.)
- I had some vague ideas for "freelance moderators". We give authors with 2000 karma the ability to delete comments and ban users, but this is rarely used, because it requires someone who is both willing to moderate and able to write well. Splitting those into two separate roles could be useful.
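To make the opt-in option concrete, here is a minimal sketch in TypeScript. The names (`Post`, `canComment`, `collaboratorIds`, `bannedUserIds`) are entirely hypothetical and are not drawn from the actual LessWrong codebase; the point is only to show how an allowlist-based check would differ from the current banlist approach.

```typescript
// Hypothetical data model: illustrative only, not LessWrong's actual schema.
interface Post {
  authorId: string;
  // Current (opt-out) model: anyone may comment unless the author has banned them.
  bannedUserIds: string[];
  // Proposed (opt-in) model: if present, only listed collaborators may comment.
  collaboratorIds?: string[];
}

// Returns true if the given user may comment on the given post.
function canComment(post: Post, userId: string): boolean {
  if (userId === post.authorId) return true;
  if (post.collaboratorIds !== undefined) {
    // Opt-in: the author names collaborators up front, as with Google Docs sharing.
    return post.collaboratorIds.includes(userId);
  }
  // Opt-out: everyone may comment except users the author has explicitly banned.
  return !post.bannedUserIds.includes(userId);
}
```

The point of the sketch is the flipped default: instead of having to notice and remove unwanted commenters (the socially costly part), the author names collaborators once and the border largely polices itself.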
I'm most optimistic about the second option.
I think subreddits are going to be a useful tool that I expect to build sooner or later, but they won't accomplish most-of-the-thing. Most of what I'm excited about is not subreddits by topic, but highly context-driven conversations with some nuanced flavor that doesn't neatly map to the sort of topics that subreddits tend to have. Plus, subreddits still mean someone has to do the work of policing the border, which is the biggest pain point of the entire process.
If I were to try the second option and it still didn't result in the kinds of outcomes I'm looking for, I'd update away from Public Archipelago being a viable frame for intellectual discourse.
(I do think the second option still requires a bit of effort to get right – it's important that the process be seamless and easy and a salient option for people. And thus, it'll probably still be a while before I have the bandwidth to push for it.)
14 comments
comment by habryka (habryka4) · 2019-04-16T23:25:34.778Z · LW(p) · GW(p)
In a broader sense, I do kind of feel like from a UI and culture perspective, we never really gave the Archipelago stuff a real shot. I do think we should make a small update that the problem can't just be solved by giving a bunch of people moderation power and allowing them to set their own guidelines, but I think I already modeled the problem as pretty difficult and so this isn't a major update.
We did end up implementing the AI Alignment Forum, which I do actually think is working pretty well and is a pretty good example of how I imagine Archipelago-like stuff to play out. We now also have both the EA Forum and LessWrong creating some more archipelago-like diversity in the online-forum space.
That said, I don't actually think this should be our top priority, though the last few weeks have updated me more towards a bunch of problems in this space being things we need to start tackling again soon. My current model is that the top priority should be more about establishing the latter stages of the intellectual progress funnel with stuff like Q&A, and that some of those things are actually more likely to solve a lot of the things that the Archipelago was trying to solve (as an example, I expect spaces oriented around a question to generate less conflict-heavy discussions, which I expect will make people more interested in writing up their ideas publicly. I also expect questions to more naturally give rise to conversations oriented around some concrete outcome, which I also expect to create a more focused atmosphere and support more archipelago-like conversations)
↑ comment by Raemon · 2019-04-16T23:33:37.052Z · LW(p) · GW(p)
Nod.
I had had thoughts re: Archipelago that were also more about in person communities, which in my mind were clustered together with the online stuff, and in both cases I think it turned out to be harder than I'd been imagining. (I do agree that re: online we never really gave it a fair shot)
I had been excited about things like the Dragon Army experiment, and I had vague plans to do something in a similar space of the form "establish an in-person space with higher standards."
Project Archipelago was originally a refactoring of Project Hufflepuff, designed to solve things at the more general level of "give people space to try hard things that the community doesn't currently incentivize", as opposed to "incentivize the specific cluster of Hufflepuff Virtues."
But what I found was that I didn't have much time for that. I might have had time in New York, where I was more of a community organizer than a person working fulltime on LW community stuff.
Basically, everyone who had the time and competence to do a good job with it... was working in the context of an organization with clearly defined goals.
I still think if you're a small-scale community organizer, Archipelago-esque approaches are probably better than Not That, but it's either going to be an incremental improvement at best, or you probably aren't going to stay a small-scale community organizer for long.
Realizing this changed a lot of my thoughts on how to go about the problem.
↑ comment by ryan_b · 2019-04-17T17:20:48.974Z · LW(p) · GW(p)
The bit about bundling in-person and online communities caused me to think of the Literature Review: Distributed Teams [LW · GW] post.
It feels to me like the same trust and communication mechanisms from distributed teams stand a good chance of applying to distributed communities. I'm tempted to take the Literature Review article and go back through the Old LW postmortem post to see how well the predictions match up. From this post:
Longterm, this kills the engine by which intellectual growth happens. It's what killed old LessWrong – all the interesting projects were happening in private, (usually in-person) spaces, and that meant that:
newcomers couldn't latch onto them and learn about them incidentally
at least some important concepts didn't enter the intellectual commons, where they could actually be critiqued or built upon
From the Distributed Teams post:
If you must have some team members not co-located, better to be entirely remote than leave them isolated. If most of the team is co-located, they will not do the things necessary to keep remote individuals in the loop.
I feel like modelling LessWrong as a Distributed Team with strange compensation might be a useful lens.
↑ comment by habryka (habryka4) · 2019-04-17T18:09:01.687Z · LW(p) · GW(p)
I actually made the same analogy yesterday while talking with some people about burnout in the EA and Rationality communities. I do think the models here apply pretty well.
comment by Evan_Gaensbauer · 2019-04-19T03:26:38.247Z · LW(p) · GW(p)
As someone who was inspired by your post from a year ago, and who was thinking of contributing to LessWrong as a public archipelago, here are some things that stopped me from contributing much. Maybe other people have these things in common with me, and these are the reasons they wanted to contribute in the last year but failed to.
1. There is less interest in the rationality community for the things I would be interested in writing about on LessWrong, or the rationality community is actively uninterested in things I am interested in writing about. This demotivates me from posting on LW. I am in private group chats and closed Facebook groups largely populated by members of the rationalist diaspora. These discussions don't take place on LessWrong, not only because relatively few people might participate there, but because they're discussions of subjects the rationality community is seen as hostile, indifferent, or uninterested in, such as many branches of philosophy. This discourages these discussions on the public archipelago. I expect there are a lot of people who don't post on LessWrong because they share this kind of perception. It's possible to find people with whom to have private discussions, but having those discussions on a public archipelago on LW, if it were possible to satisfy people, would make things easier and better from my viewpoint.
2. One particular worry I and others have is that, as more and more things in mainstream culture become politicized, more and more types of conversations on LW would be discouraged as 'politically mindkilling.' I personally wouldn't know what to expect the norms to be here, though I am not as worried as others because I don't see it as much of a loss for there to be fewer half-baked speculations on political subjects online. Still, the fear that the list of subjects discouraged as too overtly 'political' could grow endlessly is discouraging.
3. The number of people on LessWrong who are interested in the subjects I am interested in is too small to motivate me to write more. I haven't explored this as much, and I think I have been too lazy in not trying. Yet a decent quantity of feedback, of sufficiently engaging and deep quality, seems to me like what would motivate people I know to participate more on LW. One possibility is getting people I find who are not currently part of the rationality community, or who are not typical LW users, to read my posts on LW, and build something new out of it. I think this is fine to talk about, and I really agree with the shift since LW2.0 to develop LW as its own thing, still working with but distinct and independent from MIRI and AI alignment, CFAR, and the rationality community. So carving out new online spaces on LW, which maybe can be especially tailored given how much control I have over my own posts as a user, is something I am still open to trying.
comment by Said Achmiz (SaidAchmiz) · 2019-04-17T03:26:43.160Z · LW(p) · GW(p)
We give authors with 2000 karma the ability to delete comments and ban users, but this is rarely used, because it requires someone who is both willing to moderate and able to write well.
Well… there’s also the fact that the UI gives absolutely no indication of any of this.
In fact, after I read this line in your post (and vaguely remembered hearing about this before), I went over to LW, logged in, and tried to figure out how I would delete a comment. I… did not have any success. I haven’t the faintest idea how I’d delete someone’s comment, or how I’d know if I can, or… anything.
I think I figured out how to ban a user: by entering their name into the “Banned Users (All)” field on my account settings page. Is that right? (There was no tooltip or explanatory label or anything, so I can’t be sure…) If so, that’s extremely counterintuitive.
(By the way, I couldn’t even figure out how to delete one of my own comments. What am I missing…?)
↑ comment by habryka (habryka4) · 2019-04-17T03:38:22.330Z · LW(p) · GW(p)
Yeah, this is roughly what I meant by not really giving it a shot in terms of UI.
↑ comment by ioannes (ioannes_shade) · 2019-04-17T17:57:01.825Z · LW(p) · GW(p)
lol
comment by habryka (habryka4) · 2019-04-16T23:11:34.800Z · LW(p) · GW(p)
There is still the whole off-topic/on-topic thing that lots of people were excited about in the Archipelago thread. That still seems like plausibly a good idea to me.
comment by Chris_Leong · 2019-04-16T23:30:35.872Z · LW(p) · GW(p)
I'm very optimistic about sub-reddits - there are many examples such as AskPhilosophy, ChangeMyView and Slatestarcodex that demonstrate how powerful they can be. One major advantage of LW vs. Reddit is that it draws users from a different demographic. LW users are much less likely to stir up trouble or post low-quality comments, so there'll probably be minimal work policing boundaries.
↑ comment by Gordon Seidoh Worley (gworley) · 2019-04-16T23:59:23.690Z · LW(p) · GW(p)
Although I guess there's also the question of, why don't we just create an archipelago of subreddits on reddit if that's the direction we want to go? Just prepend the name of each subreddit with "LessWrong" and link them together somehow and be done with it.
I think we all know the answer, though: LW has certain standards and does a better job of keeping out certain kinds of noise than reddit does, even with active moderation. LW today attracts certain folks, deters others, and its boundaries make it a compelling garden to hang out in, even if not everyone agrees on, say, whether we should allow only flowering plants in our garden or if ferns and moss are okay.
I like the direction of having LW, the EA Forum, and the Alignment Forum be semi-connected; I would love it if the EA Forum functioned more like the Alignment Forum does in relation to LW, and I think it would be cool to potentially see one or two additional sites branch off if that made sense. But I also don't feel like there's enough volume here that I'd enjoy seeing us fracture too much, because there's a lot of benefit in keeping things together and exposing folks to things they otherwise might not see, because those things happen to be loosely connected enough to the things they do want to see that they end up encountering them. I enjoy stumbling on things I had no idea I would learn something from, but others are less open in this way and have different preferences.
↑ comment by Raemon · 2019-04-16T23:34:36.958Z · LW(p) · GW(p)
Maybe, but the specific reason I think this whole thing is necessary is that there are areas where LW already struggles to police boundaries, and I don't have much sense that subreddits would improve that.
↑ comment by Chris_Leong · 2019-04-17T03:08:02.760Z · LW(p) · GW(p)
Policing is only one aspect. Listing rules sets norms, and the effect of selecting for people with more than just a casual interest in a topic helps as well.