How ForumMagnum builds communities of inquiry
post by Jim Fisher (james-fisher) · 2023-09-04T16:52:11.037Z · LW · GW · 21 comments
The website you're currently using is powered by ForumMagnum. But what really is ForumMagnum? What is it for, and why was it designed this way? In this post, I cast ForumMagnum as a medium for building communities of inquiry. I show how ForumMagnum is designed to build norms like rationality and long-form. Lastly, I suggest how the ForumMagnum developers could use the body of CoI research to guide their future product design.
What is ForumMagnum for?
ForumMagnum describes itself as "the codebase powering LessWrong and the Effective Altruism Forum."[1] That's the what ... but what's the why? Here's why I believe ForumMagnum exists:
ForumMagnum is a medium for building online communities of inquiry. It's a web forum that embeds norms of rationality and long-form comms.
That is: ForumMagnum is not defined by its users, or its features, or its codebase! ForumMagnum is a medium designed to carry a message, and its message is a set of social norms. Let's see what those norms are, and how ForumMagnum's features are designed to build those social norms.
What are Communities of Inquiry?
In one of the most-cited papers of all time, Garrison et al. set out to "investigate the features of the written language used in computer conferences [e.g., forums] that seem to promote the achievement of critical thinking."[2] They built on the concept of a Community of Inquiry (CoI), which Wikipedia nicely defines:
The community of inquiry (CoI) is a concept first introduced by early pragmatist philosophers C. S. Peirce and John Dewey, concerning the nature of knowledge formation and the process of scientific inquiry. The community of inquiry is broadly defined as any group of individuals involved in a process of empirical or conceptual inquiry into problematic situations. This concept was novel in its emphasis on the social quality and contingency of knowledge formation in the sciences ...[3]
The LessWrong community, the EA community, the Alignment community, the Progress community — what unites them? It's not just that each community uses ForumMagnum! It's that each community is a CoI. Let's see how each community describes itself.
LessWrong is an online forum/community that was founded with the purpose of perfecting the art of human rationality. ... LessWrong is a good place for [those] who want to work collaboratively with others to figure out what's true.[4]
Effective altruism is ... both a research field, which aims to identify the world’s most pressing problems and the best solutions to them, and a practical community that aims to use those findings to do good.[5]
The Alignment Forum is a single online hub for researchers to discuss all ideas related to ensuring that transformatively powerful AIs are aligned with human values.[6]
The Progress Forum is ... a place for long-form discussion of progress studies and the philosophy of progress. ... The broader goal is to share ideas, strengthen them through discussion and comment, and over the long term, to build up a body of thought that constitutes a new philosophy of progress ...[7]
None of these communities explicitly identifies as a Community of Inquiry. Nevertheless, each community neatly fits the definition (at least, aspirationally!). Each community emphasizes its own problematic situation, to be solved with inquiry that is best done socially.
How ForumMagnum builds Communities of Inquiry
Since they are all communities of inquiry, the ForumMagnum communities share similar norms.[8] They're nicely summarized below each comment box:
- Aim to explain, not persuade
- Try to offer concrete models and predictions
- If you disagree, try getting curious about what your partner is thinking
- Don’t be afraid to say ‘oops’ and change your mind
But how does ForumMagnum build these norms? There's one norm-building technique everyone's aware of: moderation, i.e. manual brute force. Since that technique is so well known, I'll focus in this post on subtler techniques.
Technique 1: State your norms
We actually just met the simplest norm-building technique. I call it state-your-norms. The norms are written below the comment box, shown at exactly the right time. I believe this technique is extremely powerful. I'm willing to bet that it's as influential as all manual moderation.
Yet it's surprisingly under-used! I bet your enterprise Slack or Notion has no such norms stated anywhere, let alone next to the comment boxes.
Technique 2: Friction
Speaking of that comment box: consider the behavior of the Enter key there. There are two viable behaviors Enter could have: either to post your comment, or to create a new paragraph. In ForumMagnum, Enter creates a new paragraph, and there is no keyboard shortcut for “Submit”. This is deliberate.
If the Enter key sends my message, I’ll write short messages. But if it creates a new paragraph, I’ll write longer comments. This product design builds the norm of long-form, async communication. This is an important norm on these sites, although not usually made explicit.
This is an example of a norm-building technique that I call friction. The word “friction” in UX design is often used negatively, but friction is a powerful way to steer users towards desired behavior! ForumMagnum uses several frictions to build the long-form, async norm. Notice there are no realtime notifications, and timestamps are only accurate to the hour.
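To make the friction concrete, here's a hypothetical sketch of the Enter-key handling, assuming a plain DOM editor. It illustrates the technique; it is not ForumMagnum's actual implementation:

```typescript
// Hypothetical sketch, not ForumMagnum's actual code.
const commentBox = document.querySelector<HTMLElement>("#comment-editor")!;

function insertNewParagraph(): void {
  // Assumed helper: extend the draft with a new paragraph instead of sending it.
  commentBox.appendChild(document.createElement("p"));
}

commentBox.addEventListener("keydown", (event: KeyboardEvent) => {
  if (event.key === "Enter") {
    event.preventDefault(); // swallow the default behavior
    insertNewParagraph();   // Enter means "keep writing", not "send"
  }
  // Deliberately no Ctrl/Cmd+Enter submit branch: posting requires reaching
  // for the Submit button, a little friction that favors longer comments.
});
```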
Technique 3: Controlled feedback loops
An aside: I bet you’ve experienced what I call Shift-Enter anxiety. Visiting a new app, will the Enter key create a new paragraph, or will it prematurely send my message? With this micro-stress, I must make a guess: does it look like this app wants long-form writing? Is the input box large? Are other people posting multiple paragraphs? If so, I hit Enter, and pray for a new paragraph.
That example shows norming feedback loops at play. After I post my longer comment, I help build the norm that this is a place for long-form. Nudge theory told us that defaults are powerful. But in multi-user platforms, defaults are all-powerful. As @Viliam [LW · GW] wrote, you become the UI you use [LW · GW].
Media designers can strengthen feedback loops with voting. This lets users reinforce norms. But it's risky: if a bad behavior becomes a norm, it will also be reinforced by voting systems!
One way ForumMagnum guards against runaway feedback loops is with named reactions. In most apps, you can vote/react with emojis. But emojis can be very ambiguous (even causing legal issues!). In contrast, ForumMagnum's reactions have labels, like: “Changed My Mind”, “Insightful”, or “Good Facilitation”. This incorporates the state-your-norms technique, by embedding the norms in the reactions. It also incorporates the friction technique, by making it harder to react in undesired ways (for example, there's no "Too Long, Didn't Read" reaction).
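To make this concrete, here's a hypothetical sketch of a labeled-reaction palette as data. The names and fields are my illustration; the real palette in ForumMagnum differs:

```typescript
// Hypothetical sketch of a labeled-reaction palette, not ForumMagnum's actual data.
interface NamedReaction {
  id: string;
  label: string;       // the norm, stated right in the UI
  description: string; // hover text that pins down the intended meaning
}

const reactionPalette: NamedReaction[] = [
  { id: "changedMind", label: "Changed My Mind", description: "This comment updated my view." },
  { id: "insightful", label: "Insightful", description: "This taught me something new." },
  { id: "goodFacilitation", label: "Good Facilitation", description: "This moved the discussion forward." },
  // Note what's absent: there is no "Too Long, Didn't Read" entry.
  // Undesired reactions simply have no button: friction by omission.
];
```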
Technique 4: Zoning
I'll end with a norm-building technique that I call zoning. Counter-intuitively, one way to build a norm is to build a feature for its opposite norm! It's like how urban planners try to move the pollution to an industrial zone, or the prostitution to a red-light district. LessWrong has two such zoning features.
One zoning feature is called Shortform. “Exploratory, draft-stage, rough, and rambly thoughts are all welcome on Shortform.” Implicit in this description is that such content is not generally welcome elsewhere. The “shortform” feature says: “Normal posts are long-form and carefully edited.”
Another zoning feature is called agreement voting. So many other forums are plagued by groupthink. It's a malignant norm. ForumMagnum attempts to zone the groupthink by moving it to a separate voting axis. The normal voting axis is the traditional “How much do you like this?”. The second axis is: “How much do you agree with this, separate from whether you think it’s a good comment?”. The “agreement voting” feature says: “Normal voting should not consider agreement.”
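To make the two-axis idea concrete, here's a hypothetical sketch of agreement voting as data. The field names and tallying are my illustration, not ForumMagnum's actual schema:

```typescript
// Hypothetical sketch of two-axis voting; field names are illustrative.
interface CommentVote {
  userId: string;
  commentId: string;
  overall: -1 | 0 | 1;   // "How much do you like this?" (feeds karma)
  agreement: -1 | 0 | 1; // "How much do you agree?" (shown separately; no karma effect)
}

// Because agreement has its own tally, disagreement doesn't have to drag a
// good comment down, and agreement doesn't have to push a bad one up.
function tallyVotes(votes: CommentVote[]): { karma: number; agreement: number } {
  return {
    karma: votes.reduce((sum, v) => sum + v.overall, 0),
    agreement: votes.reduce((sum, v) => sum + v.agreement, 0),
  };
}
```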
How ForumMagnum can use CoI research
Anyway, back to Garrison et al., and their community's 25-year-long investigation into how text-based media can promote rationality. How can CoI help ForumMagnum and its communities?
The CoI community provides a framework for defining inquiry, and it has some great insights: for example, that social presence (the ability to present yourself as a "real person") is essential to inquiry. I suspect that the lack of social presence [LW(p) · GW(p)] was one reason for the failure of Arbital [LW · GW] (a sort of wiki-structured LessWrong).
The CoI community provides a survey for measuring inquiry, and attempts to measure inquiry using AI. The idea of "measuring intangibles" is central to the ForumMagnum communities. Despite this, each community defines its aspirational norms, but has never measured whether they are in fact norms.
The CoI community provides empirical research into how to build inquiry. For the practically minded, try Holly Fiock's big list of concrete recommendations!
How is ForumMagnum really developed?
I described all this as an outsider looking in. How do the ForumMagnum developers really approach product design? I don't know! I believe most development is done by the Centre for Effective Altruism. If any of the developers are around, I'd love to hear their view!
- ^
Quoted from the ForumMagnum README.
- ^
Garrison et al., "Critical Inquiry in a Text-Based Environment". Found at position 6884 on Lens.org's list of works sorted by citation count.
- ^
Wikipedia, "Community of inquiry"
- ^
LessWrong, "New User's Guide" [LW · GW]
- ^
EA Forum, "What is effective altruism?" [? · GW]
- ^
AI Alignment Forum, "Welcome & FAQ" [AF · GW]
- ^
Progress Forum, "About us and FAQ"
- ^
For more detail, see an unofficial LessWrong norm list [LW · GW], the official EA norms [EA · GW], and the Progress Forum participation guide. I think if you extract the commonalities, you get something like the guidelines below the comment box.
21 comments
Comments sorted by top scores.
comment by Ruby · 2023-09-04T19:53:42.108Z · LW(p) · GW(p)
Very interesting! You've identified many of the reasons for many of the decisions.
Cmd-enter will submit on LW comment boxes.
Forum Magnum was originally just the LessWrong codebase (built by the LessWrong Team that later renamed/expanded as Lightcone Infrastructure), and the EA Forum website for a long while was a synced fork of it. In 2021 we (LW) and EA Forum decided to have a single codebase with separate branches rather than a fork (in many ways very similar, but reduced some frictions), and we chose the name Forum Magnum for the shared codebase.
You can see who's contributed to the codebase here: https://github.com/ForumMagnum/ForumMagnum/graphs/contributors
jimrandomh, Raemon, discordious, b0b3rt and darkruby501 are LessWrong devs.
↑ comment by Jim Fisher (james-fisher) · 2023-09-05T07:17:15.147Z · LW(p) · GW(p)
Thank you! (I just submitted this reply too early by trying Cmd-Enter. I suggest that this feature is deliberately hidden to discourage its frequent use :)
Ah yes, I saw that the original LW was actually based on Reddit! It would be very interesting to see the original discussions showing the motivations for developing ForumMagnum. For example, did the Reddit-based forum lead to some undesirable norms?
I hadn't heard of Lightcone Infra. Their LessWrong page is another clue about how ForumMagnum is really developed. Sounds like they're thinking along similar lines - media optimized for inquiry/rationality.
↑ comment by Viliam · 2023-09-05T14:09:51.842Z · LW(p) · GW(p)
One big motivation for switching from Reddit codebase was that we had a dedicated spammer we couldn't stop. Imagine someone creating hundreds of accounts, upvoting himself, downvoting people he didn't like. The existing moderation tools were insufficient; fighting this one person wasted a lot of moderator time. We needed solutions in code. But the Reddit code was very difficult to understand and modify, despite having a lot of software developers in this community. Ultimately, it was easier to rewrite from scratch, even if that took months (or years? not sure) of work.
Adding new features that we had always wished for (plus a few more we weren't sure about but wanted to try) was also nice. But the opportunity costs were high. I think it was the spammer who changed the perception of rewriting the code from "would be nice to have" to "we must do this, or this community dies".
EDIT:
A few rules (I don't remember the rules exactly) were specifically designed against this kind of attack. Not just one person creating hundreds of accounts, which probably could be detected by using the same IP address or some other heuristics, but imagine a hundred new people joining at the same time, e.g. because LW was linked from some belligerent online community. So, for example, the votes of existing members are stronger than the votes of new members. Making new accounts can temporarily be turned off. I suspect that moderators also have some automated tools for checking suspicious behavior of new users.
↑ comment by Jim Fisher (james-fisher) · 2023-09-06T22:11:04.495Z · LW(p) · GW(p)
So the ultimate trigger to move wasn't some highfalutin desire to apply media ecology theories to optimize for inquiry (as was my theory), but a much more mundane and urgent need to fight spam and trolls!
I found this announcement of LessWrong 2.0 [LW · GW], which indeed mentions spam and trolls. The main innovation seems to be the delightfully named "Eigenkarma", which I think ForumMagnum approximates by making your vote strength roughly the log of your karma.
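To illustrate what I mean, here's a minimal sketch; the shape and constants are my guess, not the actual formula:

```typescript
// My guess at the shape of the approximation, not ForumMagnum's actual formula.
function voteStrength(karma: number): number {
  if (karma <= 0) return 1;                     // new users get the minimum weight
  return 1 + Math.floor(Math.log10(1 + karma)); // karma 0 -> 1, 100 -> 3, 10000 -> 5
}
```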
↑ comment by Ruby · 2023-09-05T17:45:10.436Z · LW(p) · GW(p)
LW 1.0 was a fork of the Reddit codebase, I assume because it was available and had many of the desired features. I wasn't there for the decision to build LW2.0 as a new Forum, but I imagine doing so allowed for a lot more freedom to build a forum that served the desired purpose in many ways.
how ForumMagnum is really developed
Something in your framing feels a bit off. Think of "ForumMagnum" as an engine and LessWrong and EA Forum as cars. We're in the business of "building and selling cars", not engines. LW and EA Forum are sufficiently similar to use the same engine, but there aren't Forum Magnum developers, just "LW developers" and "EAF developers". You can back out an abstracted Forum Magnum philosophy, but it's kind of secondary/derived from the object-level forums. I suppose my point is against treating it as too primary.
↑ comment by Viliam · 2023-09-05T19:15:16.306Z · LW(p) · GW(p)
there aren't Forum Magnum developers, just "LW developers" and "EAF developers".
Could you please expand on this? How is the codebase organized: is all code shared, or are there separate plugins for individual websites? How do "LW developers" and "EAF developers" coordinate when they want to make changes in the shared code?
↑ comment by Ruby · 2023-09-05T19:56:12.830Z · LW(p) · GW(p)
There's a single codebase. It's React and the site is composed out of "components". Most components are shared but can have some switching logic within them that changes behavior. For some things, e.g. the frontpage, each site has its own customized component. There are different "style sheets" / "themes" for each of them. When you run an instance of Forum Magnum, you tell it whether it's a LW instance, EA Forum instance, etc., and it will run as the selected kind of site.
Coordination happens via Slack, GitHub, and a number of meetings (usually over Zoom/Tuple). Many changes get "forum-gated" so they only apply to one site.
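For illustration, the switching logic is conceptually something like this (a sketch with made-up names, not the actual code):

```typescript
// Sketch with made-up names, not the actual ForumMagnum code.
type ForumType = "LessWrong" | "EAForum" | "AlignmentForum" | "ProgressForum";

// Each instance is told which site it is (in reality this comes from config).
const forumType: ForumType = (process.env.FORUM_TYPE as ForumType) ?? "LessWrong";

// A shared component with switching logic inside it:
function commentBoxGuidelines(): string {
  if (forumType === "EAForum") {
    return "EA Forum discussion norms…";
  }
  return "Aim to explain, not persuade…"; // LessWrong-style default
}
```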
comment by Raemon · 2023-09-04T20:20:38.659Z · LW(p) · GW(p)
This was a pretty interesting read. Here are some notes, as a developer:
Shortform.
The actual deal with shortform, from my perspective, is that I wanted users throughout the site to feel more comfortable writing up their off-the-cuff ideas. But they often reported not feeling comfortable doing so. Shortform was an attempt to say "look, here at least you should feel comfortable doing so." It does make sense that this creates a vague sense that other places aren't supposed to be for off-the-cuff thinking.
(I think shortform hasn't quite worked the way I hoped, although it seems like EA Forum's Quick Takes feature that highlights shortform on the frontpage a bit more has helped a bunch and I think we might want to copy it or do something similar. The main problem with Shortform is that it's just a bit too buried and I don't end up getting discussion from the people I'm most excited to get discussion from)
Agree voting
In practice I don't think agree-voting successfully siloes the groupthink, although I do think it helps at least distinguish it a bit.
A long while ago I had an idea that we could design a voting system where the biggest, most satisfying-to-click button didn't count for your longterm karma, and people had to click a smaller, less exciting button for "actually this post demonstrates good virtues and/or is worth reading", so that tribal/groupthink-y voting didn't contribute as disproportionately to site power. I think we basically haven't built this. The most natural thing to do is still to strong upvote things that you feel a strong tribal/groupthinky pull towards.
One natural idea is to put agree-voting first and approval-voting separate, but that feels like it'd still have not-the-right-effect, where I think people's actual natural first inclination is to do their all-things-considered-take, which includes agreement, approval, "I got value", etc.
Two other options that feel kinda-reasonable: When you strong-upvote, you have to actually check "why should this comment be strong-upvoted?", with things like:
- "This says something important for people to read"
- "This was undervalued by other people",
- "I agree with this"
- "I personally learned/got-value from this
Maybe also the React Palette pops open and you're encouraged to give at least one react if you strong upvote?
Reacts.
It was definitely one of my hopes for reacts to signal norms, and give you a sense of what sort of site LW is trying to be. (Note that EA forum has different reacts and Progress forum has none). We've had reacts for a couple months now and I'm curious to hear, both from old-timers and new-timers, what people's experience of them was, and how much they shape their expectations/culture/etc.
↑ comment by tslarm · 2023-09-05T12:52:00.706Z · LW(p) · GW(p)
We've had reacts for a couple months now and I'm curious to hear, both from old-timers and new-timers, what people's experience of them was, and how much they shape their expectations/culture/etc.
I received (or at least, noticed receiving) a react for the first time recently, and honestly I found it pretty annoying. It was the 'I checked, it's False' one, which basically feels like a quasi-authoritative, quasi-objective, low effort frowny-face stamp where an actual reply would be much more useful.
Edit: If it was possible to reply directly to the react, and have that response be visible to readers who mouse over the react, that would help on the emotional side. On the practical side, I guess it's a question of whether, in the absence of reacts, I would have got a real reply or just an unexplained downvote.
↑ comment by Raemon · 2023-09-05T14:22:49.879Z · LW(p) · GW(p)
Another reason we created reacts is that people would often complain about anonymous downvotes, and reacts were somewhat aiming to be a level-of-effort in between downvote and comment.
It’s hard to tell exactly how this effect has played out: reacts and comments and votes are all super noisy and depend on lots of factors. But I have a general sense that people are comparing both votes and reacts to an idealized ‘people wrote out a substantive comment engaging with me’, when alas people are just pretty busy and that’s not realistic to expect a lot of the time.
I do generally prefer people do in-line reacts rather than whole-comment reacts, since that at least tells you what part of the comment they were reacting to. (I.e., select part of the comment and react just to that.)
↑ comment by Jim Fisher (james-fisher) · 2023-09-05T08:42:05.587Z · LW(p) · GW(p)
Shortform: ah, so it isn't intended as zoning. More that short-form and long-form are both valuable, but each needs a separate space to exist. (This seems to be a law of online media: short-form and long-form can't naturally share the same space. Same for sync and async. See e.g. Google Wave's failure. I don't entirely understand the reasons, though.)
Agree-voting: I too end up incorporating "agreement" into the "overall" vote, despite the separate axis. I think "overall" almost implies I should do that! (Perhaps if "overall" were renamed to e.g. "important"? "How important is this comment?")
Possible future changes: I like your suggestions! Though (in line with the CoI research) I'd like to think about: can we measure (e.g. with split testing) whether those changes affect behavior in the right direction? Or can we draw on empirical CoI research instead of testing it ourselves?
↑ comment by Raemon · 2023-09-05T14:25:49.937Z · LW(p) · GW(p)
Overall is explicitly supposed to be overall.
↑ comment by Jim Fisher (james-fisher) · 2023-09-06T21:10:00.605Z · LW(p) · GW(p)
Oh, wow, so I'd misunderstood that one as well! Apparently, my expectation that the main axis was supposed to exclude "agreement" was so strong that I actively misinterpreted the word "overall". I just discovered this announcement of "Agree/Disagree Voting" [LW · GW] which mostly confirms that yes, overall is supposed to be overall.
comment by Alex K. Chen (parrot) (alex-k-chen) · 2023-09-06T17:52:27.672Z · LW(p) · GW(p)
This product design builds the norm of long-form, async communication. This is an important norm on these sites, although not usually made explicit.
This is an example of a norm-building technique that I call friction. The word “friction” in UX design is often used negatively, but friction is a powerful way to steer users towards desired behavior! ForumMagnum uses several frictions to build the long-form, async norm. Notice there are no realtime notifications, and timestamps are only accurate to the hour.
Quora used to advertise itself as being "long-form" and "forever" (the place where you would write THE best answer to every question, and ideally edit your answer years after making the original answer [I don't see people constantly editing their old content on LessWrong]), but the answer ranking of each question wrecked it, because now the algorithm surfaces answers that attract more views ("feel good" answers) rather than answers that are objectively better. Because many higher-quality answers are now buried down the list of Quora answers, I move my better answers to other platforms like forum.longevitybase.org or crsociety.org.
I am super-ultra attracted to long-form (want all of my content to be easily accessible by all) for reasons similar to my obsession with longevity/archiving old content, and sometimes post responses to threads that have not gotten attention in years (just to make more complete threads). People are not aware enough of this, however.
https://www.quora.com/What-was-your-biggest-regret-on-Quora/answer/Alex-K-Chen (my biggest distillation from being arguably the most important user on Quora)
The upvoting/downvoting system penalizes people who want to post threads about topics that aren't rationalist fad/zeitgeist-related (esp ones related to alignment that they don't think are frontpageable, but which are still relevant for rationality (or progress studies!) and could still attract momentum/attention years down the line). This is why I do not post much on LessWrong (I have extremely broad interests so I naturally end up discovering LW, but my views/opinions on what's important are way different from those of most LW/EA, so I know my niche interests won't get much attention here). I don't feel the same kind of inhibition when posting content to the progress studies forum, which is smaller (small enough that you don't care at all about upvote/downvote dynamics) and way less prone to groupthink. Effective Altruism has historically valued neglectedness, but this does not show in forum upvoting patterns...
There are many scientific areas (and people with niche interests - the castration thread on LW is uniquely great for example!) that could be discussed on LessWrong, and analyzed/vetted via CFAR/rationality/Bayes updating/superforecasting techniques, but which are not, simply because many people averse to the groupthink dynamics on LW don't feel like LW would value their content. A long-form platform should ideally insulate them from local upvote/downvote fads (as useful as that input is). For what it's worth, upvotes (from quality users) used to be the primary factor that drove answer rankings on Quora (back when "all the smart SV people used it"), but with Quora's dilution, it seems almost as if people no longer care about upvotes (now that upvotes almost all come from people I don't know, rather than people I do know, I don't care about upvotes anymore, but I remember the golden days when I wrote answers that everyone on the Quora team upvoted...). Once you've been on a forum for years, how good a post is (even if it has been edited a thousand times, enough that the initial upvoters never saw the better version) [as well as what comments it attracts] is more rewarding than how upvoted it is...
Stack Exchange is in some ways a better platform for long-form content (and makes it ultra-easy to find content that is many years old and makes it ultra-easy for people not to post duplicate threads), especially because it gives you multiple ways of organizing/ranking all your old content, making it easily accessible and for you to want to come back and edit multiple times. It just has moderators who are quick to mute/delete threads they don't like, making it much harder to post about niche interests.
[but again, these don't make up for how there don't seem to be many threads where comments are made years after the original post]
--
It's also nice to reference other forum communities that have lasted for years (even if reddit was the original forum-killer).
comment by MondSemmel · 2023-10-06T16:51:08.937Z · LW(p) · GW(p)
Quoting Ruby from upthread [LW(p) · GW(p)]:
I wasn't there for the decision to build LW2.0 as a new Forum, but I imagine doing so allowed for a lot more freedom to build a forum that served the desired purpose in many ways.
Oliver Habryka [LW · GW] (aka Discordius in this comment [LW(p) · GW(p)]), who runs Lightcone Infrastructure, described some of the history of LW 2.0, and of his thoughts about reviving the site, in the ~first subheading of this podcast transcript [LW · GW]. And he goes into more depth on his thinking about LW in the rest of the big first section [LW · GW]. (The rest of the looong podcast transcript covers other topics which are likely not of general interest.)
comment by jp · 2023-09-05T19:15:45.076Z · LW(p) · GW(p)
Thanks for writing about ForumMagnum! This software is so much of my life, but understandably gets little attention as an object in its own right.
That's changing a bit now, and more people are reaching out about using it. I think it's the best forum software out there.
If someone reading this wants to build an instance, feel free to reach out.
↑ comment by Viliam · 2023-09-06T11:19:29.044Z · LW(p) · GW(p)
How difficult would it be to run an instance with minimal customization? That is, suppose that I am happy with all the default options (wherever that makes sense). I just need a forum; I don't care about details.
↑ comment by jp · 2023-09-06T22:04:13.418Z · LW(p) · GW(p)
I've historically said 1-2 weeks of skilled engineering work. That will lower by a factor of 2 after this branch, plus some follow-ups, gets merged.
comment by Sinclair Chen (sinclair-chen) · 2023-09-14T06:35:57.131Z · LW(p) · GW(p)
I'm happy you analyzed the design of this site! I greedily want rationalists to discuss social media design more. Info tech is truth tech (or it can be!) and despite advances in mediums, text is king.
comment by Viliam · 2023-09-05T14:35:38.705Z · LW(p) · GW(p)
In contrast, ForumMagnum's reactions have labels, like: “Changed My Mind”, “Insightful”, or “Good Facilitation”. This incorporates the state-your-norms technique, by embedding the norms in the reactions.
Yes, making labels for behavior we want (and not making labels for behavior we do not want) is an interesting tool for nudging behavior.
However, users can subvert the intended meaning. That has not happened here, as far as I know, but for example on Facebook, the "laughing" reaction was originally meant positively ("I appreciate your joke") but these days is often used negatively ("I am laughing at you"). Also, the eggplant emoji.
So, with rude users, I could imagine some reactions getting an alternative meaning here, too. The reason this (hopefully) will not happen is that we have a community norm against rude behavior. You can nudge users, but if they disagree strongly with the proposed norms, they will find a way.