Disincentives for participating on LW/AF

post by Wei Dai (Wei_Dai) · 2019-05-10T19:46:36.010Z · LW · GW · 45 comments

I was at a research retreat recently with many AI alignment researchers, and found that the vast majority of them do not participate (post or comment) on LW/AF, or participate to a much lesser extent than I would prefer. It seems important to bring this up and talk about whether we can do something about it. Unfortunately, I didn't get a chance to ask them why that is the case, as there were other things to talk about, so I'm going to have to speculate here based on my personal experiences and folk psychology. (Perhaps the LW team could conduct a survey and get a better picture than this.)

One meta problem is that different people have different sensitivities to these disincentives, and having enough disincentives to filter out low-quality content from people with low sensitivities necessarily means that some potential high-quality content from people with high sensitivities is also kept out. But it seems like there are still some things worth doing or experimenting with. For example:

There's a separate issue that some people don't read LW/AF as much as I would prefer, but I have much less idea of what is going on there.

On a tangentially related topic, is LW making any preparations (such as thinking about what to do) for a seemingly not-very-distant future where automated opinion influencers are widely available as hirable services? I'm imagining some AI that you can hire to scan internet discussion forums and make posts or replies in order to change readers' beliefs/values in some specified direction. This might be very crude at the beginning but could already greatly degrade the experience of participating on public discussion forums.

Comments sorted by top scores.

comment by Rohin Shah (rohinmshah) · 2019-05-11T15:05:44.457Z · LW(p) · GW(p)

Disincentives for me personally:

The LW/AF audience by and large operates under a set of assumptions about AI safety that I don't really share. I can't easily describe this set, but one bad way to describe it would be "the MIRI viewpoint" on AI safety. This particular disincentive is probably significantly stronger for other "ML-focused AI safety researchers".

More effort needed to write comments than to talk to people IRL

By a lot. As a more extreme example, on the recent post on pessimism about impact measures, TurnTrout and I switched to private online messaging at one point, and I'd estimate it was roughly 5x faster to reach the level of shared understanding we did than if we had continued with typical big comment responses on AF/LW.

Replies from: Raemon, Wei_Dai
comment by Raemon · 2019-05-11T19:27:16.292Z · LW(p) · GW(p)

Curious how big each of these active ingredients seemed, or if there were other active ingredients:

1) the privacy (not having any expectation that any onlookers would need to understand what you were saying)

2) the format (linear column of chats, with a small textbox that subtly shaped how much you said at a time)

3) not having other people talking (so you don't have to stop and pay attention to them)

4) the realtime nature (wherein you expect to get responses quickly, which allows for faster back-and-forth and checking that you are both on the same page before moving to the next point)

The overall problem with the status quo is that private conversations are worse for onboarding new people into the AI space. So I think it's quite likely that the best way to improve this is to facilitate private conversations, and then either make them public or distill them afterwards. But there are different ways to go about that depending on which elements are most important.

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2019-05-12T04:18:12.875Z · LW(p) · GW(p)

Primarily 4, somewhat 1, somewhat 2, not at all 3. I think 1 and 2 mattered mostly in the sense that with comments the expectation is that you respond in some depth and with justification, whereas with messaging I could just say things without justification that only TurnTrout had to understand, and only needed to explain the ones we disagreed on.

I do think that conversation was uniquely bad for onboarding new people; I'm not sure I would understand what was said if I reread it two months from now. I did in fact post a distillation of it afterwards.

comment by Wei Dai (Wei_Dai) · 2019-05-11T17:51:24.088Z · LW(p) · GW(p)

The LW/AF audience by and large operates under a set of assumptions about AI safety that I don’t really share. I can’t easily describe this set, but one bad way to describe it would be “the MIRI viewpoint” on AI safety.

Are you seeing this reflected in the pattern of votes (comments/posts reflecting "the MIRI viewpoint" get voted up more), pattern of posts (there's less content about other viewpoints), or pattern of engagement (most replies you're getting are from this viewpoint)? Please give some examples if you feel comfortable doing that.

In any case, do you think recruiting more alignment/safety researchers with other viewpoints to participate on LW/AF would be a good solution? Would you like the current audience to consider the arguments for other viewpoints more seriously? Other solutions you think are worth trying?

TurnTrout and I switched to private online messaging at one point

Yeah, I think this is probably being done less than optimally, and I'd like to see LW support or encourage this somehow. One problem with the way people are doing this currently is that the chat transcripts are typically not posted, which prevents others from following along and perhaps getting a similar level of understanding, or asking questions, or spotting errors that both sides are making, or learning discussion skills from such examples.

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2019-05-12T04:49:45.814Z · LW(p) · GW(p)
Are you seeing this reflected in the pattern of votes (comments/posts reflecting "the MIRI viewpoint" get voted up more), pattern of posts (there's less content about other viewpoints), or pattern of engagement (most replies you're getting are from this viewpoint)?

All three. I do want to note that "MIRI viewpoint" is not exactly right, so I'm going to call it "viewpoint X" just to be absolutely clear that I have not precisely defined it. Some examples:

  • In the Value Learning sequence [? · GW], Chapter 3 and the posts on misspecification from Chapter 1 are upvoted less than the rest of Chapter 1 and Chapter 2. In fact, Chapter 3 is the actual view I wanted to get across, but I knew that it didn't really fit with viewpoint X. I created Chapters 1 and 2 with the aim of getting people with viewpoint X to see why one might have the mindset that generates Chapter 3.
  • Looking at the last ~20 posts on the Alignment Forum, if you exclude the newsletters and the retrospective, I would classify them all as coming from viewpoint X.
  • On comments, it's hard to give a comparative example because I can't really remember any comments coming from not-viewpoint X. A canonical example of a viewpoint X comment is this one [LW(p) · GW(p)], chosen primarily because it's on the post of mine that is most explicitly not coming from viewpoint X.
In any case, do you think recruiting more alignment/safety researchers with other viewpoints to participate on LW/AF would be a good solution?

This would help with my personal disincentives; I don't know if it's a good idea overall. It could be hard to have a productive discussion: I already find it hard, and among the people who would say they disagree with viewpoint X, I think I understand viewpoint X unusually well. (Also, while many ML researchers who care about safety don't know too much about viewpoint X, there definitely exist some who explicitly choose not to engage with it because it doesn't seem productive or valuable.)

Would you like the current audience to consider the arguments for other viewpoints more seriously?

Yes, in the almost trivial sense that I think other viewpoints are more important/correct than viewpoint X.

I'm not actually sure this would better incentivize me to participate; I suspect that if people tried to understand my viewpoint they would at least initially get it wrong, in the same way that often when people try to steelman arguments from some person they end up saying things that that person does not believe.

Other solutions you think are worth trying?

More high-touch in-person conversations where people try to understand other viewpoints? Having people with viewpoint X study ML for a while? I don't really think either of these is worth trying; they seem unlikely to work and are costly.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2019-05-12T22:04:07.629Z · LW(p) · GW(p)

It sounds like you might prefer a separate place to engage more with people who already share your viewpoint. Does that seem right? I think I would prefer having something like that too if it means being able to listen in on discussions among AI safety researchers with perspectives different from my own.

I would be interested in getting a clearer picture of what you mean by "viewpoint X", how your viewpoint differs from it, and what especially bugs you about it, but I guess it's hard to do, or you would have done it already.

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2019-05-13T16:28:08.460Z · LW(p) · GW(p)
It sounds like you might prefer a separate place to engage more with people who already share your viewpoint.

I mean, I'm not sure if an intervention is necessary -- I do in fact engage with people who share my viewpoint, or at least understand it well; many of them are at CHAI. It just doesn't happen on LW/AF.

I would be interested in getting a clearer picture of what you mean by "viewpoint X"

I can probably at least point at it more clearly by listing out some features I associate with it:

  • A strong focus on extremely superintelligent AI systems
  • A strong focus on utility functions
  • Emphasis on backwards-chaining rather than forward-chaining. Though that isn't exactly right. Maybe I more mean that there's an emphasis that any particular idea must have a connection via a sequence of logical steps to a full solution to AI safety.
  • An emphasis on exact precision rather than robustness to errors (something like treating the problem as a scientific problem rather than an engineering problem)
  • Security mindset

Note that I'm not saying I disagree with all of these points; I'm trying to point at a cluster of beliefs / modes of thinking that I tend to see in people who have viewpoint X.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2019-05-14T14:36:35.996Z · LW(p) · GW(p)

I mean, I’m not sure if an intervention is necessary—I do in fact engage with people who share my viewpoint, or at least understand it well; many of them are at CHAI. It just doesn’t happen on LW/AF.

Yeah, I figured as much, which is why I said I'd prefer having an online place for such discussions, so that I could listen in. :) Another advantage would be encouraging more discussion across organizations and from independent researchers, students, and others considering going into the field.

Maybe I more mean that there’s an emphasis that any particular idea must have a connection via a sequence of logical steps to a full solution to AI safety.

It's worth noting that many MIRI researchers seem to have backed away from this (or clarified that they didn't think this in the first place). This was pretty noticeable at the research retreat and also reflected in their recent writings. I want to note though how scary it is that almost nobody has a good idea how their current work logically connects to a full solution to AI safety.

Note that I’m not saying I disagree with all of these points; I’m trying to point at a cluster of beliefs / modes of thinking that I tend to see in people who have viewpoint X.

I'm curious what your strongest disagreements are, and what bugs you the most, as far as disincentivizing you to participate on LW/AF.

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2019-05-14T16:43:48.960Z · LW(p) · GW(p)
It's worth noting that many MIRI researchers seem to have backed away from this (or clarified that they didn't think this in the first place).

Agreed that this is reflected in their writings. I think this usually causes them to move towards trying to understand intelligence, as opposed to proposing partial solutions. (A counterexample: Non-Consequentialist Cooperation? [LW · GW]) When others propose partial solutions, I'm not sure whether or not this belief is reflected in their upvotes or engagement through comments. (As in, I actually am uncertain -- I can't see who upvotes posts, and for the most part MIRI researchers don't seem to engage very much.)

I want to note though how scary it is that almost nobody has a good idea how their current work logically connects to a full solution to AI safety.

Agreed.

I'm curious what your strongest disagreements are, and what bugs you the most, as far as disincentivizing you to participate on LW/AF.

I don't think any of those features strongly disincentivize me from participating on LW/AF; it's more the lack of people close to my own viewpoint that disincentivizes me from participating.

Maybe the focus on exact precision instead of robustness to errors is a disincentive, as well as the focus on expected utility maximization with simple utility functions. A priori, I assign a fairly high probability that a critical comment on my work from someone holding that perspective won't be useful to me, but I'll feel obligated to reply anyway.

Certainly those two features are the ones I most disagree with; the other three seem pretty reasonable in moderation.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2019-05-15T01:54:46.623Z · LW(p) · GW(p)

I don’t think any of those features strongly disincentivize me from participating on LW/AF; it’s more the lack of people close to my own viewpoint that disincentivizes me from participating.

I see. Hopefully the LW/AF team is following this thread and thinking about what to do, but in the meantime I encourage you to participate anyway, as it seems good to get ideas from your viewpoint "out there" even if no one is currently engaging with them in a way that you find useful.

as well as the focus on expected utility maximization with simple utility functions

I don't think anyone talks about simple utility functions? Maybe you mean explicit utility functions?

A priori, I assign a fairly high probability that a critical comment on my work from someone holding that perspective won't be useful to me, but I'll feel obligated to reply anyway.

If this feature request [LW(p) · GW(p)] of mine were implemented, you'd be able to respond to such comments with a couple of clicks. In the meantime it seems best to just not feel obligated to reply.

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2019-05-15T16:56:35.906Z · LW(p) · GW(p)
I encourage you to participate anyway, as it seems good to get ideas from your viewpoint "out there" even if no one is currently engaging with them in a way that you find useful.

Yeah, that's the plan.

I don't think anyone talks about simple utility functions? Maybe you mean explicit utility functions?

Yes, sorry. I said that because they feel very similar to me: any utility function that can be explicitly specified must be reasonably simple. But I agree "explicit" is more accurate.

In the meantime it seems best to just not feel obligated to reply.

That seems right, but also hard to do in practice (for me).

comment by Charlie Steiner · 2019-05-10T21:39:05.331Z · LW(p) · GW(p)

As someone basically thinking alone (cue George Thorogood), I definitely would value more comments/discussion. But if someone has access to research retreats where they're talking face to face as much as they want, I'm not surprised that they don't post much.

Talking is a lot easier than writing, and more immediately rewarding. It can be an activity among friends. It's more high-bandwidth to have a discussion face to face than it is over the internet. You can assume a lot more about your audience, which saves a ton of effort. When talking, you are more allowed to bullshit and guess and handwave and collaboratively think with the other person, and still be interesting, whereas when writing your audience usually expects you to be confident in what you've written. Writing is hard, reading is hard, and understanding what people have written is harder than understanding what people have said; if you ask for clarification, that might get misunderstood in turn. This all applies to comments almost as much as to posts, particularly on technical subjects.

The two advantages writing has for me are that I can communicate in writing with people I couldn't talk to, and that when you write something out you get a good long chance to make sure it's not stupid. When talking it's very easy to be convincing, including to yourself, even when you're confused. That's a lot harder in writing.

To encourage more discussion in writing, one could try to change the format to reduce these barriers as much as possible - trying to foster one-to-one or small-group threads rather than one-to-many, fostering/enabling knowledge about other posters, and creating a context that allows for more guesswork and collaborative thinking. Maybe one underutilized tool on current LW is the question thread. Question threads are great excuses to let people bullshit on a topic and then engage them in small-group threads.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2019-05-11T01:48:56.216Z · LW(p) · GW(p)

What if AI safety researchers hired a secretary to take notes on their conversations? If there was anything in the conversation that didn't make sense on reflection, they could say "oh it was probably the secretary's mistake in transcribing the conversation". Heck, the participants could even be anonymized.

Replies from: Raemon, gwern
comment by Raemon · 2019-05-11T02:07:25.238Z · LW(p) · GW(p)

Yeah, I think this is actually probably a decent solution, with one caveat being that people who have the background knowledge to effectively summarize the conversation probably (soon afterwards) have the skills necessary to do other things. (At least, this is what someone claimed when I asked them about the idea.)

Replies from: John_Maxwell_IV, Pattern
comment by John_Maxwell (John_Maxwell_IV) · 2019-05-11T04:16:43.331Z · LW(p) · GW(p)

So it serves as a training program for aspiring researchers... even better! Actually, in more ways than one, because other aspiring researchers can read the transcript and come up to speed more quickly.

Replies from: Raemon
comment by Raemon · 2019-05-11T20:46:50.085Z · LW(p) · GW(p)

"soon afterwards" was meant to be more of a throwaway qualifier than a main point. The claim (not made my me and I'm not sure if I endorse it) is that the people who can write up the transcripts effectively (esp. if there's any kind of distillation work going on) would already have more important things they are already capable of doing.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2019-05-11T23:22:13.805Z · LW(p) · GW(p)

Well if they're incompetent, that enhances the plausible deniability aspect ('If there was anything in the conversation that didn't make sense on reflection, they could say "oh it was probably the secretary's mistake in transcribing the conversation".') It also might be a way to quickly evaluate someone's distillation ability.

comment by Pattern · 2019-05-18T18:12:35.328Z · LW(p) · GW(p)
who have the background knowledge to effectively summarize the conversation

This sounds like it might make more sense for the people having the conversation to do, perhaps at the end (possibly with the recorder prompting "What were the essential points in this discussion?").

Replies from: Raemon
comment by Raemon · 2019-05-18T22:11:02.620Z · LW(p) · GW(p)

But part of the problem being addressed here is that "the people whose conversations you want to record are busy, and don't actually get much direct benefit from having their conversations recorded or summarized, and one of the primary problems is anxiety about writing it up in a way that won't get misconstrued or turn out to be wrong later", and the whole point is to outsource the executive function to someone else.

And the executive function is actually a fairly high bar.

comment by gwern · 2019-05-18T23:07:57.931Z · LW(p) · GW(p)

A rapporteur?

comment by romeostevensit · 2019-05-10T22:23:09.149Z · LW(p) · GW(p)

One reason feedback feels unpleasant is that it sometimes fails to engage with what actually interests you about the area. When you receive such feedback, you then feel the need to respond for the sake of bystanders who might otherwise assume that there aren't good responses to it.

Replies from: mr-hire, MichaelDickens, MichaelDickens
comment by Matt Goldenberg (mr-hire) · 2019-05-11T01:04:44.949Z · LW(p) · GW(p)

Yes, one of the frustrating things is getting criticism that just feels like "this is just not the conversation I want to be having." I'm trying to discuss how this particular shade of green affects the aesthetics of this particular object, but you're trying to talk to me about how green doesn't actually exist and blue is the only real color. It's understandable, but it's just frustrating.

comment by MichaelDickens · 2024-03-29T18:30:29.075Z · LW(p) · GW(p)

I find that sort of feedback more palatable when it starts with something like "This is not related to your main point, but..."

I am more OK with talking about tangents when the commenter understands that it's a tangent.

comment by MichaelDickens · 2024-03-29T18:29:09.420Z · LW(p) · GW(p)

I wonder if there's a good way to call out this sort of feedback? I might start trying something like

That's a reasonable point, I have some quibbles with it but I think it's not very relevant to my core thesis so I don't plan on responding in detail.

(Perhaps that comes across as rude? I'm not sure.)

Replies from: romeostevensit
comment by romeostevensit · 2024-03-29T20:24:07.435Z · LW(p) · GW(p)

Thanks, this actually highlighted another aspect of it for me. Namely, if someone objects to a claim I did not make and I respond, it feels as though people might conclude 'he was actually tacitly making that claim.' Explicitly calling out that I'm not making that claim should help here.

comment by Jan_Kulveit · 2019-05-15T22:07:37.706Z · LW(p) · GW(p)

As a datapoint - my reasons for mostly not participating in discussion here:

  • The karma system messes with my S1 motivations and research taste; I do not want to update toward "LW average taste" - I don't think LW average taste is that great. Also, IMO it is better on the margin for the field to add people who are trying to orient themselves in AI alignment independently, in contrast to people guided by "what's popular on LW"
  • Commenting seems costly; it feels like comments are expected to be written very clearly and in a reader-friendly way, which is time-costly
  • Posting seems super-costly; my impression is that many readers are calibrated on the writing quality of Eliezer, Scott & the like, not on informal research conversation
  • Quality of debate on topics I find interesting is much worse than in person
  • Not the top reason, but still... the system of AF members vs. hoi polloi, omegas, etc. creates a subtle corruption/distortion field. My overall vague impression is that the LW team generally tends to like solutions which look theoretically nice, and tends not to see the subtler impacts on the elephants. Where my approach would be to try to move much of the elephants-playing-status-games out of the way, what's attempted here sometimes feels a bit like herding elephants with small electric jolts.
Replies from: Raemon
comment by Raemon · 2019-05-15T23:22:55.639Z · LW(p) · GW(p)
Not the top reason, but still... the system of AF members vs. hoi polloi, omegas, etc. creates a subtle corruption/distortion field. My overall vague impression is that the LW team generally tends to like solutions which look theoretically nice, and tends not to see the subtler impacts on the elephants. Where my approach would be to try to move much of the elephants-playing-status-games out of the way, what's attempted here sometimes feels a bit like herding elephants with small electric jolts.

I'm not sure I understand this part; can you try restating the concern in different words?

Replies from: Jan_Kulveit
comment by Jan_Kulveit · 2019-05-16T22:12:58.123Z · LW(p) · GW(p)

Sure.

1) From the LW user perspective, AF is integrated in a way that signals there are two classes of users, with AF members being something like "the officially approved experts" (specialists, etc.), together with omega badges, special karma, an application process, etc. In such a setup it is hard for the status-tracking subsystem that humans generally have to not care about what is "high status". At the same time, I went through the list of AF users, and it seems like a much better representation of what Rohin called "viewpoint X" than of the field of AI alignment in general. I would expect some subtle distortion as a result.

2) The LW team seems quite keen on e.g. karma, cash prizes on questions, omegas, daily karma updates, and similar technical measures, which from an S2-centric view bring clear benefits (sorting of comments, credible signalling of interest in questions, creating a high-context environment for experts, ...). But often these likely have important effects on S1 motivations, social interactions, etc. I've discussed karma and omegas before; creating an environment driven by prizes risks eroding the spirit of cooperativeness and sharing of ideas, which is one of the virtues of the AI safety community; and so on. "Herding elephants with small electric jolts" is a poetic description of the effect downvotes and strong downvotes have on people's S1.

comment by paulfchristiano · 2019-05-12T06:34:48.973Z · LW(p) · GW(p)

I don't comment more because writing comments takes time. I think that in-person discussions tend to add more value per minute. (I expect your post is targeted at people who comment less than I do, but the reasons may be similar.)

I can imagine getting more mileage out of quick comments, which would necessarily be short and unpolished. I'm less likely to do that because I feel like fast comments will often reflect poorly on me for a variety of reasons: they would have frequent and sometimes consequential errors (which would be excused in a short in-person discussion because of time constraints); hastily written comments in general send a negative signal (better people write better comments, faster comments are worse, full model left as an exercise for the reader); I'd frequently leave errors uncorrected or drop threads of conversation; and so on.

comment by TedSanders · 2019-05-10T20:28:08.147Z · LW(p) · GW(p)

My hypothesis: They don't anticipate any benefit.

Personally, I prefer to chat with friends and high-status strangers over internet randos. And I prefer to chat in person, where I can control and anticipate the conversation, rather than asynchronously via text with a bunch of internet randos who can enter and exit the conversation whenever they feel like it.

For me, this is why I rarely post on LessWrong.

Seeding and cultivating a community of high-value conversations is difficult. I think the best way to attract high-quality contributors is to already have high-quality contributors (and perhaps to have mechanisms that disincentivize low-quality contributors). It's a bit of a bootstrapping problem. LessWrong is doing well, but no doubt it could do better.

That's my initial reaction, at least. Hope it doesn't offend or come off as too negative. Best wishes to you all.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2019-05-11T01:54:52.074Z · LW(p) · GW(p)

Online discussions are much more scalable [LW(p) · GW(p)] than in-person ones. And the stuff you write becomes part of a searchable archive.

I also feel that online discussions allow me to organize my thoughts better. And I think it can be easier to get to the bottom of a disagreement online, whereas in person it's easier for someone to just keep changing the subject and make themselves impossible to pin down, or something like that.

comment by steven0461 · 2019-05-11T20:43:00.345Z · LW(p) · GW(p)

The expectation of small numbers of long comments instead of large numbers of short comments doesn't fit with my experience of how productive/efficient discourse happens. LW culture expects posts to be referenced forever, and it's hard to write for the ages. It's also hard to write for a general audience of unknown composition, and hard to trust such an audience not to vote and comment badly in a way that will tax your future attention.

comment by Gordon Seidoh Worley (gworley) · 2019-05-13T18:13:47.773Z · LW(p) · GW(p)

An interesting pattern I see in the comments, and have picked out from other conversations but that no one has called out, is that many people seem to prefer a style of communication that doesn't naturally fit "I sit alone and write up my thoughts clearly and then post them as a comment/post". My personal preference is very much to do exactly that: talking to me in person about a technical subject may be interesting, but it actually requires more of my time and energy than writing about it does. This suggests to me that the missing engagement is all from folks who don't prefer to write out their thoughts carefully, and the existing engagement is largely from people who do.

I have some kind of pet theory here about different internet cultures (I grew up with Usenet and listservs; younger/other folks grew up with chat and texting), but I think the cause of this difference in preferences is not especially relevant.

comment by cousin_it · 2019-05-11T11:44:33.678Z · LW(p) · GW(p)

There's a common view that a researcher's output should look like a bunch of results: "A, B, C, therefore X, Y, Z". But we also feel, subconsciously and correctly, that such output has an air of finality and won't attract many comments. Look at musical subreddits, for example: people posting their music get few comments; people asking for help get more. So when I post a finished result on LW and get few comments, the problem is on me. There must be a better way to write posts, less focused on answering questions and more on making questions as interesting as possible. But that's easier said than done - I don't think I have that skill.

Replies from: Raemon
comment by Raemon · 2019-05-11T18:25:23.294Z · LW(p) · GW(p)

See Writing that Provokes Comments [LW · GW].

Replies from: Raemon, cousin_it
comment by Raemon · 2019-05-11T19:29:24.695Z · LW(p) · GW(p)

(this whole concept is part of why I'm bullish on LW shifting to focus more on questions than posts)

Replies from: habryka4
comment by habryka (habryka4) · 2019-05-11T19:55:53.838Z · LW(p) · GW(p)

*nods* I do think there is a lot of value in people just asking good questions, and I would like to see more people doing that in the AI alignment space.

comment by cousin_it · 2019-05-11T21:38:18.044Z · LW(p) · GW(p)

Thanks, great post! And good comments too. Not sure how I missed it at the time.

comment by Gordon Seidoh Worley (gworley) · 2019-05-10T23:38:48.339Z · LW(p) · GW(p)

I continue to be concerned with issues around downvotes and upvotes being used as "boos" and "yays" rather than saying something about the worthiness of a thing to be engaged with (I've been thinking about this for a while and just posted a comment about it over on the EA Forum [EA(p) · GW(p)]). The result is that, to me, votes are very low in information value, which is unfortunate because they are the primary feedback mechanism on LW. I would love to see a move towards something that made voting costlier, although I realize that might impact engagement. There are probably other solutions that overcome these issues by not directly tweaking voting but instead pulling sideways at it to come up with something that would work better for what I consider the important thing you want votes for: identifying the stuff worth engaging with.

Replies from: thomas-kwa
comment by Thomas Kwa (thomas-kwa) · 2020-04-22T22:59:34.243Z · LW(p) · GW(p)

Stack Exchange question votes are asymmetric, such that an upvote gives +10 rep and a downvote -2. I discuss the benefits of that system here. While LW is different from SE (do users who produce high-quality content value LW karma as much as SE rep?), it might be worth doubling karma awarded to votes on questions.
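As a rough illustration of what such asymmetric weighting amounts to, here is a minimal sketch; the weight values, names, and the symmetric baseline are hypothetical illustrations, not LW's or SE's actual karma code:

```python
# Hypothetical asymmetric karma weights, loosely modeled on the Stack Exchange
# scheme described above (+10 per question upvote, -2 per question downvote).
# These numbers and names are illustrative, not real LW or SE karma rules.
QUESTION_WEIGHTS = {"upvote": 10, "downvote": -2}
POST_WEIGHTS = {"upvote": 5, "downvote": -5}  # a symmetric baseline for comparison

def karma_delta(kind: str, vote: str) -> int:
    """Karma change from a single vote, depending on whether the item is a question."""
    weights = QUESTION_WEIGHTS if kind == "question" else POST_WEIGHTS
    return weights[vote]

# Ten upvotes and two downvotes land very differently under the two schemes:
votes = ["upvote"] * 10 + ["downvote"] * 2
print(sum(karma_delta("question", v) for v in votes))  # 96
print(sum(karma_delta("post", v) for v in votes))      # 40
```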

comment by Adam Zerner (adamzerner) · 2019-05-11T00:47:18.688Z · LW(p) · GW(p)

Personally, here are two things that prevent me from participating as much as I'd like, and that I suspect might apply to others:

1) Internet addiction

Whenever I post or comment, I can't help but start rather obsessively checking to see whether I've gotten any responses. But I don't want to be in that state, so sometimes I refrain from posting or commenting. And more generally, being on LW means being on the internet, and for me, being on the internet tempts me to procrastinate and do things I don't actually want to be doing.

2) Relatively high bar for participation

If I'm going to comment or post, I want to say something useful, which often means spending time and effort. a) I have a limited capacity for this, and b) if I'm going to be spending time and effort, I often see it as more useful to apply it elsewhere, like reading a textbook.

With (b), not always, though. There are plenty of other times when I do feel that LW is the most useful place to apply my time and effort.

comment by Raemon · 2019-05-10T20:28:40.796Z · LW(p) · GW(p)

I definitely expect there to be a lot of room for improvement here – each of the areas you point to is something we've talked about.

One quick check (too late in this case, but fairly high priority to figure out IMO) is "do they even think of LW as a place they should have considered reading?"

A thing I'd been considering for a while is "make an actual alignment subforum on LessWrong", which would include both the high-signal official AlignmentForum posts and other random posts about alignment, so that if you've come to LessWrong explicitly for AI you can see everything relevant.

Meanwhile my guess is that at least some of those people showed up on LessWrong, saw a bunch of random posts irrelevant to them, and then bounced off. (And meanwhile showed up on AlignmentForum.org and saw less commenting activity, although I'm not sure how big a deal that'd be.)

(there's a question of what you'd want such a subforum to include – there's a carving that looks more like "math stuff" and there's a carving that includes things like AI policy or whatever. Also it's sort of awkward to have the two places to hang out be "The AlignmentForum" and "The Alignment Subforum [of LessWrong]")

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2019-05-10T22:56:16.341Z · LW(p) · GW(p)

I definitely expect there to be a lot of room for improvement here – each of the areas you point to is something we've talked about.

That's good to hear.

One quick check (too late in this case, but fairly high priority to figure out IMO) is “do they even think of LW as a place they should have considered reading?”

A lot of them know of LW/AF and at least read some of the posts.

Also it’s sort of awkward to have the two places to hang out be “The AlignmentForum” and “The Alignment Subforum [of LessWrong]”

Agreed this seems really awkward/confusing, but it makes me realize we do need better ways to onboard people who are mainly interested in AI alignment (as opposed to rationality) and to cater to their needs generally. If a new user tries to comment on a post on AF now, it just pops up a message saying "Log in or go to LessWrong to submit your comment.", and there's not even a link to the same post on LW. This whole experience probably needs to be reconsidered.

Replies from: Raemon
comment by Raemon · 2019-05-10T23:02:08.128Z · LW(p) · GW(p)

a) Do you have a sense that these people think of LW/AF as a/the primary nexus for discussing alignment-related issues? (but didn't participate, either because they didn't expect to get much benefit or because the cost would be too high)

b) I don't actually know whether there's any other locus of conversation happening anywhere other than individual private Google docs; curious if you know of any such thing (such as mailing lists)? I'm not asking for the specific details of any such mailing list, just wanting to check whether such a thing exists at all.

Agree that the onboarding experience should be much better, one way or another.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2019-05-10T23:13:30.359Z · LW(p) · GW(p)

a) Do you have a sense that these people think of LW/AF as a/the primary nexus for discussing alignment-related issues? (but didn't participate, either because they didn't expect to get much benefit or because the cost would be too high)

Again, I didn't get a chance to talk much about this topic, but I would guess yes.

b) I don’t actually know if there’s any other actual locus of conversation happening anywhere other than individual private google docs, curious if you know of any such thing? (such as mailing lists. Not asking for the specific details of any such mailing list, just wanting to check if such a thing exists at all).

The only thing I personally know of is a low-traffic private mailing list run by FHI, which has non-FHI researchers on it but mostly consists of discussions between FHI researchers.