Open & Welcome Thread - September 2020

post by habryka (habryka4) · 2020-09-04T18:14:17.056Z · LW · GW · 73 comments

If it’s worth saying, but not worth its own post, here's a place to put it. (You can also make a shortform post)

And, if you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are welcome.

If you want to explore the community more, I recommend reading the Library [? · GW], checking recent Curated posts [? · GW], seeing if there are any meetups in your area [? · GW], and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the new Concepts section [? · GW].

The Open Thread tag is here.

comment by habryka (habryka4) · 2020-09-17T00:56:15.752Z · LW(p) · GW(p)

Today we have banned two users, curi and Periergo, from LessWrong for two years each. The reasoning for the two bans is a bit entangled, but they are overall almost completely separate, so let me go through each individually:

Periergo [LW · GW] is an account that is pretty easily traceable to a person curi has been in conflict with for a long time, and who seems to have signed up with the primary purpose of attacking curi. I don't think there is anything fundamentally wrong with signing up to LessWrong to warn other users of the potentially bad behavior of an existing user on some other part of the internet, but I do think it should be done transparently.

It also appears to be the case that he has done a bunch of things that go beyond merely warning others (like mailbombing curi, i.e. signing him up for tons of email spam he never requested, and lots of sockpuppeting on forums that curi frequents), and that seem better classified as harassment, and overall it seemed to me that this isn't the right place for Periergo.

Curi [LW · GW] has been a user on LessWrong for a long time, and has made many posts and comments. He also has the dubious honor of being by far the most downvoted account in all of LessWrong history at -675 karma.

The biggest problem with his participation is that he has a history of pulling people into discussions that drag on for an incredibly long time without seeming particularly productive, while also having a history of pretty aggressively attacking people who stop responding to him. On his blog, he and others maintain a long list of people who engaged with him and others in the Critical Rationalist community, but then stopped, in a way that is very hard to read as anything but a public attack. Its first sentence is "This is a list of ppl who had discussion contact with FI and then quit/evaded/lied/etc.", and in particular the phrase "quit/evaded/lied" sets the tone for the rest of the post as a kind of "wall of shame".

Those three things in combination, a propensity for long unproductive discussions, a history of threats against people who engage with him, and being the historically most downvoted account in LessWrong history, make me overall think it's better for curi to find other places as potential discussion venues.

I do really want to make clear that this is not a personal judgement of curi. While I do find the "List of Fallible Ideas Evaders" post pretty tasteless, and don't particularly enjoy discussing things with him, he seems well-intentioned, and it's quite plausible that he could be an amazing contributor to other online forums and communities. Many of the things he is building over on his blog seem pretty cool to me, and I don't want others to update on this as being much evidence about whether it makes sense to have curi in their communities.

I do also think his most recent series of posts and comments is overall much less bad than the posts and comments he posted a few years ago (where most of his negative karma comes from), but they still don't strike me as great contributions to the LessWrong canon, are all low-karma, and I assign too high of a probability that old patterns will repeat themselves (and also that his presence will generally make people averse to being around, because of those past patterns). He has also explicitly written a post in which he updates his LW commenting policy towards something less demanding, and I do think that was the right move, but I don't think it's enough to tip the scales on this issue.

More broadly, LessWrong has seen a pretty significant influx of new users in the past few months, mostly driven by interest in Coronavirus discussion and the discussion we hosted on GPT-3. I continue to think that "Well-Kept Gardens Die By Pacifism", and that it is essential for us to be very careful in handling that growth, and to generally err on the side of curating our userbase pretty heavily and maintaining high standards. This means making difficult moderation decisions long before it is proven "beyond a reasonable doubt" that someone is not a net-positive contributor to the site.

In this case, I think it is definitely not proven beyond a reasonable doubt that curi is overall net-negative for the site, and banning him might well be a mistake, but I think the probabilities weigh heavily enough in favor of the net-negative, and the worst-case outcomes are bad enough, that on net I think this is the right choice.

Replies from: max-kaye, Sherrinford, max-kaye, max-kaye
comment by Max Kaye (max-kaye) · 2020-09-18T08:12:30.370Z · LW(p) · GW(p)

Today we have banned two users, curi and Periergo, from LessWrong for two years each.

I wanted to reply to this because I don't think it's right to judge curi the way you have. I don't have an issue with the Periergo ban (it's a sockpuppet account anyway).

I think your decision should not go unquestioned/uncriticized, which is why I'm posting. I also think you should reconsider curi's ban under a sort of appeals process.

Also, the LW moderation process is evidently transparent enough for me to make this criticism, and that is notable and good. I am grateful for that.

On his blog, he and others maintain a long list of people who engaged with him and others in the Critical Rationalist community, but then stopped, in a way that is very hard to read as anything but a public attack.

You are judging curi and FI (Fallible Ideas) via your standards (LW standards), not FI's standards. I think this is problematic.

I'd like to note that I am on that list (about halfway down). I am also a public figure in Australia, having founded a federal political party, based on epistemic principles, with nearly 9k members. I am okay with being on that list. Arguably, if there is something truly wrong with the list, I should have an issue with it. I knew about being on that list earlier this year, before I returned to FI. Being on the list was not a factor in my decision.

There is nothing immoral or malicious about curi.us/2215. I can understand why you would find it distasteful, but that's not a decisive reason to ban someone or condemn their actions.

A few hours ago, curi and I discussed elements of the ban and curi.us/2215 on his stream. I recommend watching a few minutes starting at 5:50 and at 19:00; for transparency you might also be interested in 23:40 -> 24:00. (You can watch at 2x speed; it should be fine.)

In particular, I discuss my presence on curi.us/2215 at 5:50.

You say:

a long list of people who engaged with him and others in the Critical Rationalist community

There are 33 people by my count (including me). The list spans a decade and is there for a particular purpose, and that purpose is not to publicly shame people into returning, or to be mean for the sake of it. I'd like to point out some quotes from the first paragraph of curi.us/2215:

This is a list of ppl who had discussion contact with FI and then quit/evaded/lied/etc. It would be good to find patterns about what goes wrong. People who left are welcome to come back and try again.

Notably, you don't end up on the list if you are active. Also, although it's not explicitly mentioned in the top paragraph, a crucial thing is that those on the list have left and avoided discussion about it. Discussion is much more important in FI than on most philosophy forums - it's how we learn from each other, make sure we understand, offer criticism, and assist with error correction. You're not under any obligation to discuss something, but if you have criticisms and refuse to share them, you're preventing error correction; and if you leave to evade criticism, then you're not living by your values and philosophy.

The people listed on curi.us/2215 have participated in a public philosophy forum for which there are established norms that are not typical and are different from LW's. FI views the act of truth-seeking differently. While our (LW/FI) schools of thought disagree on epistemology, both schools have norms that are related to their epistemic ideas. Ours look different.

It is unfair to punish someone for an act done outside of your jurisdiction, under different established norms. If curi were putting LW people on his list, or publishing off-topic stuff at LW, sure, take moderation action. Neither of those things happened. In fact, by your own account, the main way you even know about that list is via the sockpuppet you banned.

Sockpuppet accounts are not used to make the lives of their victims easier. By banning curi along with Periergo you have facilitated a (minor) victory for Periergo. This is not right.

a history of threats against people who engage with him

THIS IS A SERIOUS ALLEGATION! PLEASE PROVIDE QUOTES

curi prefers to discuss in public, so any threats should be easy to find and verify. I have never known curi to threaten people. He may criticise them, but he does not threaten them.

Notably, curi has consistently and loudly opposed violence and the initiation of force; if people ask him to leave them alone (provided they haven't e.g. committed a crime against him), he respects that.

being the historically most downvoted account in LessWrong history

This is not a reason to ban him, or anyone. Being disliked is not a reason for punishment.

Those three things in combination, a propensity for long unproductive discussions, a history of threats against people who engage with him, and being the historically most downvoted account in LessWrong history, make me overall think it's better for curi to find other places as potential discussion venues.

"a history of threats against people who engage with him" has not been established or substantiated.

he seems well-intentioned

I believe he is. As far as I can tell, he's gone to great personal expense and trouble to keep FI alive for no other reason than that his sense of morality demands it. (That might be oversimplifying things, but I think the essence is the same: he believes it is the right thing to do, and a necessary thing to do.)

I do also think his most recent series of posts and comments is overall much less bad than the posts and comments he posted a few years ago (where most of his negative karma comes from)

He has gained karma since briefly returning to LW. I think you should retract the part about him having negative karma because it misrepresents the situation. He could have made a new account and he would have positive karma now. That means your judgement is based on past behaviour that was already punished. This is double jeopardy. (Edit: after some discussion on FI, it looks like this isn't double jeopardy, just double punishment. Double jeopardy specifically refers to being tried for the same offense twice, not being punished twice.)

Moreover, curi is being punished for being honest and transparent. If he had registered a new account and hidden his identity, would you have banned him based only on his actions these past 1-2 months? If you can say yes, then fine, but I don't think your argument holds in this case; the only part that is verifiable rests on your disapproval of his discussion methods. Disagreeing with him is fine. I think a proportionate response would be a warning.

As it stands, no warning was given, and no attempt to learn his plans was made. I think doing those things would have been proportionate and appropriate. A ban is not.

It is significant that curi is not able to discuss this ban himself. I am doing this voluntarily, of my own accord. He was not able to defend himself or provide an explanation.

This is especially problematic as you specifically say you think he was improving compared with his conduct several years ago.

I do also think his most recent series of posts and comments is overall much less bad than the posts and comments he posted a few years ago (where most of his negative karma comes from), but they still don't strike me as great contributions to the LessWrong canon

This alone is not enough. A warning is proportionate.

are all low-karma

Unpopularity is no reason for a ban

and I assign too high of a probability that old patterns will repeat themselves.

How is this different to pre-crime?

I think, given he had deliberately changed his modus operandi weeks ago and has not posted in 13 days, this is unfair and overly judgmental.

You go on to say:

and I do think that was the right move, but I don't think it's enough to tip the scales on this issue.

What could curi have done differently which would have tipped the scales? If there is no acceptable thing he could have done, why was action not taken weeks ago when he was active?

I believe it is fundamentally unjust to delay action in this fashion without talking with him first. curi has an incredibly long track record of discussion, he is very open to it. He is not someone who avoids taking responsibility for things; quite the opposite. If you had engaged him, I am confident he would have discussed things with you.

and to generally err on the side of curating our userbase pretty heavily and maintaining high standards.

It makes sense that you want to cultivate the best rational forums you can. I think that is a good goal. However, again, there were other, less extreme and more proportionate actions that could have been taken first, especially seeing as curi had changed his LW discussion policy and was inactive at the time of the ban.

We presumably disagree on the meaning of 'high standards', but I don't think that's particularly relevant here.

This means making difficult moderation decisions long before it is proven "beyond a reasonable doubt" that someone is not a net-positive contributor to the site.

There were many alternative actions you could have taken. For example, a 1-month ban. Restricting curi to only posting on his own shortform. Warning him of the circumstances and consequences under conditions, etc.

In this case, I think it is definitely not proven beyond a reasonable doubt that curi is overall net-negative for the site

I'm glad you've mentioned this, but LW is not a court of law and you are not bound to those standards (and no punishment here is comparable to the punishment a court might impose). I think there are other good reasons for reconsidering curi's ban.

banning him might well be a mistake, but I think the probabilities weigh heavily enough in favor of the net-negative, and the worst-case outcomes are bad enough, that on net I think this is the right choice.

I think there is a critical point to be made here: you could have taken no action at this time and put a mod-notification for activity on his account. If he were to return and do something you deemed unacceptable, you could swiftly warn him. If he did it again, then a short-term ban. Instead, this is a sledge-sized banhammer used when other options were available. It is a decision that is now publicly on LW and indicates that LW is possibly intolerant of things other than irrationality. I don't think this is reflective of LW, and I think it reflects poorly on the moderation policies here. I don't think it needs to be that way, though.

I think a conditional unbanning (i.e. 1 warning, with the next action being a swift short ban) is an appropriate action for the moderation team to make, and I implore you to reconsider your decision.

If you think this is not appropriate, then I request you explain why 2 years is an appropriate length of time, and why Periergo and curi should have identical ban lengths.

The alternative to pacifism does not need to be so heavy-handed.

I’d also like to note that curi has published a post on his blog regarding this ban; I read it after drafting this reply: http://curi.us/2381-less-wrong-banned-me

Replies from: ChristianKl, Kaj_Sotala, lsusr, sil-ver, habryka4
comment by ChristianKl · 2020-09-18T10:12:25.853Z · LW(p) · GW(p)

You are judging curi and FI (Fallible Ideas) via your standards (LW standards), not FI's standards. I think this is problematic.

The above post explicitly says that the ban isn't a personal judgement of curi. It's rather a question of whether it's good or not to have curi around on LessWrong, and that's where LW standards matter.

Unpopularity is no reason for a ban

That seems like a sentiment indicative of ignoring the reason he was banned, which was a utilitarian argument. The fact that someone gets downvoted is Bayesian evidence that it's not valuable for people to interact with him on LessWrong.

How is this different to pre-crime?

If you imprison someone who murdered in the past because you are afraid they will murder again, that's not pre-crime in most common senses of the word.

Additionally, even if it were, LW is not a place with virtue-ethics standards but one with utilitarian standards. Taking action to prevent things that are likely to negatively affect LW in the future is perfectly in line with the idea of good gardening.

If you stand in your garden, you don't ask "what crimes did the plants commit and how should they be punished?"; you focus on the future.

Replies from: max-kaye
comment by Max Kaye (max-kaye) · 2020-09-18T11:21:15.267Z · LW(p) · GW(p)

The above post explicitly says that the ban isn't a personal judgement of curi. It's rather a question of whether it's good or not to have curi around on LessWrong, and that's where LW standards matter.

Isn't it even worse, then, because no action was necessary?

But more to the point, isn't the determination that X person is not good to have around a personal judgement? It doesn't apply to everyone else.

I think what habryka meant was that he wasn't making a personal judgement.

comment by Kaj_Sotala · 2020-09-18T19:52:53.660Z · LW(p) · GW(p)
This is not a reason to ban him, or anyone. Being disliked is not a reason for punishment.

The traditional guidance for up/downvotes has been "upvote what you would like to see more of, downvote what you would like to see less of". If this is how votes are interpreted, then heavy downvotes imply "the forum's users would on average prefer to see less content of this kind". Someone posting the kind of content that's unwanted on a forum seems like a reasonable reason to bar that person from the forum in question.

I agree with "being disliked is not a reason for punishment", but people also have the right to choose who they want to spend their time with, even if someone who they preferred not to spend time with viewed that as being punished. In my book, banning people from a private forum is more like "choosing not to invite someone to your party again, after they previously caused others to have a bad time" than it is like "punishing someone".

Replies from: GavinPalmer1984, max-kaye
comment by Gavin Palmer (GavinPalmer1984) · 2020-09-22T12:31:10.861Z · LW(p) · GW(p)

I'm a fan of solving problems with technology. One way to solve this problem of people not liking an author's content is to allow users to put people on an ignore list (perhaps for a set period of time).
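
To make the proposal concrete, here is a minimal sketch of what such a feature might look like, written in TypeScript; all of the names here are hypothetical, not from LessWrong's actual codebase:

```typescript
// Hypothetical sketch of a per-user ignore list with optional expiry.
// Nothing here is from LessWrong's actual codebase.

interface IgnoreEntry {
  ignoredUserId: string;
  expiresAt?: number; // epoch milliseconds; undefined means "indefinitely"
}

class IgnoreList {
  private entries = new Map<string, IgnoreEntry>();

  // Ignore a user, optionally only for durationMs milliseconds.
  ignore(userId: string, durationMs?: number): void {
    this.entries.set(userId, {
      ignoredUserId: userId,
      expiresAt: durationMs === undefined ? undefined : Date.now() + durationMs,
    });
  }

  unignore(userId: string): void {
    this.entries.delete(userId);
  }

  // True if the user is currently ignored; expired entries are cleared lazily.
  isIgnored(userId: string): boolean {
    const entry = this.entries.get(userId);
    if (!entry) return false;
    if (entry.expiresAt !== undefined && entry.expiresAt < Date.now()) {
      this.entries.delete(userId);
      return false;
    }
    return true;
  }
}

// Usage: hide ignored authors when rendering a thread.
// const visible = comments.filter(c => !ignoreList.isIgnored(c.authorId));
```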

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2020-09-22T14:45:07.365Z · LW(p) · GW(p)

How many people here remember Usenet's kill files?

comment by Max Kaye (max-kaye) · 2020-09-19T07:11:37.887Z · LW(p) · GW(p)

The traditional guidance for up/downvotes has been "upvote what you would like to see more of, downvote what you would like to see less of". If this is how votes are interpreted, then heavy downvotes imply "the forum's users would on average prefer to see less content of this kind".

You're using quotation marks, but I am not sure what you're quoting; do you just mean to emphasize/offset those clauses?

but people also have the right to choose who they want to spend their time with,

Sure, that might be part of the reason curi hadn't been active on LW for 13 days at the time of the ban.

(continued)

even if someone who they preferred not to spend time with viewed that as being punished.

I don't know if curi thinks it's punishment. I think it's punishment, and I think most people would agree that 'a ban' would be an answer to the question (in online-forum contexts, generally) 'What is an appropriate punishment?'. That would mean a ban is a punishment.

LW mods can do what they want; in essence it's their site. I'm arguing:

  1. it's unnecessary
  2. it was done improperly
  3. it reflects badly on LW and creates a culture hostile to opposing ideas
  4. (3) is antithetical to the opening lines of the LessWrong FAQ (which I quote below). Note: I'm introducing this argument in this post; I didn't mention it originally.
  5. significant parts of habryka's post were factually incorrect. It was noted in FI, btw, that a) habryka's comments were libel, and b) that curi's reaction--quoted below--is mild and undercuts habryka's claim.

curi wrote (in his post on the LW ban):

Those three things in combination, a propensity for long unproductive discussions, a history of threats against people who engage with him, and being the historically most downvoted account in LessWrong history, make me overall think it's better for curi to find other places as potential discussion venues.

I didn’t threaten anyone. I’m guessing it was a careless wording. I think habryka should retract or clarify it. Above habryka used “attack[]” as a synonym for criticize. I don’t like that but it’s pretty standard language. But I don’t think using “threat[en]” as a synonym for criticize is reasonable.

“threaten” has meanings like “state one's intention to take hostile action against someone in retribution for something done or not done” and “express one's intention to harm or kill“ (New Oxford Dictionary). This is the one thing in the post that I strongly object to.

from the FI discussion:

JustinCEO: i think curi's response to this libel is written in a super mild way

JustinCEO: which notably contrasts with being the sort of person who would have "a history of threats against people who engage with him" in the first place

LessWrong FAQ (original emphasis)

LessWrong is a community dedicated to improving our reasoning and decision-making. We seek to hold true beliefs and to be effective at accomplishing our goals. More generally, we want to develop and practice the art of human rationality.

To that end, LessWrong is a place to 1) develop and train rationality, and 2) apply one’s rationality to real-world problems.

I don't think the things people have described (in this thread) as seemingly important parts of LW are at all reflected by this quote; rather, they contradict it.

Replies from: habryka4
comment by habryka (habryka4) · 2020-09-19T17:06:34.762Z · LW(p) · GW(p)

significant parts of habryka's post were factually incorrect.

I am not currently aware of any factual inaccuracies, but would be happy to correct any you point out. 

The only thing you pointed out was something about the word "threat" being wrong, but that only appears to be true under some very narrow definition of threat. This might be weird rationalist jargon, but I've reliably used the word "threat" to simply mean signaling some kind of intention of inflicting some kind of punishment in response to some condition on the other person's part. Curi and other people from FI have done this repeatedly, and the "list of people who have evaded/lied/etc." is exactly one such threat, whether explicitly labeled as such or not.

The average LessWrong user would pretty substantially regret having engaged with curi if they later ended up on that list, so I do think it's a pretty concrete punishment. And while there might be some chance you are unaware of the negative consequences, that doesn't change the reality much: given the way I've seen curi operate on the site, engaging with him is a trap that people are likely to regret.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2020-09-19T22:15:21.678Z · LW(p) · GW(p)

I've reliably used the word "threat" to simply mean signaling some kind of intention of inflicting some kind of punishment in response to some condition on the other person's part. Curi and other people from FI have done this repeatedly, and the "list of people who have evaded/lied/etc." is exactly one such threat, whether explicitly labeled as such or not.

This game-theoretic concept of "threat" is fine, but underdetermined: what counts as a threat in this sense depends on where the "zero point" is [LW · GW]; what counts as aggression versus self-defense depends on what the relevant "property rights" are. (Scare quotes on "property rights" because I'm not talking about legal claims, but "property rights" is an apt choice of words, because I'm claiming that the way people negotiate disputes that don't rise to the level of dragging in the (slow, expensive) formal legal system has a similar structure.)

If people have a "right" to not be publicly described as lying, evading, &c., then someone who puts up a "these people lied, evaded, &c." page on their own website is engaging in a kind of aggression. The page functions as a threat: "If you don't keep engaging in a way that satisfies my standards of discourse, I'll publicly call you a liar, evader, &c.."

If people don't have a "right" to not be publicly described as lying, evading, &c., then a website administrator who cites a user's "these people lied, evaded, &c." page on their own website as part of a rationale for banning that user, is engaging in a kind of aggression. The ban functions as a threat: "If you don't cede your claim on being able to describe other people as lying, evading, &c., I won't let you participate in this forum."

The size of the website administrator's threat depends on the website's "market power." Less Wrong is probably small enough and niche enough such that the threat doesn't end up controlling anyone's off-site behavior: anyone who perceives not being able to post on Less Wrong as a serious threat is probably already so deeply socially-embedded into our little robot cult, that they either have similar property-rights intuitions as the administrators, or are too loyal to the group to publicly accuse other group members as lying, evading, &c., even if they privately think they are lying, evading, &c.. (Nobody likes self-styled whistleblowers!) But getting kicked off a service with the market power of a Google, Facebook, Twitter, &c. is a sufficiently big deal to sufficiently many people such that those websites' terms-of-service do exert some controlling pressure on the rest of Society.

What are the consequences of each of these "property rights" regimes?

In a world where people have a right to not be publicly described as lying, evading, &c., then people don't have to be afraid of losing reputation on that account. But we also lose out on the possibility of having a public accounting of who has actually in fact lied, evaded, &c.. We give up on maintaining the coordination equilibrium [LW · GW] such that words like "lie" have a literal meaning that can actually be true or false, rather than the word itself simply constituting an attack [LW · GW].

Which regime better fulfills our charter of advancing the art of human rationality? I don't think I've written this skillfully enough for you to not be able to guess what answer I lean towards, but you shouldn't trust my answer if it seems like something I might lie or evade about! You need to think it through for yourself.

Replies from: Vaniver
comment by Vaniver · 2020-09-20T01:46:06.935Z · LW(p) · GW(p)

For what it's worth, I think a decision to ban would stand on just his pursuit of conversational norms that reward stamina over correctness, in a way that I think makes LessWrong worse at intellectual progress. I didn't check out this page, and it didn't factor into my sense that curi shouldn't be on LW.

I also find it somewhat worrying that, as I understand it, the page was a combination of "quit", "evaded", and "lied", of which 'quit' is not worrying (I consider someone giving up on a conversation with curi understandable instead of shameful), and "quit" getting wrapped up in the "&c." instead of being the central example seems like it's defining away my main crux.

Replies from: Vaniver
comment by Vaniver · 2020-09-21T17:11:54.751Z · LW(p) · GW(p)

To elaborate on this, I think there are two distinct issues: "do they have the right norms?" and "do they do norm enforcement?". The second is normally good instead of problematic, but makes the first much more important than it would be otherwise. I see Zack_M_Davis as pointing out "hey, if we don't let people enforce norms because that would make normbreakers feel threatened, do we even have norms?", which is a valid point, but which feels somewhat irrelevant to the curi question.

comment by lsusr · 2020-09-18T12:13:23.682Z · LW(p) · GW(p)

If I understand you correctly then your primary argument appears to be that a ban is (1) too harsh a judgment where a warning would have sufficed, (2) that curi ought to have some sort of appeals process and (3) that habryka's top-level comment does not provide detailed citations for all the accusations against curi.

(1) Curi was [LW(p) · GW(p)] warned at least once.

(2) Curi is being banned for wasting time with long, unproductive conversations. An appeals process would produce another long, unproductive conversation.

(3) Specific quotes are unnecessary. It is blindingly obvious from a glance through curi's profile, and even from curi's response you linked to, that curi is damaging to productive dialogue on Less Wrong.

The strongest claim against curi is "a history of threats against people who engage with him [curi]". I was able to confirm this via a quick glance through curi's past behavior on this site. In this comment [LW(p) · GW(p)] curi threatens to escalate a dialogue by mirroring it off of this website. By the standards of collaborative online dialogue, this constitutes a threat against someone who engaged with him.

Edit: grammar.

Replies from: max-kaye
comment by Max Kaye (max-kaye) · 2020-09-18T12:53:45.966Z · LW(p) · GW(p)

lsusr said:

(1) Curi was [LW(p) · GW(p)] warned at least once.

I'm reasonably sure the Slack comments refer to events from 3 years ago, not anything in the last few months. I'll check, though.

There are some other comments about recent discussion in that thread, like this: https://www.lesswrong.com/posts/iAnXcZ5aGZzNc2J8L/the-law-of-least-effort-contributes-to-the-conjunction?commentId=38FzXA6g54ZKs3HQY [LW(p) · GW(p)]

gjm said:

I had not looked, at that point; I took "mirrored" to mean taking copies of whole discussions, which would imply copying other people's writing en masse. I have looked, now. I agree that what you've put there so far is probably OK both legally and morally.

My apologies for being a bit twitchy on this point; I should maybe explain for the benefit of other readers that the last time curi came to LW, he did take a whole pile of discussion from the LW slack and copy it en masse to the publicly-visible internet, which is one reason why I thought it plausible he might have done the same this time.

I don't think there is a case for (1). Unless gjm is a mod and there are things I don't know?

lsusr said:

(2) Curi is being banned for wasting time with long, unproductive conversations. An appeals process would produce another long, unproductive conversation.

habryka explicitly mentions curi changing his LW commenting policy to be "less demanding". I can see the motivation for expediency, but the mods don't have to speedrun it. I think it's bad there wasn't any communication beforehand.

lsusr said:

(3) Specific quotes are unnecessary. It is blindingly obvious from a glance through curi's profile, and even from curi's response you linked to, that curi is damaging to productive dialogue on Less Wrong.

I don't think that's the case. His net karma has increased, and judging him for content on his blog - not his content on LW - does not establish whether he was 'damaging to productive dialogue on Less Wrong'.

His posts on Less Wrong have been contributions; for example, www.lesswrong.com/posts/tKcdTsMFkYjnFEQJo/can-social-dynamics-explain-conjunction-fallacy-experimental [LW · GW] is a direct response to one of EY's posts, and it was net-upvoted. He followed that up with two more net-upvoted posts.

This is not the track record of someone wanting to waste time. I know there are disagreements between LW and curi/FI. If that's the main point of contention, and that's why he's being banned, then so be it. But he doesn't deserve to be mistreated and have baseless accusations thrown at him.

lsusr said:

The strongest claim against curi is "a history of threats against people who engage with him [curi]". I was able to confirm this via a quick glance through curi's past behavior on this site. In this comment curi threatens to escalate a dialogue by mirroring it off of this website. By the standards of collaborative online dialogue, this constitutes a threat against someone who engaged with him.

We have substantial disagreements about what constitutes a threat, in that case. I think a threat needs to involve something like danger or violence. It's not a 'threat' to copy public discussion under fair use for criticism and commentary.

I googled the definition, and these are the two results (for define:threat):

  • a statement of an intention to inflict pain, injury, damage, or other hostile action on someone in retribution for something done or not done.
  • a person or thing likely to cause damage or danger.

Neither of these apply.

Replies from: lsusr
comment by lsusr · 2020-09-18T13:00:19.468Z · LW(p) · GW(p)

I googled the definition, and these are the two results (for define:threat):

  • a statement of an intention to inflict pain, injury, damage, or other hostile action on someone in retribution for something done or not done.
  • a person or thing likely to cause damage or danger.

Neither of these apply.

I prefer this definition, "a declaration of an intention or determination to inflict punishment, injury, etc., in retaliation for, or conditionally upon, some action or course; menace". I think the word "retribution" implies undue justice. A "threat" need only imply retaliation by hostile action, not retribution.

We have substantial disagreements about what constitutes a threat,

Evidently yes, as do dictionaries.

Replies from: habryka4, max-kaye
comment by habryka (habryka4) · 2020-09-18T16:55:29.993Z · LW(p) · GW(p)

This is the definition that I had in mind when I wrote the notice above; sorry for any confusion it might have caused.

Replies from: max-kaye
comment by Max Kaye (max-kaye) · 2020-09-19T06:57:05.271Z · LW(p) · GW(p)

This is the definition that I had in mind when I wrote the notice above; sorry for any confusion it might have caused.

This definition doesn't describe anything curi has done (see my sibling reply, linked below), at least not that I've seen. I'd appreciate any quotes you can provide.

https://www.lesswrong.com/posts/PkpuvsFYr6yuYnppy/open-and-welcome-thread-september-2020?commentId=H2tyDgoRFov8Xs8HS [LW(p) · GW(p)]

comment by Max Kaye (max-kaye) · 2020-09-19T06:55:23.718Z · LW(p) · GW(p)

define:threat

I prefer this definition, "a declaration of an intention or determination to inflict punishment, injury, etc., in retaliation for, or conditionally upon, some action or course; menace".

This definition seems okay to me.

undue justice

I don't know how justice can be undue; do you mean something like undue or excessive prosecution? Or persecution, perhaps? Though I don't think either prosecution or persecution describes anything curi's done on LW. If you have counterexamples, I would appreciate it if you could quote them.

We have substantial disagreements about what constitutes a threat,

Evidently yes, as do dictionaries.

I don't think the dictionary definitions disagree much; it's not a substantial disagreement. thesaurus.com seems to agree; it lists them as roughly strong synonyms. The crux is retribution vs. retaliation, and retaliation is more general. The mafia can threaten shopkeepers with violence if they don't pay protection. I think retaliation is the better-fitting word.

However, this still does not apply to anything curi has done!

Replies from: lsusr
comment by lsusr · 2020-09-19T07:49:04.807Z · LW(p) · GW(p)

I do not think the core disagreement between you and me comes from a failure of me to explain my thoughts clearly enough. I do not believe that elaborating upon my reasoning would get you to change your mind about the core disagreement. Elaborating upon my position would therefore waste both of our time.

The same goes for your position. The many words you have already written have failed to move me. I do not expect even more words to change this pattern.

Curi is being banned for wasting time with long, unproductive conversations. It would be ironic for me to embroil myself in such a conversation as a consequence.

Replies from: max-kaye
comment by Max Kaye (max-kaye) · 2020-09-19T14:08:56.386Z · LW(p) · GW(p)

I do not think the core disagreement between you and me comes from a failure of me to explain my thoughts clearly enough.

I don't either.

The same goes for your position. The many words you have already written have failed to move me. I do not expect even more words to change this pattern.

Sure, we can stop.

Curi is being banned for wasting time with long, unproductive conversations.

I don't know anywhere I could go to find out that this is a bannable offense. If it is not in a body of rules somewhere, then it should be added. If the mods are unwilling to add it to the rules, he should be unbanned, simple as that.

Maybe that idea is worth discussing? I think it's reasonable. If something is an offense, it should be publicly stated as such, and new and continuing users should be able to point to it and say "that's why". It shouldn't feel like it was made up on the fly as a special case -- it's a problem when new rules are invented ad hoc and not canonicalized. (I don't have a problem with JIT rulebooks; they're practical.)

comment by Rafael Harth (sil-ver) · 2020-09-18T08:33:53.279Z · LW(p) · GW(p)

Arguably, if there is something truly wrong with the list, I should have an issue with it.

This is non-obvious. It seems like you are extrapolating from yourself to everyone else. In my model, how much you would mind being on such a list is largely determined by how much social anxiety you generally feel. I would very much mind being on that list, even if I felt like it was justified.

Knowing the existence of the list (again, even if it were justified) would also make me uneasy to talk to curi.

Replies from: max-kaye
comment by Max Kaye (max-kaye) · 2020-09-18T08:50:36.624Z · LW(p) · GW(p)

Arguably, if there is something truly wrong with the list, I should have an issue with it.

This is non-obvious. It seems like you are extrapolating from yourself to everyone else. In my model, how much you would mind being on such a list is largely determined by how much social anxiety you generally feel. I would very much mind being on that list, even if I felt like it was justified.

I think this is fair, and additionally I maybe shouldn't have used the word "truly"; it's a very laden word. I do think that, on the balance of probabilities, my case does reduce the likelihood of something being foundationally wrong with the list, though. (Note: I've said this in what I think is a LW-friendly way. I'd say it differently on FI.)

One thing I do think, though, is that people's social anxiety does not make things right or wrong in general, but it can be decisive when thinking about a single action.

Another thing to point out is that anonymous participation in FI is okay; it's reasonably easy to start with an anonymous/pseudonymous email. curi's blog/forum hybrid also allows anonymous posting. FI is very pro-free-speech.

Knowing the existence of the list (again, even if it were justified) would also make me uneasy to talk to curi.

I think that's okay; curi isn't trying to attract everyone as an audience, and FI isn't designed to be a forum which makes people feel comfortable, as such. It has different goals from e.g. LW or a philosophy subreddit.

I think we'd agree that norms at FI aren't typical and aren't for everyone. It's a place where anyone can post, but that doesn't mean that everyone should, sorta thing.

comment by habryka (habryka4) · 2020-09-19T17:13:18.721Z · LW(p) · GW(p)

That means your judgement is based on past behaviour that was already punished.

I don't understand this sentence at all. How has he already been punished for his past behavior? Indeed, he has never been banned before, so there was never any previous punishment. 

comment by Sherrinford · 2020-09-17T15:15:38.294Z · LW(p) · GW(p)

I welcome the transparency, but this "I don't want others to update on this as being much evidence about whether it makes sense to have curi in their communities" seems a bit weird to me. "a propensity for long unproductive discussions, a history of threats against people who engage with him" and "I assign too high of a probability that old patterns will repeat themselves" seem like quite a judgement and why would someone else not update on this? Additionally, I think that while a ban is sometimes necessary (e.g. harassment), a 2-year ban seems like quite a jump. I could think of a number of different sanctions, e.g. blocking someone from commenting in general; giving users the option to block someone from commenting; blocking someone from writing anything; limiting someone's authority to her own shortform; all of these things for some time.

Replies from: habryka4, habryka4
comment by habryka (habryka4) · 2020-09-17T18:02:31.621Z · LW(p) · GW(p)

"I don't want others to update on this as being much evidence about whether it makes sense to have curi in their communities" seems a bit weird to me. "a propensity for long unproductive discussions, a history of threats against people who engage with him" and "I assign too high of a probability that old patterns will repeat themselves" seem like quite a judgement and why would someone else not update on this?

The key thing I wanted to communicate is that it seems quite plausible to me that these patterns are the result of curi interfacing specifically with the LessWrong culture in unhealthy ways. I can imagine him interfacing with other cultures with much less bad results. 

I also said "I don't want others to think this is much evidence", not "this is no evidence". Of course it is some evidence, but I think overall I would expect people to update a bit too much on this, and as I said, I wouldn't be very surprised to see curi participate well in other online communities.

Replies from: Benito
comment by Ben Pace (Benito) · 2020-09-17T20:00:01.536Z · LW(p) · GW(p)

I also didn't understand what your sentence was saying. It read to me as "I don't want people to update on this post". When you pointed specifically to LW's culture (which is very argumentative) as possibly being a key cause, it was clearer what you were saying. Thanks for the clarification (and for trying to avoid negative misinterpretations of your comment).

comment by habryka (habryka4) · 2020-09-17T18:07:57.020Z · LW(p) · GW(p)

Additionally, I think that while a ban is sometimes necessary (e.g. harassment), a 2-year ban seems like quite a jump. I could think of a number of different sanctions, e.g. blocking someone from commenting in general; giving users the option to block someone from commenting; blocking someone from writing anything; limiting someone's authority to her own shortform; all of these things for some time.

I am not sure. I really don't like the world where someone is banned from commenting on other people's posts but can still make top-level posts, or is banned from making top-level posts but can still comment. Both of these end up in really weird equilibria where you sometimes can't reply to conversations you started or respond to objections other people make to your arguments, and that just seems really bad.

I also don't really know what those things would have accomplished. I don't think they would have reduced the uncertainty about whether curi is a good fit for LessWrong very much, and it feels like they could have just dragged things out into a long period of conflict that would have been more stressful for everyone.

The "blocking someone from writing anything" does feel like an option. Like, at least you can still vote and read. I do think that seems potentially like the better option, but I don't think we currently actually have the technical infrastructure to make that happen. I might consider building that for future occasions like this.

Replies from: Richard_Kennaway, Sherrinford, Sherrinford
comment by Richard_Kennaway · 2020-09-18T10:06:24.575Z · LW(p) · GW(p)
The "blocking someone from writing anything" does feel like an option. Like, at least you can still vote and read. I do think that seems potentially like the better option, but I don't think we currently actually have the technical infrastructure to make that happen. I might consider building that for future occasions like this.

Blocking from writing but allowing to vote seems like a really bad idea. Being read-only is already available — that's the capability of anyone without an account.

Generally I'd be against complicated subsets of permissions for various classes of disfavoured members. Simpler to say that someone is either a member, or they're not.

comment by Sherrinford · 2020-09-18T07:11:53.384Z · LW(p) · GW(p)

Additionally, I'd like to know whether people are warned before they are banned, and whether they are asked about their own view of the matter.

Replies from: Vaniver, max-kaye
comment by Vaniver · 2020-09-18T16:39:59.309Z · LW(p) · GW(p)

Sometimes people are warned, and sometimes they aren't, depending on the circumstances. By volume, the vast majority of our bans are spammers, who aren't warned. Of users who have posted more than 3 posts to the site, I believe over half (and probably closer to 80%?) are warned, and many are warned and then not banned. [See this list [LW · GW].]

Replies from: habryka4
comment by habryka (habryka4) · 2020-09-18T16:59:53.657Z · LW(p) · GW(p)

Yeah, almost everyone who we ban who has any real content on the site is warned. It didn't feel necessary for curi, because he has already received so much feedback about his activity on the site over the years (from many users as well as mods), and I saw very little probability of things changing because of a warning.

Replies from: max-kaye
comment by Max Kaye (max-kaye) · 2020-09-19T15:41:04.911Z · LW(p) · GW(p)

Yeah, almost everyone who we ban who has any real content on the site is warned. It didn't feel necessary for curi, because he has already received so much feedback about his activity on the site over the years (from many users as well as mods), and I saw very little probability of things changing because of a warning.

I think you're denying him an important chance to do error correction via that decision. (This is a particularly important concept in CR/FI)

curi evidently wanted to change some things about his behaviour; otherwise he wouldn't have updated his commenting policy. How do you know he wouldn't have updated it more if you'd warned him? That's exactly the type of criticism we (CR/FI) think is useful.

That sort of update is exactly the type of thing that would be reasonable to expect next time he came back (considering that he was away for 2 weeks when the ban was announced). He didn't want to be banned, and he didn't want to have shitty discussions, either. (I don't know those things for certain, but I have high confidence.)

What probability would you assign to him continuing just as before if you said something like "If you keep continuing what you're doing, I will ban you. It's for these reasons." Ideally, you could add "Here they are in the rules/faq/whatever".

Practically, the chance of him changing is lower now, because there isn't any point if he's never given any chances. So in some ways you were exactly right to think there's a low probability of him changing; it's just that this was due to your actions. Actions which don't need to be permanent, might I add.

Replies from: Vaniver
comment by Vaniver · 2020-09-19T15:51:34.605Z · LW(p) · GW(p)

I think you're denying him an important chance to do error correction via that decision. (This is a particularly important concept in CR/FI)

I agree that if we wanted to extend him more opportunities/resources/etc., we could, and that a ban is a decision to not do that.  But it seems to me like you're focusing on the benefit to him / "is there any chance he would get better?", as opposed to the benefit to the community / "is it reasonable to expect that he would get better?". 

As stewards of the community, we need to make decisions taking into account both the direct impact (on curi for being banned or not) and the indirect impact (on other people deciding whether or not to use the site, or their experience being better or worse).

comment by Max Kaye (max-kaye) · 2020-09-18T09:16:15.829Z · LW(p) · GW(p)

I'm not sure about other cases, but in this case curi wasn't warned. If you're interested, he and I discuss the ban in the first 30 minutes of this stream.

comment by Sherrinford · 2020-09-17T18:33:42.291Z · LW(p) · GW(p)

I agree with your first paragraph.

Whether someone is a "good fit" should already be visible from their karma (and I think karma then translates into karma points per vote?), and I don't see why that should additionally lead to a ban or something. A ban, or a writing ban, could be a result of destructive behavior.

I think there is no real point in blocking people from reading. Writing - okay (though, after all, things start out as personal blog posts in any case and don't have to be made frontpage posts).

comment by Max Kaye (max-kaye) · 2020-09-18T02:20:58.370Z · LW(p) · GW(p)

FYI I am on that list and fine with it - curi and I discussed this post a bit here: https://www.youtube.com/watch?v=MxVzxS8uMto

I think you're wrong on multiple counts. Will reply more in a few hours.

comment by Max Kaye (max-kaye) · 2020-09-18T08:57:38.055Z · LW(p) · GW(p)

FYI and FWIW curi has updated the post to remove emails and reword the opening paragraph.

http://curi.us/2215-fallible-ideas-post-mortems and http://curi.us/2215-fallible-ideas-post-mortems#18059

comment by Wei Dai (Wei_Dai) · 2020-09-13T08:12:56.765Z · LW(p) · GW(p)

I don't recall learning in school that most of "the bad guys" from history (e.g., Communists, Nazis) thought of themselves as "the good guys" fighting for important moral reasons. It seems like teaching that fact, and instilling moral uncertainty in general into children, would prevent a lot of serious man-made problems (including problems we're seeing play out today). So why hasn't civilization figured that out already? Or is not teaching moral uncertainty some kind of Chesterton's Fence, and teaching it widely would make the world even worse off on expectation?

Replies from: Wei_Dai, lsusr, ryan_b, Vaniver, ESRogs, Kaj_Sotala, cousin_it, gbear605, ChristianKl, RyanCarey, TurnTrout
comment by Wei Dai (Wei_Dai) · 2020-09-13T21:30:52.324Z · LW(p) · GW(p)

I wonder if anyone has ever written a manifesto for moral uncertainty, maybe something along the lines of:

We hold these truths to be self-evident, that we are very confused about morality. That these confusions should be properly reflected as high degrees of uncertainty in our moral epistemic states. That our moral uncertainties should inform our individual and collective actions, plans, and policies. ... That we are also very confused about normativity and meta-ethics and don't really know what we mean by "should", including in this document...

Yeah, I realize this would be a hard sell in today's environment, but what if [LW · GW] building Friendly AI requires a civilization sane enough to consider this common sense? I mean, for example, how can it be a good idea to gift a super-powerful "corrigible" or "obedient" AI to a civilization full of people with crazy amounts of moral certainty?

Replies from: lsusr
comment by lsusr · 2020-09-18T12:42:20.790Z · LW(p) · GW(p)

Non-dualist philosophies such as Zen place high value on confusion (they call it "don't know mind") and have a sophisticated framework for communicating this idea. Zen is one of the alternative intellectual traditions I alluded to in my controversial post [LW · GW] about ethical progress.

The Dao De Jing 道德经, written 2.5 thousand years ago, includes strong warnings against ontological certainty (and, by extension, moral certainty). If we naïvely apply the Lindy Effect, then Chinese civilization is likely to continue for thousands more years while Western science annihilates itself after mere centuries. This may not be a coincidence.
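
To spell out the arithmetic behind that claim (a rough sketch, assuming the simplest form of the Lindy Effect and treating both start dates as loose assumptions):

```latex
% Simplest form of the Lindy Effect: for a non-perishable thing of age t,
% the expected remaining lifetime is proportional to t.
\[
  \mathbb{E}\left[\, T_{\text{remaining}} \mid \text{age} = t \,\right] \propto t
\]
% Chinese civilization, dated here from the Dao De Jing (an assumption):
%   t is roughly 2500 years  =>  remaining lifetime on the order of millennia.
% Western science, dated from the Scientific Revolution (an assumption):
%   t is roughly 400 years   =>  remaining lifetime on the order of centuries.
```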

Here is the manifesto you are looking for:

道可道也,非恒道也。名可名也,非恒名也。无名,万物之始也;有名,万物之母也。故恒无欲也,以观其眇;恒有欲也,以观其所徼。两者同出,异名同谓。玄之又玄,众眇之门。

―Chapter 1 of the Dao De Jing 道德经

Unfortunately, the duality of emptiness and form is difficult to translate into English. (A rough rendering, losing much: "The way that can be spoken is not the constant way; the name that can be named is not the constant name. Nameless, it is the beginning of the myriad things; named, it is the mother of the myriad things. These two emerge together but differ in name; mystery upon mystery, the gateway of all subtleties.")

comment by lsusr · 2020-09-18T12:27:42.742Z · LW(p) · GW(p)

So why hasn't civilization figured that out already?

States evolve to perpetuate themselves. Civilization has figured out (in the blind idiot god [LW · GW] sense of "figured out") that moral uncertainty is teachable and decreases trust in the state ideology. You have it backward. The states in existence today promote moral certainty in children for exactly the same reason the Communist and Nazi states did.

comment by ryan_b · 2020-09-15T20:39:09.577Z · LW(p) · GW(p)
Or is not teaching moral uncertainty some kind of Chesterton's Fence, and teaching it widely would make the world even worse off on expectation?

I expect it is this. General moral uncertainty has all kinds of problems in expectation, like:

  • It ruins morality as a coordination mechanism among the group.
  • It weakens moral conviction in the individual, which is super bad from the perspective of people who believe there are direct consequences for a lack of conviction (like Hell).
  • It creates space for different and possibly weird moralities to arise; I don't know of any moral systems that think it is a good thing to be a member of a different moral system, so I expect all the current moral systems to agree on this one.

I feel like the first bullet point is the real driving force behind the problems it would prevent, anyhow. Moral uncertainty doesn't cause people to do good things; it keeps them from doing good things (that are different from other groups' definitions of good things).

comment by Vaniver · 2020-09-18T16:46:03.989Z · LW(p) · GW(p)

So why hasn't civilization figured that out already? Or is not teaching moral uncertainty some kind of Chesterton's Fence, and teaching it widely would make the world even worse off on expectation?

This is sort of a rehash of sibling comments, but I think there are two factors to consider here.

The first is the rules. It is very important that people drive on the correct side of the road, and not have uncertainty about which side of the road is correct, and not very important whether they have a distinction between "correct for <country> in <year>" and "correct everywhere and for all time."

The second is something like the goal. At one point, people thought it was very important that society have a shared goal, and worked hard to make it expansive; things like "freedom of religion" are the mechanisms civilization figured out to have narrow shared goals (like "keep the peace") but not expansive shared goals (like "get as many people to Catholic Heaven as possible"). It is unclear to me whether we're better off with moral uncertainty as a generator for "narrow shared goals", or whether narrow shared goals are what we should be going for.

comment by ESRogs · 2020-09-14T19:55:24.013Z · LW(p) · GW(p)

It seems like teaching that fact, and instilling moral uncertainty in general into children

I would guess that teaching that fact is not enough to instill moral uncertainty. And that instilling moral uncertainty would be very hard.

comment by Kaj_Sotala · 2020-09-16T11:28:29.725Z · LW(p) · GW(p)

Often, expressing any understanding of the motives of a "bad guy" is taken as signaling acceptance of their actions. There was e.g. controversy around the movie Downfall over this:

Downfall was the subject of dispute by critics and audiences in Germany before and after its release, with many concerned of Hitler's role in the film as a human being with emotions in spite of his actions and ideologies.[40][30][49] The portrayal sparked debate in Germany due to publicity from commentators, film magazines, and newspapers,[25][50] leading the German tabloid Bild to ask the question, "Are we allowed to show the monster as a human being?".[25]
It was criticized for its scenes involving the members of the Nazi party,[23] with author Giles MacDonogh criticizing the portrayals as being sympathetic towards SS officers Wilhelm Mohnke and Ernst-Günther Schenck,[51] the former of whom was accused of murdering a group of British prisoners of war in the Wormhoudt massacre.[N 1]

comment by cousin_it · 2020-09-15T15:45:30.304Z · LW(p) · GW(p)

Wouldn't more moral uncertainty make people less certain that Communism or Nazism were wrong?

comment by gbear605 · 2020-09-13T23:28:28.768Z · LW(p) · GW(p)

That's definitely how it was taught in my high school, so it's not unknown.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2020-09-14T17:54:17.015Z · LW(p) · GW(p)

Did it make you or your classmates doubt your own morality a bit? If not, maybe it needs to be taught along with the outside view and/or the teacher needs to explicitly talk about how the lesson from history is that we shouldn't be so certain about our morality...

comment by ChristianKl · 2020-09-16T15:45:45.504Z · LW(p) · GW(p)

We want to teach children to accept the norms of our society and the narrative we tell about it. A lot of what we teach is essentially pro-system propaganda.

Teaching moral uncertainty doesn't help with that, and it also doesn't help with getting students to score better on standardized tests, which was the main goal of the educational reforms of the last decades.

Replies from: lsusr
comment by lsusr · 2020-09-18T12:23:09.492Z · LW(p) · GW(p)

Compulsory education is an organ of the state. Nation-states evolve to perpetuate their own existence. Teaching moral uncertainty is counterproductive to maintaining the norms of a nation-state.

comment by RyanCarey · 2020-09-15T10:34:20.795Z · LW(p) · GW(p)

I guess it's because high-conviction ideologies outperform low-conviction ones, including nationalistic and political ideologies, and religions. Dennett's Gold Army/Silver Army analogy explains how conviction can build loyalty and strength, and a similar thing is probably true for movement-builders. Also, conviction might make adherents feel better, and therefore simply be more attractive.

comment by TurnTrout · 2020-09-14T19:29:36.843Z · LW(p) · GW(p)

If I had to guess, I'd guess the answer is some combination of "most people haven't realized this" and "of those who have realized it, they don't want to be seen as sympathetic to the bad guys". 

comment by Rafael Harth (sil-ver) · 2020-09-06T08:23:48.287Z · LW(p) · GW(p)

The full-text version [LW · GW] of the Embedded Agency sequence has colors! And it's not just in the form of an image, but they're actually embedded as text. Is there any way a normal LW user can do the same with any of the three editors? (I.e., LW docs, Draft-JS, or Markdown.)

Replies from: habryka4
comment by habryka (habryka4) · 2020-09-06T17:35:08.434Z · LW(p) · GW(p)

Alas, not. The reason is a bit silly. I can enable text colors in our editor, but this has the unintended side effect of copying over the text color from wherever you are pasting from, including the particular shade of black that the other program uses, which is hard to spot but ends up looking kind of unsettling on LessWrong. Since the vast majority of posts are written in normal "black-or-grey on white" text colors, the cost of that seemed larger than the benefit of letting people use colored text.

Eventually we could probably do something clever, like filtering out grey shades of text when you copy-paste it into the editor, but I haven't gotten around to that, though PRs are always welcome.
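
For illustration, a minimal sketch of what such a paste filter might look like (hypothetical code, not the actual LessWrong editor; the function names and thresholds are made up):

```typescript
// Hypothetical paste filter: strip colors close to the default black/grey
// text color so pasted text inherits the site style, while keeping
// deliberately colored text intact.
function isNearDefaultTextColor(cssColor: string): boolean {
  // Browsers usually serialize inline-style colors as "rgb(r, g, b)".
  const m = cssColor.match(/^rgb\((\d+),\s*(\d+),\s*(\d+)\)$/);
  if (!m) return false; // leave anything else alone in this sketch
  const [r, g, b] = m.slice(1).map(Number);
  const isGreyish = Math.max(r, g, b) - Math.min(r, g, b) < 16; // low saturation
  const isDark = (r + g + b) / 3 < 96; // near-black shades
  return isGreyish && isDark;
}

function filterPastedHtml(html: string): string {
  const doc = new DOMParser().parseFromString(html, "text/html");
  doc.querySelectorAll<HTMLElement>("[style]").forEach((el) => {
    if (el.style.color && isNearDefaultTextColor(el.style.color)) {
      el.style.removeProperty("color"); // fall back to the site default
    }
  });
  return doc.body.innerHTML;
}
```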

comment by gjm · 2020-09-24T16:44:37.706Z · LW(p) · GW(p)

Apparently OpenAI has sold Microsoft some sort of exclusive licence to GPT-3. I assume this is bad for the prospects of anyone else doing serious research on it.

Replies from: Rana Dexsin
comment by Rana Dexsin · 2020-09-25T04:47:32.313Z · LW(p) · GW(p)

Is there visible reporting on this?

Replies from: gjm, mingyuan
comment by mingyuan · 2020-09-25T05:05:55.634Z · LW(p) · GW(p)

Yup, https://www.theverge.com/2020/9/22/21451283/microsoft-openai-gpt-3-exclusive-license-ai-language-research

comment by Rafael Harth (sil-ver) · 2020-09-22T11:16:55.051Z · LW(p) · GW(p)

I recently realized that I've been confused about an extremely basic concept: the difference between an Oracle and an autonomous agent.

This feels obvious in some sense. But actually, you can 'get' to any AI system via output behavior plus robotics. If a system can answer arbitrary questions, it can also answer the question 'what's the next move in this MDP?', or, less abstractly, 'what's the next steering action for the imaginary wheel?' (for a self-driving car). So the difference can't be 'an autonomous agent has a robotic component'.

The essential difference seems to be that an Oracle only uses its output channels when it is probed, whereas an autonomous agent uses them on its own initiative. But I don't ever hear people make this distinction. I think part of the reason why I hadn't internalized this as an axis before is that there is the agent vs. nonagent distinction, but actually, the two are orthogonal: we can clearly have any of the four combinations of {agent, nonagent} × {autonomous, non-autonomous}.[1]
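
To pin down the distinction, here is a toy sketch (all names are hypothetical; this is an illustration, not a proposal):

```typescript
// Toy sketch of the probed-vs-autonomous axis. The same question-answering
// core can sit at either end; what changes is who initiates the use of the
// output channel.
type Oracle = (question: string) => string;

// Non-autonomous use: output happens only when an external user probes it.
function probe(oracle: Oracle, question: string): string {
  return oracle(question);
}

// Autonomous use: a wrapper queries the same core on its own schedule and
// acts on the answers without waiting to be asked.
function runAutonomously(
  oracle: Oracle,
  observe: () => string,
  act: (action: string) => void,
  steps: number
): void {
  for (let i = 0; i < steps; i++) {
    const action = oracle(`Next action given observation: ${observe()}?`);
    act(action); // the output channel is exercised by the wrapper, not a user
  }
}
```

On this picture, autonomy is a property of how the core is deployed, which is why it is orthogonal to how agent-like the core itself is.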

It's a pretty bad sign that I didn't know, without looking at the definition, whether 'tool AI' refers to the entire bottom half or just the bottom-left quadrant. Upon looking, it seems to be just the latter.

What led me to this was thinking about corrigibility. I think it is applicable to the entire top half, i.e., all agent-like systems, but it feels like a stronger requirement for the top right, autonomous agents. If you have an oracle, then corrigibility seems to reduce to 'don't try to influence the user's behavior through your answers'.

When I look at this, I am convinced by the arguments that we probably can't just build Tool AI, but I super want the most powerful systems of the future to be non-autonomous. That just seems way safer without sacrificing a lot of performance. I think because of this, I've been thinking of IDA as trying to build non-autonomous systems (basically oracles), even though the sequence pretty clearly seems to have autonomous systems in mind.[2] On the other hand, Debate seems to be primarily aimed at non-autonomous systems, which (if true) is an interesting difference.

So is all of this just news to me, and actually everyone is aware of this distinction?


  1. And if you added a third axis for 'robotic/non-robotic', we would end up with examples in all eight areas. ↩︎

  2. I award myself an F- for doing this. ↩︎

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2020-09-22T12:18:22.014Z · LW(p) · GW(p)

Two existing suggestions for how to avoid existential risk naturally fall out of this framing.

  1. Go all the way to the left (even further than the picture implies) by giving the AI no output channels whatsoever. This is Microscope AI [LW · GW].

  2. Go all the way to the bottom and avoid all agent-like systems, but allow autonomous systems like self-driving cars. This is (as I understand it) Comprehensive AI Services (CAIS).

comment by TurnTrout · 2020-09-18T01:07:21.808Z · LW(p) · GW(p)

I'm going on a 30-hour roadtrip this weekend, and I'm looking for math/science/hard sci-fi/world-modelling Audible recommendations. Anyone have anything?

comment by Liron · 2020-10-01T12:28:19.543Z · LW(p) · GW(p)

Golden raises $14.5M. I wrote about Golden here [LW · GW] as an example of the most common startup failure mode: lacking a single well-formed use case. I’m confused about why someone as savvy as Marc Andreessen is tripling down and joining their board. I think he’s making a mistake.

comment by Anirandis · 2020-09-20T00:36:00.414Z · LW(p) · GW(p)

If anyone happens to be willing to privately discuss some potentially infohazardous stuff that's been on my mind (and not in a good way) involving acausal trade, I'd appreciate it - PM me. It'd be nice to figure out whether I'm going batshit.

comment by Sherrinford · 2020-09-14T18:21:46.582Z · LW(p) · GW(p)

Do those of you who live in America fear the scenarios discussed here? ("What If Trump Loses And Won’t Leave?")

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-09-16T13:20:16.210Z · LW(p) · GW(p)

I do, at least. I don't think "What if Trump loses and won't leave" is the best summary of my concern; the best summary is "What if the election is heavily disputed."

Replies from: Sherrinford
comment by Sherrinford · 2020-09-17T13:06:57.418Z · LW(p) · GW(p)

"What if Trump Loses..." is just the title of the article, but the article also discusses scenarios where "Biden might be the one who disputes the result".

comment by Sherrinford · 2020-09-18T07:19:42.182Z · LW(p) · GW(p)

I do not know whether this has already been mentioned on LessWrong, but 4-6 weeks ago you could read on German news websites that commercially available mouthwash had been tested in the lab and found to kill coronavirus, with the (positive) results published in the Journal of Infectious Diseases.

You can click through this article to see the ranked names of the mouthwash brands and their "reduction factor", though the sample sizes seemed quite small to me. You can also find a list in this overview article. In an article I saw today on this topic, the author warned against using the stuff permanently, because it also kills the desirable part of your oral flora. But it was suggested that it may help once you are infected, and may possibly help prophylactically (of course only in the sense of helping when you are possibly infected).

comment by tinyanon (aaron-teetor) · 2020-09-14T18:08:32.721Z · LW(p) · GW(p)

I'm so bored of my job; I need a programming job that has actual math/algorithms :/  I'm curious to hear about people here who have programming jobs that are more interesting. In college I competed at a high level in ICPC, but I got it into my head that there are so few programming jobs with actual advanced algorithms that if your name on Topcoder isn't red, you might as well forget about it. I ended up just taking a boring job at a top tech company that pays well but does very little for society and is not intellectually stimulating at all.

Replies from: lincolnquirk, lsusr
comment by lincolnquirk · 2020-09-17T16:34:19.644Z · LW(p) · GW(p)

Have you read https://www.benkuhn.net/hard/ ? Curious what you think. (Disclosure: I started the company that Ben works for, which does not have hard eng problems but does have a high potential for social impact)

Replies from: aaron-teetor
comment by tinyanon (aaron-teetor) · 2020-09-17T21:36:34.469Z · LW(p) · GW(p)

I feel happy pulling up Kattis and doing some algorithm questions, so there is definitely joy to be had chasing technical problems. Ben doesn't seem to be disputing that, but is offering two other things you can chase.

Rather than competing for an A+ on a hard problem, I could try to solve an easy problem as quickly as possible

I don't know if this differs from person to person, but for me, gamifying a problem can make me care more about something; it can't make me care about something I don't care about at all.

So don’t look for hard problems—important ones are ultimately more fun!

This has been in my head for months, because everyone* gives a variation of this advice, and it feels like it's missing the hard part. It started when I saw a clip on Reddit of Dr. K from Healthy Gamer saying something along the lines of "If you don't know what you want to do, get a piece of paper and write down everything wrong with the world. In 5 minutes the paper will be almost full," and... What? No? I mean, things are problems in that they make people's lives worse. But I notice very, very little actually changes how I feel. So why would I expect anything I do to change how someone else feels, if nothing they do can change how I feel? There are only two axes that actually change how I feel about life: lonely vs. belonging and bored vs. engaged. I don't really have a reason to expect other people are very different, except that people in worse life situations also have an unsafe vs. secure axis. So the problems are "loneliness" and "listlessness". Everyone acts like there are important problems everywhere. You see people saying ideas for side projects are a dime a dozen, but here I am: I actually have the funds to quit and make something I thought had value, and there's just nothing I can think of that seems to have any value.

 

*Everyone except one friend on Paxil who assures me the solution to my problem is Paxil, and one friend who is convinced LSD is the solution to all problems. I remain unconvinced.

comment by lsusr · 2020-09-19T07:57:59.720Z · LW(p) · GW(p)

  • Quantitative finance has use for people who know advanced math and algorithms. (Though they are not known for doing great good for society.)
  • You can also get around this problem by starting your own ML startup. (I did this.) The startup route takes work and risk tolerance but provides high positive externalities for society.