Posts

The new Editor 2020-09-23T02:25:53.914Z · score: 58 (18 votes)
AI Advantages [Gems from the Wiki] 2020-09-22T22:44:36.671Z · score: 23 (9 votes)
Sunday September 27, 12:00PM (PT) — talks by Alex Flint, Alex Zhu and more 2020-09-22T21:59:56.546Z · score: 11 (2 votes)
Gems from the Wiki: Do The Math, Then Burn The Math and Go With Your Gut 2020-09-17T22:41:24.097Z · score: 42 (16 votes)
Sunday September 20, 12:00PM (PT) — talks by Eric Rogstad, Daniel Kokotajlo and more 2020-09-17T00:27:47.735Z · score: 27 (5 votes)
Gems from the Wiki: Paranoid Debating 2020-09-15T03:51:10.453Z · score: 29 (8 votes)
Gems from the Wiki: Acausal Trade 2020-09-13T00:23:32.421Z · score: 45 (15 votes)
Notes on good judgement and how to develop it (80,000 Hours) 2020-09-12T17:51:27.174Z · score: 15 (5 votes)
How Much Computational Power Does It Take to Match the Human Brain? 2020-09-12T06:38:29.693Z · score: 41 (13 votes)
What's Wrong with Social Science and How to Fix It: Reflections After Reading 2578 Papers 2020-09-12T01:46:07.349Z · score: 100 (43 votes)
‘Ugh fields’, or why you can’t even bear to think about that task (Rob Wiblin) 2020-09-11T20:31:00.990Z · score: 17 (8 votes)
Sunday September 13, 12:00PM (PT) — talks by John Wentworth, Liron and more 2020-09-10T19:49:06.325Z · score: 20 (4 votes)
How To Fermi Model 2020-09-09T05:13:19.243Z · score: 76 (29 votes)
Conflict, the Rules of Engagement, and Professionalism 2020-09-05T05:04:16.081Z · score: 36 (13 votes)
Open & Welcome Thread - September 2020 2020-09-04T18:14:17.056Z · score: 12 (3 votes)
Sunday September 6, 12pm (PT) — Casual hanging out with the LessWrong community 2020-09-03T02:08:25.687Z · score: 35 (11 votes)
Open & Welcome Thread - August 2020 2020-08-06T06:16:50.337Z · score: 12 (3 votes)
Use resilience, instead of imprecision, to communicate uncertainty 2020-07-20T05:08:52.759Z · score: 3 (2 votes)
The New Frontpage Design & Opening Tag Creation! 2020-07-09T04:37:01.137Z · score: 52 (15 votes)
AI Research Considerations for Human Existential Safety (ARCHES) 2020-07-09T02:49:27.267Z · score: 57 (13 votes)
Open & Welcome Thread - July 2020 2020-07-02T22:41:35.440Z · score: 14 (6 votes)
Open & Welcome Thread - June 2020 2020-06-02T18:19:36.166Z · score: 20 (9 votes)
[U.S. Specific] Free money (~$5k-$30k) for Independent Contractors and grant recipients from U.S. government 2020-04-10T05:00:35.435Z · score: 31 (8 votes)
[Announcement] LessWrong will be down for ~1 hour on the evening of April 10th around 10PM PDT (5:00AM GMT) 2020-04-09T05:09:24.241Z · score: 11 (2 votes)
April Fools: Announcing LessWrong 3.0 – Now in VR! 2020-04-01T08:00:15.199Z · score: 93 (33 votes)
Rob Bensinger's COVID-19 overview 2020-03-28T21:47:31.684Z · score: 40 (14 votes)
Coronavirus Research Ideas for EAs 2020-03-27T22:10:35.767Z · score: 15 (5 votes)
March 25: Daily Coronavirus Updates 2020-03-27T04:32:18.530Z · score: 11 (2 votes)
March 24th: Daily Coronavirus Link Updates 2020-03-26T02:22:35.214Z · score: 9 (1 votes)
March 22nd & 23rd: Coronavirus Link Updates 2020-03-25T01:08:14.499Z · score: 9 (1 votes)
March 21st: Daily Coronavirus Links 2020-03-23T00:43:29.913Z · score: 10 (2 votes)
March 20th: Daily Coronavirus Links 2020-03-21T19:17:33.320Z · score: 10 (2 votes)
March 19th: Daily Coronavirus Links 2020-03-21T00:00:54.173Z · score: 19 (4 votes)
Sarah Constantin: Oxygen Supplementation 101 2020-03-20T01:00:16.453Z · score: 16 (6 votes)
March 18th: Daily Coronavirus Links 2020-03-19T22:20:27.217Z · score: 13 (4 votes)
March 17th: Daily Coronavirus Links 2020-03-18T20:55:45.372Z · score: 12 (3 votes)
March 16th: Daily Coronavirus Links 2020-03-18T00:00:33.273Z · score: 15 (2 votes)
Kevin Simler: Outbreak 2020-03-16T22:50:37.994Z · score: 16 (6 votes)
March 14/15th: Daily Coronavirus link updates 2020-03-16T22:24:11.637Z · score: 41 (8 votes)
Coronavirus Justified Practical Advice Summary 2020-03-15T22:25:17.492Z · score: 88 (25 votes)
LessWrong Coronavirus Link Database 2020-03-13T23:39:32.544Z · score: 75 (17 votes)
Open & Welcome Thread - March 2020 2020-03-08T22:06:05.649Z · score: 11 (3 votes)
Survival and Flourishing grant applications open until March 7th ($0.8MM-$1.5MM planned for dispersal) 2020-01-28T23:36:40.191Z · score: 20 (3 votes)
Studying Early Stage Science: Research Program Introduction 2020-01-17T22:12:03.829Z · score: 34 (10 votes)
Open & Welcome Thread - January 2020 2020-01-06T19:42:36.499Z · score: 11 (3 votes)
Open & Welcome Thread - December 2019 2019-12-03T00:00:29.481Z · score: 13 (4 votes)
Matthew Walker's "Why We Sleep" Is Riddled with Scientific and Factual Errors 2019-11-16T20:27:57.039Z · score: 68 (29 votes)
Open & Welcome Thread - November 2019 2019-11-02T20:06:54.030Z · score: 12 (4 votes)
Long Term Future Fund application is closing this Friday (October 11th) 2019-10-10T00:44:28.241Z · score: 29 (5 votes)
AI Alignment Open Thread October 2019 2019-10-04T01:28:15.597Z · score: 28 (8 votes)

Comments

Comment by habryka4 on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-28T20:46:08.673Z · score: 4 (2 votes) · LW · GW

I mean, really? It's not like we asked 270 random people. We basically asked 270 people, each of whom had already invested many hundreds of hours into participating on LessWrong, and many of whom I knew personally and considered close friends. Like, I agree, if you message 270 random people you don't get to expect anything from them, but the whole point of networks of trust is that you get to expect things from each other and ask things from each other.

If any of the people in that list of 270 people had asked me to spend a few minutes doing something that was important to them, I would have gladly obliged.

Comment by habryka4 on On Destroying the World · 2020-09-28T20:24:28.063Z · score: 22 (5 votes) · LW · GW

I want to point out a few things in particular. Firstly, the email was sent out to 270 users, which from my perspective made it seem that the website was almost guaranteed to go down at some point, with the only question being when (I was aware the game was played last year, but I had no memory of the outcome or the number of users).

I mean, this is a fine judgement to make, but also a straightforwardly wrong one. Last year we had ~150 people, and the site did not go down, with many people saying that we really had to add more incentives if we wanted any substantial chance of the site going down. I do think it's a pretty understandable mistake to make, but also one that is really important to avoid in real-life unilateralist situations.

Obviously, someone pressing the button wouldn't damage the honor or reputation of Less Wrong, and so it seemed to indicate that this was just a bit of fun.

Of course it damaged our reputation! How could it not have? Being able to coordinate on this is a pretty substantial achievement, and failing at it is a pretty straightforwardly sad thing to happen. I definitely lost a good amount of trust in LessWrong, and I know of at least 10 other people who independently expressed similar things. Again, it's an understandable mistake to make, but also a statement that is straightforwardly wrong in retrospect.

Now Habryka is annoyed because he was trying to run a specific experiment, and that experiment wasn't "Can people who kind of care about the game, but don't care too much, get fooled into taking down the site". I can understand that; I imagine this experiment took a lot of time to set up, and he was probably looking forward to it for a while.

To be clear, I think in real-life situations, people not taking the consequences of their actions seriously, and treating things as just a game to be played, is a serious path towards real-life risks! I don't think you destroyed the setup for this experiment at all. Indeed, someone not thinking for very long about the consequences of their actions, and taking an action with pretty serious consequences out of carelessness, is one of the primary ways in which I expected the frontpage to get nuked, and was an intentional part of the test that I wanted to perform. Usually norms deteriorate by people disassociating from them, saying that they never felt real in the first place, and brushing things off as inconsequential.

To clarify this some more, in a bit of a rambling way: different people have different values. While it is obvious to us in this community that destroying civilization via nuclear war is pretty bad, there are many people who, given a button that would wipe out humanity, would happily go and press it, especially if they didn't think much about it, because they have the cached belief that civilization is probably overall bad and that life would be better off without humanity. Or many people believe in an afterlife, and that the apocalypse would overall cause there to be less sin, or whatever.

I assign substantial probability to the world being destroyed not by someone who wants to destroy the world, but by someone who just doesn't really think that their actions will have substantial consequences. Like, Petrov could have just been a normal bureaucrat, doing his job, following the protocols that were set out for him, and it's really not hard to imagine a Petrov who just didn't really care about his job. Who realized in the abstract that nuclear war was a thing, but didn't really care about it viscerally, and when given the readings from the instruments, just didn't think about it very hard and forwarded the signals to his superiors. That's how bad things happen. The most likely world in which civilization ended because Petrov didn't intervene seems to me to be one where Petrov's attitude was overall pretty similar to your attitude here. That doesn't mean your attitude is wrong: I believe that you would actually care in the real case of the nuclear weapons, and am not at all saying that you specifically wouldn't have done the right thing if the real deal was on the line, but the reason you didn't do the right thing here (by my lights) is pretty representative of the reference class by which I expect things to go wrong in reality.

Communicating and coordinating on shared priorities and values is really hard. It's a way lots of things break. In this case, we clearly failed at that. But that's also part of the challenge of building a real and important thing. In real life, you don't get to assume that everyone working with you actually cares about avoiding nuclear war with your enemies. You don't get to assume that everyone has a shared understanding that humanity is really important to preserve, and that being cautious with humanity's future is of utmost importance. Most people don't viscerally believe those statements, so you can't just build coordination on that assumption, and if you do it anyway, I think things will fail in pretty analogous ways to how they failed on Saturday.

Comment by habryka4 on Puzzle Games · 2020-09-28T20:11:38.422Z · score: 2 (1 votes) · LW · GW

Added a spoiler block

Comment by habryka4 on Puzzle Games · 2020-09-28T20:10:57.220Z · score: 2 (1 votes) · LW · GW

I added a spoiler block

Comment by habryka4 on Puzzle Games · 2020-09-28T20:10:22.217Z · score: 2 (1 votes) · LW · GW

I added a spoiler block.

Comment by habryka4 on What hard science fiction stories also got the social sciences right? · 2020-09-28T06:25:39.444Z · score: 8 (6 votes) · LW · GW

Epistemic status: Haven't read the book, so take it with some piles of sand.

Everything I've heard about The Three-Body Problem suggests it gets the sociology wrong, and fails to model what humans actually do in crises. At least from what I've heard so far, it really reinforces the "humans fall into despair when faced with crises" narrative, when that's really the opposite of what we know happens in real humanitarian crises. People usually substantially increase the amount of work they do, generally report higher levels of engagement, and very rarely just give up.

See also this pretty extended critique of The Three-Body Problem by Jacobian: https://putanumonit.com/2018/01/07/scientist-fiction/

Comment by habryka4 on Blog posts as epistemic trust builders · 2020-09-28T06:22:29.019Z · score: 4 (2 votes) · LW · GW

Oh, yeah, totally. I had understood Zack to be making an ontological argument in the first paragraph that such an entity cannot coherently exist, or alternatively that "it is not deserving of anyone's trust", both of which seem like statements that are too strong to me, and neither of which corresponds to the thing you are saying here. The rest of the comment seems pretty good and I agree with most of it.

Comment by habryka4 on Blog posts as epistemic trust builders · 2020-09-28T05:22:50.331Z · score: 6 (3 votes) · LW · GW

Eh, it's pretty obvious that there is a thing that corresponds to "beliefs of the rationality community" or "broad consensus of the rationality community", and also pretty obvious that those broadly get a lot of things more right than many other sources of ideas one could listen to. Of course, it might still be fine advice to try really hard to think through things for yourself, but calling it a "delusion" to treat such a thing as something one could even hypothetically assign trust to just seems straightforwardly wrong.

Comment by habryka4 on What are good rationality exercises? · 2020-09-28T05:14:22.826Z · score: 3 (2 votes) · LW · GW

Nah, I don't think that's a real concern. Or at least I really don't see much danger in the things in there, and have worked a lot with it in the past.

Comment by habryka4 on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-28T04:32:34.257Z · score: 21 (7 votes) · LW · GW

To be clear, while there is obviously some fun intended in this tradition, describing it as "just a game" doesn't feel appropriate to me. I do actually really care about people being able to coordinate to not take the site down. It's an actually hard thing to do, and it tries to reinforce a bunch of the real and important values that I care about in Petrov Day. Of course, I can't force you to feel a certain way, but, like, I do sure feel a pretty high level of disappointment reading this response.

Like, the email literally said you were chosen to participate because we trusted you to not actually use the codes.

Comment by habryka4 on My Understanding of Paul Christiano's Iterated Amplification AI Safety Research Agenda · 2020-09-27T18:29:37.863Z · score: 6 (3 votes) · LW · GW

Promoted to curated! I held off on curating this post for a while, first because it's long and it took me a while to read through it, and second because we already had a lot of AI Alignment posts in the curation pipeline and I wanted to make sure we have some diversity in our curation decisions. But overall I really liked this post, and I also want to mirror Rohin's comment: I found this version more useful than a version where you got everything right, because this way I got to see the contrast between your interpretation and Paul's responses, which feels like it helped me locate the right hypothesis more effectively than either would have on its own (even if more fleshed out).

Comment by habryka4 on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-27T00:40:06.833Z · score: 3 (2 votes) · LW · GW

Oh, sorry, you are totally correct. We originally accidentally linked to the 2019 post, and I fixed it this morning.

Comment by habryka4 on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-26T22:16:59.383Z · score: 2 (1 votes) · LW · GW

Just to clarify, which two posts would you like us to link to? 

Comment by habryka4 on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-26T22:15:50.486Z · score: 9 (5 votes) · LW · GW

I did realize my comment came off a bit too snarky, now that I am rereading it. Just to be clear, no snark intended, just some light jest!

Comment by habryka4 on A long reply to Ben Garfinkel on Scrutinizing Classic AI Risk Arguments · 2020-09-26T18:39:47.774Z · score: 5 (3 votes) · LW · GW

Sorry for the unfortunate timing of this post and Petrov Day! When the frontpage goes back up tomorrow, I will bump this post to make sure it gets some proper time on the frontpage.

Comment by habryka4 on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-26T18:20:16.322Z · score: 14 (6 votes) · LW · GW

Yep, seems like the Nash equilibrium is pretty stably at everyone not pressing the button. Really needed some more incentives, I agree.

Comment by habryka4 on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-26T16:45:37.649Z · score: 41 (21 votes) · LW · GW

I can confirm that this message was not sent by any admin.

Comment by habryka4 on MikkW's Shortform · 2020-09-26T08:25:39.374Z · score: 6 (4 votes) · LW · GW

(For what it's worth, the post made it not at all clear to me that we were talking about a nontrivial amount of funding. I read it as just you thinking a bit through your personal finance allocation. The topic of divesting and impact investing has been analyzed a bunch on LessWrong and the EA Forum, and my current position is mostly that these kinds of differences in investment don't really make much of a difference in total funding allocation, so it doesn't seem worth optimizing much, besides just optimizing for returns and then taking those returns and optimizing those fully for philanthropic impact.)

Comment by habryka4 on The new Editor · 2020-09-24T19:58:53.919Z · score: 2 (1 votes) · LW · GW

Yeah, I agree we should add that somewhere, though I don't want to clutter up the new-comment form. I think we can make something work (currently the bottom left corner is basically completely empty, so that seems like a cheap option).

Comment by habryka4 on The new Editor · 2020-09-24T19:57:51.229Z · score: 2 (1 votes) · LW · GW

We do directly expose a markdown export via the GraphQL API, so that sure would be a sad roundtrip :P
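For anyone who wants to script that: a minimal sketch of pulling a post's markdown, in Python. The endpoint is https://www.lesswrong.com/graphql, but the query shape and field names below are my rough guesses at the schema, not a documented contract.

```python
import requests

# Hypothetical sketch of fetching a post's markdown via the GraphQL API.
# The query shape and field names are assumptions; treat as illustrative.
query = """
query GetPostMarkdown($id: String) {
  post(input: {selector: {_id: $id}}) {
    result {
      title
      contents {
        markdown
      }
    }
  }
}
"""

resp = requests.post(
    "https://www.lesswrong.com/graphql",
    json={"query": query, "variables": {"id": "SOME_POST_ID"}},
)
resp.raise_for_status()
post = resp.json()["data"]["post"]["result"]
print(post["title"])
print(post["contents"]["markdown"])
```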

Comment by habryka4 on The new Editor · 2020-09-23T23:36:15.804Z · score: 2 (1 votes) · LW · GW

Yeah, I also noticed this. Should be easy to fix. The previous placeholder text implementation was a hack to deal with the old editor, but the new editor actually allows us to do this much more elegantly, so I will see when I get around to fixing this.

Comment by habryka4 on The new Editor · 2020-09-23T23:34:15.982Z · score: 2 (1 votes) · LW · GW

Yeah, sorry, the image uploader is currently connected to the WYSIWYG editor, so for markdown you still have to host images somewhere else. The markdown editor is currently really just a very minimal HTML textfield with nothing fancy happening, so it's not super obvious how to make image upload work without a bunch of additional work.

Comment by habryka4 on The new Editor · 2020-09-23T22:51:52.816Z · score: 4 (2 votes) · LW · GW

Yep, we have markdown translation for all documents, and it should be basically fully interoperable (LaTeX is a bit janky, but I think everything else should work). 

You can get it either via the API, or you can activate the markdown editor in your user profile and then edit any post that you wrote in a non-markdown format. It will offer to display the content in Markdown instead.

Comment by habryka4 on This Territory Does Not Exist · 2020-09-23T18:04:46.816Z · score: 2 (1 votes) · LW · GW

Yeah, I don't know. Don't take this as a moderator warning (yet), but when discussions reach the "one-sentence accusation of fallacy" stage it's usually best to disengage. I haven't had time to read this whole thread to figure out exactly what happened, but I don't want either of you to waste a ton of time in unproductive discussion.

Comment by habryka4 on The Wiki is Dead, Long Live the Wiki! [help wanted] · 2020-09-23T05:04:11.658Z · score: 2 (1 votes) · LW · GW

Ruby went through all the pages and decided whether to import them or not. I think it's unlikely we are going to import most of the remaining pages (some of which were pretty random and low-quality), but we will make sure they stay accessible, and if there is any individual page that isn't covered by the import that you feel is missing, there is a good chance we can just add it. Which specific ones we should import is Ruby's call.

Comment by habryka4 on Most Prisoner's Dilemmas are Stag Hunts; Most Stag Hunts are Battle of the Sexes · 2020-09-22T23:09:53.152Z · score: 2 (1 votes) · LW · GW

Promoted to curated: This post made a point that I have been grasping at for a while, and made it quite well. For better or for worse, I use Prisoner's Dilemma analogies at least 5 times a week, and so understanding the dynamics around those dilemmas is quite important to me. This post felt like it connected a number of ideas in this space in a way that I expect to refer back to in the future at least a few times.

Comment by habryka4 on benwr's unpolished thoughts · 2020-09-22T19:58:59.161Z · score: 5 (3 votes) · LW · GW

Oops, you are right that for some reason we had the verbatim search feature deactivated on some of the indexes. Thank you for helping me notice this! This should now be fixed! (Because of caching it might take a while for it to work for the exact "well, actually" query, but you can try using quotes for some other queries, and it should now work as expected).
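(For the mechanically curious: our search is Algolia-backed, and quoted-phrase matching is gated behind a per-index setting. A rough sketch of the kind of fix involved, with placeholder credentials and index name rather than our actual code:)

```python
from algoliasearch.search_client import SearchClient

# Rough sketch (placeholder credentials and index name): Algolia's
# `advancedSyntax` setting is what makes a double-quoted query match
# the exact phrase instead of the individual words.
client = SearchClient.create("APP_ID", "ADMIN_API_KEY")
index = client.init_index("posts_index")

# Turn quoted-phrase ("verbatim") matching on for this index.
index.set_settings({"advancedSyntax": True})

# Once the setting has propagated, quoted queries match verbatim:
results = index.search('"well, actually"')
print(results["nbHits"], "exact-phrase hits")
```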

Comment by habryka4 on benwr's unpolished thoughts · 2020-09-22T17:07:05.653Z · score: 2 (1 votes) · LW · GW

The search box does not appear to support multi-word strings.

The search box definitely supports multi-word strings. See this screenshot. 

Comment by habryka4 on benwr's unpolished thoughts · 2020-09-22T06:19:19.662Z · score: 9 (5 votes) · LW · GW

Huh, weird. I do notice that I don't like the word "really", because it is super ambiguous between being general emphasis ("this is really difficult") and being a synonym for "actually" ("do you really mean this?"). The first usage feels much more common to me: in more than 80% of the sentences containing "really" that I came up with while writing this comment, I used it as general emphasis and not as a synonym for "actually".

Comment by habryka4 on The Wiki is Dead, Long Live the Wiki! [help wanted] · 2020-09-21T17:13:46.199Z · score: 4 (2 votes) · LW · GW

(Just to be clear, I understood Ruby's comment to be a joke)

Comment by habryka4 on Sunday September 20, 12:00PM (PT) — talks by Eric Rogstad, Daniel Kokotajlo and more · 2020-09-20T17:30:14.448Z · score: 3 (2 votes) · LW · GW

Yeah, we have an open PR that adds online events to the Community section and the navigation menu on the left. Currently all events need a physical location, which obviously made sense in the pre-pandemic world but is pretty dumb during a global pandemic where most events are online, so in the meantime we've been encouraging a number of online meetup organizers to post them as normal posts instead.

Comment by habryka4 on Coordination Surveys: why we should survey to organize responsibilities, not just predictions · 2020-09-20T06:05:24.273Z · score: 4 (2 votes) · LW · GW

Yeah, I've been thinking the same. It feels like there are a number of action-coordination dimensions where we could have done substantially better (a substantial number of which will still be relevant for a while, so there is still time to improve).

Comment by habryka4 on Thomas Kwa's Shortform · 2020-09-19T17:17:59.598Z · score: 3 (2 votes) · LW · GW

Alas, the best I have usually been able to do is "<Name of the paper> replication" or "<Name of the author> replication". 

Comment by habryka4 on Open & Welcome Thread - September 2020 · 2020-09-19T17:13:18.721Z · score: 2 (1 votes) · LW · GW

That means your judgement is based on past behaviour that was already punished.

I don't understand this sentence at all. How has he already been punished for his past behavior? Indeed, he has never been banned before, so there was never any previous punishment. 

Comment by habryka4 on Open & Welcome Thread - September 2020 · 2020-09-19T17:06:34.762Z · score: 3 (2 votes) · LW · GW

significant parts of habryka's post were factually incorrect.

I am not currently aware of any factual inaccuracies, but would be happy to correct any you point out. 

The only thing you pointed out was something about the word "threat" being wrong, but that only appears to be true under some very narrow definition of threat. This might be weird rationalist jargon, but I've reliably used the word "threat" to simply mean signaling an intention to inflict some kind of punishment on the other person if some condition is met. Curi and other people from FI have done this repeatedly, and the "list of people who have evaded/lied/etc." is exactly one such threat, whether explicitly labeled as such or not.

The average LessWrong user would pretty substantially regret having engaged with curi if they later ended up on that list, so I do think it's a pretty concrete punishment, and while there might be some chance of being unaware of the negative consequences in advance, that doesn't really change the reality much: given the way I've seen curi act on the site, engaging with him is a trap that people are likely to regret.

Comment by habryka4 on Open & Welcome Thread - September 2020 · 2020-09-18T16:59:53.657Z · score: 3 (4 votes) · LW · GW

Yeah, almost everyone we ban who has any real content on the site is warned. It didn't feel necessary for curi, because he had already received so much feedback about his activity on the site over the years (from many users as well as mods), and I saw very little probability of things changing because of a warning.

Comment by habryka4 on Open & Welcome Thread - September 2020 · 2020-09-18T16:55:29.993Z · score: 1 (2 votes) · LW · GW

This is the definition that I had in mind when I wrote the notice above, sorry for any confusion it might have caused.

Comment by habryka4 on Open & Welcome Thread - September 2020 · 2020-09-17T18:07:57.020Z · score: 2 (1 votes) · LW · GW

Additionally, I think that while a ban is sometimes necessary (e.g. harassment), a 2-year ban seems like quite a jump. I could think of a number of different sanctions, e.g. blocking someone from commenting in general; giving users the option to block someone from commenting; blocking someone from writing anything; limiting someone's authority to her own shortform; all of these things for some time.

I am not sure. I really don't like the world where someone is banned from commenting on other people's posts but can still make top-level posts, or is banned from making top-level posts but can still comment. Both of these end up in really weird equilibria where you sometimes can't reply to conversations you started or respond to objections other people make to your arguments, and that just seems really bad.

I also don't really know what those things would have accomplished. I don't think they would have reduced the uncertainty about whether curi is a good fit for LessWrong very much, and I feel like they could have just dragged things out into a long period of conflict that would have been more stressful for everyone.

The "blocking someone from writing anything" does feel like an option. Like, at least you can still vote and read. I do think that seems potentially like the better option, but I don't think we currently actually have the technical infrastructure to make that happen. I might consider building that for future occasions like this.

Comment by habryka4 on Open & Welcome Thread - September 2020 · 2020-09-17T18:02:31.621Z · score: 3 (2 votes) · LW · GW

"I don't want others to update on this as being much evidence about whether it makes sense to have curi in their communities" seems a bit weird to me. "a propensity for long unproductive discussions, a history of threats against people who engage with him" and "I assign too high of a probability that old patterns will repeat themselves" seem like quite a judgement and why would someone else not update on this?

The key thing I wanted to communicate is that it seems quite plausible to me that these patterns are the result of curi interfacing specifically with the LessWrong culture in unhealthy ways. I can imagine him interfacing with other cultures with much less bad results. 

I also said "I don't want others to think this is much evidence", not "this is no evidence". Of course it is some evidence, but I think overall I would expect people to update a bit too much on this, and as I said, I wouldn't be very surprised to see curi participate well in other online communities.

Comment by habryka4 on The Wiki is Dead, Long Live the Wiki! [help wanted] · 2020-09-17T17:59:14.538Z · score: 2 (1 votes) · LW · GW

Yep, after we are done with the import, we are going to redirect all the pages we imported. And then probably make all the remaining pages on the old wiki read-only, so we don't have to maintain a whole separate wiki system forever. 

Comment by habryka4 on Artificial Intelligence: A Modern Approach (4th edition) on the Alignment Problem · 2020-09-17T17:56:38.206Z · score: 8 (5 votes) · LW · GW

This sentence really makes no sense to me. The proof that it can have an incentive to allow itself to be switched off even if it isn't uncertain is trivial. 

Just create a utility function that assigns intrinsic reward to shutting itself off, or create a payoff matrix that punishes it really hard if it doesn't turn itself off. In this context, using this kind of technical language feels actively deceitful to me, since it's really obvious that the argument he is making in that chapter cannot actually be a proof.
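To spell out just how trivial the counterexample is, here's a toy sketch (the payoff numbers are mine and purely illustrative): an agent that is fully certain of its utility function, where that function assigns intrinsic reward to being off, will choose to switch itself off, with no uncertainty over the objective anywhere in the setup.

```python
# Toy counterexample with made-up payoffs: an agent that is completely
# certain of its utility function, which rewards being switched off,
# trivially "allows" itself to be switched off. No uncertainty over the
# objective appears anywhere in this setup.

ACTIONS = ["stay_on", "switch_off"]

def utility(action: str) -> float:
    # A known-with-certainty utility function that assigns intrinsic
    # reward to shutting down (and penalizes staying on).
    return 1.0 if action == "switch_off" else -1.0

best_action = max(ACTIONS, key=utility)
assert best_action == "switch_off"
print(best_action)
```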

In general, I... really don't understand Stuart Russell's thoughts on AI Alignment. The whole "uncertainty over utility functions" thing just doesn't really help at all with solving any part of the AI Alignment problem that I care about, and I do find myself really frustrated with the degree to which both this preface and Human Compatible repeatedly indicate that it is somehow a solution to the AI Alignment problem (not just a helpful contribution). Both repeatedly say things that read to me like "if you make the AI uncertain about the objective in the right way, then the AI Alignment problem is solved", which just seems obviously wrong to me: it doesn't even deal with inner alignment problems, and it also doesn't really solve any major outer alignment problems (though that requires a bit more writing to explain).

Comment by habryka4 on Sunday September 20, 12:00PM (PT) — talks by Eric Rogstad, Daniel Kokotajlo and more · 2020-09-17T17:49:05.269Z · score: 4 (2 votes) · LW · GW

The sailing ships one sounds fun. "GWP as a terrible metric" also sounds interesting. The others also seem good, but those two seem marginally better.

Comment by habryka4 on Open & Welcome Thread - September 2020 · 2020-09-17T00:56:15.752Z · score: 32 (16 votes) · LW · GW

Today we have banned two users, curi and Periergo, from LessWrong for two years each. The reasoning for the two bans is a bit entangled but overall almost completely separate, so let me go through each individually:

Periergo is an account that is pretty easily traceable to a person curi has been in conflict with for a long time, and who seems to have signed up with the primary purpose of attacking curi. I don't think there is anything fundamentally wrong with signing up to LessWrong to warn other users of the potentially bad behavior of an existing user on some other part of the internet, but I do think it should be done transparently.

It also appears that he has done a bunch of things that go beyond merely warning others (like mailbombing curi, i.e. signing him up for tons of email spam he never asked for, and lots of sockpuppeting on forums that curi frequents) and that seem better classified as harassment, so overall it seemed to me that this isn't the right place for Periergo.

Curi has been a user on LessWrong for a long time, and has made many posts and comments. He also has the dubious honor of being by far the most downvoted account in all of LessWrong history, at -675 karma.

The biggest problem with his participation is that he has a history of dragging people into discussions that drag on for an incredibly long time without seeming particularly productive, while also having a history of pretty aggressively attacking people who stop responding to him. On his blog, he and others maintain a long list of people who engaged with him and others in the Critical Rationalist community but then stopped, in a way that is very hard to read as anything but a public attack. Its first sentence is "This is a list of ppl who had discussion contact with FI and then quit/evaded/lied/etc.", and in particular the framing of "quit/evaded/lied" sets the tone for the rest of the post as a kind of "wall of shame".

Those three things in combination (a propensity for long unproductive discussions, a history of threats against people who engage with him, and being the most downvoted account in LessWrong history) make me overall think it's better for curi to find other potential discussion venues.

I do really want to make clear that this is not a personal judgement of curi. While I do find the "List of Fallible Ideas Evaders" post pretty tasteless, and don't particularly enjoy discussing things with him, he seems well-intentioned, and it's quite plausible that he could be an amazing contributor to other online forums and communities. Many of the things he is building over on his blog seem pretty cool to me, and I don't want others to update on this as being much evidence about whether it makes sense to have curi in their communities.

I do also think his most recent series of posts and comments is overall much less bad than the posts and comments he wrote a few years ago (where most of his negative karma comes from), but they still don't strike me as great contributions to the LessWrong canon, are all low-karma, and I assign too high a probability that old patterns will repeat themselves (and also that his presence will generally make people averse to being around, because of those past patterns). He has also explicitly written a post in which he updates his LW commenting policy towards something less demanding, and I do think that was the right move, but I don't think it's enough to tip the scales on this issue.

More broadly, LessWrong has seen pretty significant growth in new users in the past few months, mostly driven by interest in Coronavirus discussion and the discussion we hosted on GPT-3. I continue to think that "Well-Kept Gardens Die By Pacifism", and that it is essential for us to be very careful in handling that growth, and to generally err on the side of curating our userbase pretty heavily and maintaining high standards. This means making difficult moderation decisions long before it is proven "beyond a reasonable doubt" that someone is not a net-positive contributor to the site.

In this case, I think it is definitely not proven beyond a reasonable doubt that curi is overall net-negative for the site, and banning him might well be a mistake, but I think the probabilities weigh heavily enough in favor of the net-negative, and the worst-case outcomes are bad enough, that on net I think this is the right choice.

Comment by habryka4 on Conflict, the Rules of Engagement, and Professionalism · 2020-09-16T22:27:15.192Z · score: 4 (3 votes) · LW · GW

Sorry for the delay! Here it is: https://www.facebook.com/bshlgrs/posts/10218388194790943

Comment by habryka4 on How To Fermi Model · 2020-09-16T04:55:47.492Z · score: 2 (1 votes) · LW · GW

A recently released paper that seems kind of relevant: https://www.researchgate.net/publication/337275911_Taking_a_disagreeing_perspective_improves_the_accuracy_of_people%27s_quantitative_estimates

Comment by habryka4 on capybaralet's Shortform · 2020-09-16T00:38:56.225Z · score: 2 (1 votes) · LW · GW

:D Glad to hear that! 

Comment by habryka4 on Comparing Utilities · 2020-09-15T21:24:34.430Z · score: 11 (2 votes) · LW · GW

Yep, fixed. Thank you!

Judging from the URLs of those links, the images were hosted on a domain that you could access but others could not: they were stored as Gmail image attachments, to which you as the recipient of course have access, but random LessWrong users do not.

Comment by habryka4 on Comparing Utilities · 2020-09-15T03:18:59.263Z · score: 9 (5 votes) · LW · GW

Oh no! The two images starting from this point are broken for me: 

Comment by habryka4 on Book Review: Working With Contracts · 2020-09-15T03:04:38.851Z · score: 8 (4 votes) · LW · GW

This is great! It's also been on my to-do list for a while to look more into how exactly contracts work and what the relevant abstractions are, and this feels like it gives me a decent framework to start from. 

Comment by habryka4 on ‘Ugh fields’, or why you can’t even bear to think about that task (Rob Wiblin) · 2020-09-14T17:50:57.837Z · score: 4 (2 votes) · LW · GW

Hmm, I guess maybe I've been luckier. It has happened reasonably frequently to me that someone gets an ugh-field around a task that some other person doesn't find stressful (examples: organizing spreadsheets, calling businesses, having meetings, writing long explanatory blogposts).

But I do agree that reassignment is definitely much less frequent than just talking through whatever is aversive, and usually one can find some other solution to the problem (in my experience, pair-programming or pair-writing is often pretty successful here).