Comment by habryka4 on Discourse Norms: Moderators Must Not Bully · 2019-06-16T20:09:54.679Z · score: 5 (4 votes) · LW · GW

There exists no literal Nazi party. Do you mean anyone who has ever said anything good about the original German Nazi party? What does "supporting" mean?

Do you mean people who self-identify as members of the Nazi party?

Comment by habryka4 on Recommendation Features on LessWrong · 2019-06-16T20:05:31.239Z · score: 2 (1 votes) · LW · GW

Oh, interesting. I will look into this, probably tomorrow. Sorry for the confusion. It's probably an account-merging side effect.

Comment by habryka4 on Discourse Norms: Moderators Must Not Bully · 2019-06-16T18:30:04.184Z · score: 5 (3 votes) · LW · GW

I think I do not know what an "actual Nazi" is. It is obviously an extremely fuzzy boundary that could range from including over 100 million people across humanity's history to barely 5,000, and I do not know which you mean.

Comment by habryka4 on Recommendation Features on LessWrong · 2019-06-16T18:23:01.806Z · score: 2 (1 votes) · LW · GW

Yeah, I've heard the same from others, so I think it's likely we will add a mark as read button.

I'm interested in hearing more about what you would like to see. There are things we can do with cookies that would at least improve the accuracy of the view tracking.

Comment by habryka4 on Recommendation Features on LessWrong · 2019-06-15T16:34:49.691Z · score: 4 (2 votes) · LW · GW

I prefer the current setup, mostly because I often discover sequences by just reading posts in the recommendations that then turn out to have been part of a sequence I want to read, in which case I want to start at the beginning (and I expect this will be particularly the case with posts from R:A-Z for most users).

Will think about whether there is a way to get the best of both worlds.

Comment by habryka4 on Recommendation Features on LessWrong · 2019-06-15T16:31:46.852Z · score: 5 (3 votes) · LW · GW

Yeah, this is pretty high on the to-do list. Hopefully we can do that next week.

Comment by habryka4 on Recommendation Features on LessWrong · 2019-06-15T04:04:47.030Z · score: 2 (1 votes) · LW · GW
After all, three-quarters of the work here is precisely in bringing the old posts in question to the attention of users; relying on users in the first place, to accomplish that, seems to be an ineffective plan—whereas using the automated recommendation engine is perfect. (Still, the user-originated system you allude to would, I think, be a good supplement.)

This indicates at least some misunderstanding of what I tried to convey. I agree that the recommendation system can do the job of promoting the visibility of such posts, but I was additionally suggesting that it would be good to independently allow users to promote epistemic corrections to a higher level of visibility on the post-page itself, in a way that does not require moderator interaction.

Comment by habryka4 on Spiracular's Shortform Feed · 2019-06-15T04:03:05.420Z · score: 2 (1 votes) · LW · GW

*nods* I definitely think that when we make shortform feeds more of a first-class feature, we should encourage authors to specify their preferences for comments on their feeds.

I mean visibility pretty straightforwardly in that I often want to intentionally limit the number of people who can see my content because I feel worried about being misunderstood/judged/dragged into uncomfortable interactions.

Happy to discuss any of this further since I think shortform feeds and norms around them are important, but would prefer to do so on a separate post. You're welcome to start a thread about this over on my own shortform feed.

Comment by habryka4 on Recommendation Features on LessWrong · 2019-06-15T03:59:12.150Z · score: 2 (1 votes) · LW · GW

I think I agree that we can do some better UI work to show that separation, and I think that's probably the correct long-term strategy. But the backlog of additional features like that is long, and the difficulty of solving this problem well isn't trivial (and neither is the cost of messing up), so I was mostly comparing options that don't require any additional features like that and keep the existing site hierarchy.

This discussion has, however, made me update towards thinking that putting in the relevant effort would surface a good amount of additional value, so I will think about that more.

Comment by habryka4 on Spiracular's Shortform Feed · 2019-06-15T03:47:17.554Z · score: 5 (3 votes) · LW · GW

My preference for most of my shortform feed entries is to intentionally have a very limited amount of visibility, with most commenting coming from people who are primarily interested in a collaborative/explorative framing. My model of Spiracular (though they are very welcome to correct me) feels similar.

I think I mentioned in the past that I think it's good for ideas to start in an early explorative/generative phase and then later move to a more evaluative phase, and for me the shortform feeds try to fill the niche of making it as low-cost as possible to generate things. Some of these ideas (usually the best ones) then tend to get made into full posts (or, in my case, feature proposals for LessWrong), where I tend to be more welcoming of evaluative frames.

Comment by habryka4 on Recommendation Features on LessWrong · 2019-06-15T03:36:41.283Z · score: 2 (1 votes) · LW · GW

I think there is still a loss of ownership that people would feel when we add big moderator notes to the top of their posts, even if they are clearly signaled as moderator-added content. I think that would feel quite violating to many authors, though I might be wrong here.

I confess I don’t really know what you mean by this.

Not sure how to explain more. It would be good if there were some system that allowed users who are not moderators to inform other users about the updated epistemic content of a post. There are many potential ways to achieve that.

One might be to add inline comments that, when they reach a certain threshold of votes, are displayed prominently enough to get the attention of others reading the content for the first time (though that also comes with costs). Another might be to find some way to reduce or remove the strong first-mover bias in comment sections that prevents new comments from reaching the top of the comment section most of the time (voting activity is usually concentrated right after a post is created, which makes it hard for new comments to get a lot of upvotes).

Comment by habryka4 on Discourse Norms: Moderators Must Not Bully · 2019-06-15T02:54:33.607Z · score: 14 (8 votes) · LW · GW

For whatever it's worth, it definitely had a really big impact on my experience of this post in a way that felt to me like it invalidated most of its intention.

Comment by habryka4 on Recommendation Features on LessWrong · 2019-06-15T02:14:47.593Z · score: 2 (1 votes) · LW · GW
It seems to me that it would be extremely valuable to include posts like this in the recommendations—but annotate them with a note that the research in question hasn’t replicated. This would, I think, have an excellent pedagogic effect! To see how popular, how highly-upvoted, a study could be, while turning out later to have been bunk—think of the usefulness as a series of naturalistic rationality case studies! (Likewise useful would be to examine the comment threads of these old posts; did any of the commentariat suspect anything amiss? If so, what heuristics did they use? Did certain people consistently get it right, and if so, how? etc.) The new recommendation engine could do great good, in this way…

This is an interesting point. I think I would be in favor of this if we had a way to pin comments to the top as moderators. Right now I expect we could leave a comment, but I don't expect that comment to actually show up high enough in the comment tree to be seen by most users. We could edit the post, but I am particularly hesitant to write retraction notices for other people.

Ideally I would want a way for things like this to happen organically driven by user activity instead of moderator intervention, but I don't know yet how to best do that. Interested in suggestions, since it feels important for the broader vision of making progress over a long period of time.

Comment by habryka4 on Recommendation Features on LessWrong · 2019-06-15T02:11:06.383Z · score: 2 (1 votes) · LW · GW
Is this adjusted by post date? Posts from before the relaunch are going to have much less karma, on average (and as user karma grows and the karma weight of upvotes grows with it, average karma will increase further). A post from last month with 50 karma, and a post from 2010 with 50 karma, are really not comparable…

Rerunning the whole vote history with the new karma is one of the next things on our to-do list. Right now it will indeed be biased towards the recent year, which I hope to fix soon (that is one of the things that I consider necessary before removing the "[beta]" tag from the feature).

Recommendation Features on LessWrong

2019-06-15T00:23:18.102Z · score: 53 (14 votes)
Comment by habryka4 on SSC Sacramento Meetup · 2019-06-14T23:13:49.232Z · score: 2 (1 votes) · LW · GW

Huh, quite weird. Was this just on the edit page, and what browser were you using? Sorry for that happening.

Comment by habryka4 on SSC Sacramento Meetup · 2019-06-14T22:39:18.600Z · score: 3 (2 votes) · LW · GW

Moved this back to your drafts, since it didn't have a location and seems to end before it starts.

Comment by habryka4 on Yes Requires the Possibility of No · 2019-06-14T22:03:59.226Z · score: 4 (2 votes) · LW · GW

Promoted to curated: This post makes an important point that I haven't actually seen made anywhere else, but that I myself have had to explain on many past occasions, so having a more canonical reference for it is quite useful.

I also quite like the format, and I generally think that pointing to important concepts using a bunch of examples is relatively underutilized, given how easy those kinds of posts tend to be to write and how useful they tend to be.

Welcome to LessWrong!

2019-06-14T19:42:26.128Z · score: 64 (19 votes)
Comment by habryka4 on Editor Mini-Guide · 2019-06-14T02:55:55.373Z · score: 3 (2 votes) · LW · GW

You can use footnotes with the markdown editor. You can read about the syntax here:

https://github.com/markdown-it/markdown-it-footnote
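
For quick reference, a minimal sketch of that footnote syntax, based on the markdown-it-footnote documentation linked above, looks like this:

```markdown
Here is a sentence with a footnote reference.[^1]

[^1]: And here is the footnote text itself.
```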

Comment by habryka4 on Welcome and Open Thread June 2019 · 2019-06-13T23:18:11.891Z · score: 3 (2 votes) · LW · GW

Sorry, the syntax is slightly counterintuitive. In the WYSIWYG editor it's >! on a new line, rendering like this:

This is a spoiler

In markdown it's :::spoiler to open and ::: to close
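
As an illustration, a minimal sketch of the markdown block form (assuming the open/close syntax described above) would be:

```markdown
:::spoiler
This text is hidden until the reader reveals it.
:::
```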

Comment by habryka4 on FB/Discord Style Reacts · 2019-06-13T22:44:44.761Z · score: 2 (1 votes) · LW · GW

Interesting. Do you have any screenshots or more concrete descriptions of how trn works? Or maybe recommendations for other things?

Comment by habryka4 on Spiracular's Shortform Feed · 2019-06-13T20:50:54.117Z · score: 5 (3 votes) · LW · GW

Yay, shortform feeds!

Comment by habryka4 on FB/Discord Style Reacts · 2019-06-13T20:14:13.483Z · score: 2 (1 votes) · LW · GW
Threading on LW is not great

Is this compared to other sites, or do you just think threading in general has some problems?


Comment by habryka4 on Welcome and Open Thread June 2019 · 2019-06-13T20:04:40.839Z · score: 3 (2 votes) · LW · GW

There is a spoiler tag, but no collapsible sections (yet)

Comment by habryka4 on What kind of thing is logic in an ontological sense? · 2019-06-13T04:14:20.113Z · score: 2 (1 votes) · LW · GW

Edit note: Made the post into a question, since it seems like it was intended to be one.

Comment by habryka4 on Welcome and Open Thread June 2019 · 2019-06-12T01:07:33.545Z · score: 2 (1 votes) · LW · GW

Yeah, that's indeed just an import artifact. Fixing that should be pretty straightforward.

Comment by habryka4 on Long Term Future Fund applications open until June 28th · 2019-06-12T00:53:57.258Z · score: 4 (2 votes) · LW · GW

Yeah, I have a bunch of thoughts on that. I think I am hesitant about a management layer for a variety of reasons, including viewpoint diversity, the corrupting effects of power, and people not doing super good work when they are told what to do rather than figuring out what to do themselves.

My current perspective on this is that I want to ask the best people in the field what projects are missing, and then do public writeups for the LTF-Fund where I summarize that and also add my own perspective. Trying to improve the current situation on this axis is one of the big reasons why I am investing so much time in writing things up for the LTF-Fund.

Re the second question: I expect I will do at least some post-evaluation, but probably nothing super formal, mostly because of time constraints. I wrote some more in response to the same question here.

Comment by habryka4 on Long Term Future Fund applications open until June 28th · 2019-06-11T19:30:20.192Z · score: 4 (2 votes) · LW · GW

I am actually not a huge fan of the "operations bottleneck" framing, and so don't really have a great response to that. Maybe I can write something longer on this at some point, but the very short summary is that I've never seen the term "operations" used in any consistent way. Instead I've seen it refer to a very wide range of barely-overlapping skillsets, often covering very high-skill tasks for which people hope to find someone who is both willing to work with very little autonomy and willing to accept comparatively little compensation.

I think many orgs have very concrete needs for specific skillsets, and they need good people to fill them, but I don't think there is some general and uniform "operations skillset" missing at EA orgs, which makes building infrastructure for this a lot harder.

Comment by habryka4 on Get Rich Real Slowly · 2019-06-10T20:51:11.062Z · score: 4 (2 votes) · LW · GW

Modified your post to have actual footnotes. You can do it using the markdown editor. Footnote support for the WYSIWYG editor is in the works (but is obviously a bit more complicated).

Long Term Future Fund applications open until June 28th

2019-06-10T20:39:58.183Z · score: 28 (7 votes)
Comment by habryka4 on Logic, Buddhism, and the Dialetheia · 2019-06-10T04:32:50.525Z · score: 2 (1 votes) · LW · GW

Edit note: Fixed your images for you. You were linking to imgur pages rather than to the actual image addresses.

Comment by habryka4 on Our plan for 2019-2020: consulting for AI Safety education · 2019-06-09T18:50:57.786Z · score: 2 (1 votes) · LW · GW

Mostly think more about this question than they already have, which likely includes learning the best available models from others.

The critique here was more one of intention than one of epistemic state. It seems to me like there is a mental motion of being curious about how to make progress on something, even if one is still confused, which I contrast with a mental motion of "trying to look like you are working on the problem".

Comment by habryka4 on Arbital scrape · 2019-06-08T17:17:02.486Z · score: 4 (2 votes) · LW · GW

Yeah, notifications are for all comments, not just top-level.

Comment by habryka4 on [Answer] Why wasn't science invented in China? · 2019-06-08T03:36:28.105Z · score: 2 (1 votes) · LW · GW

Promoted to curated: I think the history of science is one of the most natural places to look for insights into rationality. And within the history of science, the question "Why the West?" is one of the most obviously important ones that could shed light on what allows people to make scientific discoveries. I am pretty excited about more analyses and summaries like this on LessWrong, as well as people taking what has been written here so far and asking more followup questions.

I also found a lot of the discussion and comments quite valuable, which is also worth highlighting and is another reason for curation.

I think my biggest problem with this post is the degree to which it ends up mostly being a summary of other people's work, in a way that makes it harder to really grok. I feel like a lot is lost when someone tries to summarize someone else's view instead of explaining their own, and I feel like this shows in at least some parts of this post.

Comment by habryka4 on Drowning children are rare · 2019-06-07T00:08:32.224Z · score: 4 (2 votes) · LW · GW

For the record, this is no longer going to be true starting in about a month (I think), since GiveWell is moving to Oakland and Open Phil is staying in SF.

Comment by habryka4 on Arbital scrape · 2019-06-07T00:03:48.472Z · score: 5 (3 votes) · LW · GW

This seems good. I remember Said being interested in hosting a static version of the site.

Comment by habryka4 on Site Guide: Personal Blogposts vs Frontpage Posts · 2019-06-06T22:49:48.896Z · score: 4 (2 votes) · LW · GW

Oh, yes. To be clear: whenever we delete anything, we still allow the author to access the content. We've never deleted anything in the sense of making the content inaccessible to its author, and we don't plan to ever do so.

Comment by habryka4 on Site Guide: Personal Blogposts vs Frontpage Posts · 2019-06-06T22:17:36.793Z · score: 2 (1 votes) · LW · GW

Ray basically ended up writing what I wanted to say, but happy to answer any more questions.

Comment by habryka4 on Integrating disagreeing subagents · 2019-06-06T21:29:29.031Z · score: 6 (3 votes) · LW · GW

Promoted to curated: I continue to think this whole sequence is about pretty important things, and this post in particular stands out as making connections to a large volume of existing writing both on LessWrong and in the established literature, which I think is particularly key for a topic like this.


Comment by habryka4 on Undiscriminating Skepticism · 2019-06-04T23:07:20.479Z · score: 11 (5 votes) · LW · GW

Huh, that sure was an interesting series of comments. Thanks for updating this after so many years and providing a tiny bit of data (and humour).

Comment by habryka4 on All knowledge is circularly justified · 2019-06-04T23:02:15.574Z · score: 2 (1 votes) · LW · GW

Edit note: Removed large amounts of trailing whitespace that I presume were not intentional.

Comment by habryka4 on Chapter 7: Reciprocation · 2019-06-04T19:45:44.769Z · score: 2 (1 votes) · LW · GW

Alas, then I guess the britpicking never properly completed.

Comment by habryka4 on Our plan for 2019-2020: consulting for AI Safety education · 2019-06-04T18:53:47.394Z · score: 4 (2 votes) · LW · GW

This seems roughly correct to me.

Comment by habryka4 on Asymmetric Justice · 2019-06-04T18:51:33.127Z · score: 9 (2 votes) · LW · GW

Promoted to curated: I think there is something really important in the Copenhagen Interpretation of Ethics, and this post expands on that concept in a bunch of important ways. I've ended up referring back to it a bunch of times over the last month, and I've found that it has significantly changed my models of the global coordination landscape.

Comment by habryka4 on Our plan for 2019-2020: consulting for AI Safety education · 2019-06-04T17:46:55.301Z · score: 2 (1 votes) · LW · GW

Note: I think view access to a document is not sufficient to see comments. At least I can't see any comments.

Comment by habryka4 on Chapter 7: Reciprocation · 2019-06-04T04:56:27.439Z · score: 2 (1 votes) · LW · GW

We copied the version from fanfiction.net two years ago. Maybe the HPMOR.com version is more up to date?

Comment by habryka4 on [deleted post] 2019-06-04T01:23:54.110Z

Fixed now.

Comment by habryka4 on Our plan for 2019-2020: consulting for AI Safety education · 2019-06-03T22:51:32.902Z · score: 22 (10 votes) · LW · GW

As the funder that you are very likely referring to, I do want to highlight that I don't feel like this summarizes my views particularly well. In particular this section:

EA really does seem to be missing a management layer. People are thinking about their careers, starting organisations, doing direct work and research. Not many people are drawing up plans for coordination on a higher level and telling people what to do. Someone ought to be dividing up the big picture into roles for people to fill. You can see the demand for this by how seriously we take 80k. They’re the only ones doing this beyond the organisational level.
Much the same in the cause area we call AI Safety Education. Most AIS organisations are necessarily thinking about hiring and training, but no one is specializing in it. In the coming year, our aim is to fill this niche, building expertise and doing management consulting. We will aim to smarten up the coordination there. Concrete outputs might be:
+ Advice for grantmakers that want to invest in the AI Safety researcher pipeline
+ Advice for students that want to get up to speed and test themselves quickly
+ Suggesting interventions for entrepreneurs that want to fill up gaps in the ecosystem
+ Publishing thinkpieces that advance the discussion of the community, like this one
+ Creating and keeping wiki pages about subjects that are relevant to us
+ Helping AIS research orgs with their recruitment process

I think in general people should be very hesitant to work on social coordination problems because they can't find a way to make progress on the object-level problems. My recommendation was very concretely "try to build an internal model of what really needs to happen for AI-risk to go well" and very much not "try to tell other people what really needs to happen for AI-risk", which is almost the exact opposite.

I actually think going explicitly in this direction is possibly worse than RAISE's previous plans. One of my biggest concerns with RAISE was precisely that it was trying far too early to tell people what exactly to learn and what to do, without understanding the relevant problems themselves first. This seems like it exacerbates that problem by trying to make your job explicitly about telling other people what to do.

A lot of my thoughts in this space are summarized by the discussion around Davis' recent post "Go Do Something", in particular Ray's and Ben Hoffman's comments about working on social coordination technology:

Benquo:

This works for versions of "do something" that mainly interact with objective reality, but there's a pretty awful value-misalignment problem if the way you figure out what works is through feedback from social reality.
So, for instance, learning to go camping or cook or move your body better or paint a mural on your wall might count, but starting a socially legible project may be actively harmful if you don't have a specific need that it's meeting that you're explicitly tracking. And unfortunately too much of people's idea of what "go do something" means ends up pointing to trying to collect credit for doing things.
Sitting somewhere doing nothing (which is basically what much meditation is) is at least unlikely to be harmful, and while of limited use in some circumstances, often an important intermediate stage in between trying to look like you're doing things, and authentically acting in the world.

Ray:

It's been said before for sure, but worth saying periodically.
Something I'd add, which particularly seems like the failure mode I see in EA-spheres (less in rationalist spheres, but they blur together):
Try to do something other than solve coordination problems.
Or, try to do something that provides immediate value to whoever uses it, regardless of whether other people are also using it.
A failure mode I see (and have often fallen to) is looking around and thinking "hmm, I don't know how to do something technical, and/or I don't have the specialist skills necessary to do something specialist. But, I can clearly see problems that stem from people being uncoordinated. I think I roughly know how people work, and I think I can understand this problem, so I will work on that."
But:
+ It actually requires just as much complex specialist knowledge to solve coordination problems as it does to do [whatever other thing you were considering].
+ Every time someone attempts to rally people around a new solution, and fails, they make it harder for the next person who tries to rally people around a new solution. This makes the coordination system overall worse.
This is a fairly different framing than Benquo's (and Eliezer's) advice, although I think it amounts to something similar.
Comment by habryka4 on 2017 LessWrong Survey · 2019-06-03T19:44:12.975Z · score: 2 (1 votes) · LW · GW

There haven't been any further ones, but I would be open to helping run one.

Comment by habryka4 on Site Guide: Personal Blogposts vs Frontpage Posts · 2019-06-01T04:52:26.717Z · score: 2 (1 votes) · LW · GW

Sure, I will try to write some more things about this early next week.

Comment by habryka4 on Site Guide: Personal Blogposts vs Frontpage Posts · 2019-06-01T01:35:48.616Z · score: 2 (1 votes) · LW · GW

Some attributes of Medium seem nice to me, including its low barrier to posting. I don't really think LessWrong should try to copy most of what they do.

Comment by habryka4 on Editor Mini-Guide · 2019-06-01T00:06:48.518Z · score: 2 (1 votes) · LW · GW

Yes, you're right. I mixed that up. Fixed.

Re the rest: This was advice for the post editor. For the comment editor we intentionally don't make it super easy to attach images, since that makes it too easy for a comment to get disproportionately more attention than seems good.

We didn't deactivate images outright (that keeps things easier for us on the backend and allows arbitrary content transfer between posts and comments), but that's why you don't see that button (you would see it in the post editor).

Comment section from 05/19/2019

2019-05-20T00:51:49.298Z · score: 18 (7 votes)

Kevin Simler's "Going Critical"

2019-05-16T04:36:32.470Z · score: 56 (21 votes)

Gwern's "Why Tool AIs Want to Be Agent AIs: The Power of Agency"

2019-05-05T05:11:45.805Z · score: 24 (6 votes)

[Meta] Hiding negative karma notifications by default

2019-05-04T02:36:43.919Z · score: 27 (8 votes)

Has government or industry had greater past success in maintaining really powerful technological secrets?

2019-05-01T02:24:52.302Z · score: 28 (7 votes)

Does the patent system prevent industry from keeping secrets?

2019-05-01T02:24:35.928Z · score: 8 (1 votes)

What are concrete historical examples of powerful technological secrets?

2019-05-01T02:22:37.870Z · score: 8 (1 votes)

Why is it important whether governments or industry projects are better at keeping secrets?

2019-05-01T02:10:21.533Z · score: 8 (1 votes)

Change A View: An interesting online community

2019-04-30T18:34:37.351Z · score: 53 (20 votes)

Habryka's Shortform Feed

2019-04-27T19:25:26.666Z · score: 61 (16 votes)

Long Term Future Fund: April 2019 grant decisions

2019-04-08T02:05:44.217Z · score: 52 (11 votes)

What LessWrong/Rationality/EA chat-servers exist that newcomers can join?

2019-03-31T03:30:20.819Z · score: 53 (13 votes)

How large is the fallout area of the biggest cobalt bomb we can build?

2019-03-17T05:50:13.848Z · score: 21 (5 votes)

How dangerous is it to ride a bicycle without a helmet?

2019-03-09T02:58:23.964Z · score: 32 (14 votes)

LW Update 2019-01-03 – New All-Posts Page, Author hover-previews and new post-item

2019-03-02T04:09:41.029Z · score: 28 (7 votes)

New versions of posts in "Map and Territory" and "How To Actually Change Your Mind" are up (also, new revision system)

2019-02-26T03:17:28.065Z · score: 36 (12 votes)

How good is a human's gut judgement at guessing someone's IQ?

2019-02-25T21:23:17.159Z · score: 45 (17 votes)

Major Donation: Long Term Future Fund Application Extended 1 Week

2019-02-16T23:30:11.243Z · score: 45 (12 votes)

EA Funds: Long-Term Future fund is open to applications until Feb. 7th

2019-01-17T20:27:17.619Z · score: 31 (11 votes)

Reinterpreting "AI and Compute"

2018-12-25T21:12:11.236Z · score: 33 (9 votes)

[Video] Why Not Just: Think of AGI Like a Corporation? (Robert Miles)

2018-12-23T21:49:06.438Z · score: 18 (4 votes)

Is the human brain a valid choice for the Universal Turing Machine in Solomonoff Induction?

2018-12-08T01:49:56.073Z · score: 21 (6 votes)

EA Funds: Long-Term Future fund is open to applications until November 24th (this Saturday)

2018-11-21T03:39:15.247Z · score: 38 (9 votes)

Switching hosting providers today, there probably will be some hiccups

2018-11-15T19:45:59.181Z · score: 13 (5 votes)

The new Effective Altruism forum just launched

2018-11-08T01:59:01.502Z · score: 28 (12 votes)

Introducing the AI Alignment Forum (FAQ)

2018-10-29T21:07:54.494Z · score: 89 (30 votes)

Upcoming API changes: Upgrading to Open-CRUD syntax

2018-10-04T02:28:39.366Z · score: 16 (3 votes)

AI Governance: A Research Agenda

2018-09-05T18:00:48.003Z · score: 27 (5 votes)

Changing main content font to Valkyrie?

2018-08-24T23:05:42.367Z · score: 25 (4 votes)

LW Update 2018-08-10 – Frontpage map, Markdown in LaTeX, restored posts and reversed spam votes

2018-08-10T18:14:53.909Z · score: 24 (10 votes)

SSC Meetups Everywhere 2018

2018-08-10T03:18:58.716Z · score: 31 (9 votes)

12 Virtues of Rationality posters/icons

2018-07-22T05:19:28.856Z · score: 49 (22 votes)

FHI Research Scholars Programme

2018-06-29T02:31:13.648Z · score: 34 (10 votes)

OpenAI releases functional Dota 5v5 bot, aims to beat world champions by August

2018-06-26T22:40:34.825Z · score: 56 (20 votes)

Announcement: Legacy karma imported

2018-05-31T02:53:01.779Z · score: 40 (8 votes)

Using the LessWrong API to query for events

2018-05-28T22:41:52.649Z · score: 12 (3 votes)

April Fools: Announcing: Karma 2.0

2018-04-01T10:33:39.961Z · score: 120 (38 votes)

Harry Potter and the Method of Entropy 1 [LessWrong version]

2018-03-31T20:38:45.125Z · score: 21 (4 votes)

Site search will be down for a few hours

2018-03-30T00:43:22.235Z · score: 12 (2 votes)

LessWrong.com URL transfer complete, data import will run for the next few hours

2018-03-23T02:40:47.836Z · score: 69 (20 votes)

You can now log in with your LW1 credentials on LW2

2018-03-17T05:56:13.310Z · score: 30 (6 votes)

Cryptography/Software Engineering Problem: How to make LW 1.0 logins work on LW 2.0

2018-03-16T04:01:48.301Z · score: 23 (4 votes)

Should we remove markdown parsing from the comment editor?

2018-03-12T05:00:22.062Z · score: 20 (5 votes)

Explanation of Paul's AI-Alignment agenda by Ajeya Cotra

2018-03-05T03:10:02.666Z · score: 55 (14 votes)

[Meta] New moderation tools and moderation guidelines

2018-02-18T03:22:45.142Z · score: 104 (36 votes)

Speed improvements and changes to data querying

2018-02-06T04:23:20.693Z · score: 32 (7 votes)

Models of moderation

2018-02-02T23:29:51.335Z · score: 61 (16 votes)