Comment by habryka4 on Crypto quant trading: Intro · 2019-04-21T17:53:01.410Z · score: 4 (2 votes) · LW · GW

Yeah, Satvik works on the same team as Alexei (I think)

Comment by habryka4 on Rest Days vs Recovery Days · 2019-04-21T04:34:13.910Z · score: 4 (2 votes) · LW · GW

Sunday is usually the church and rest day in most Christian countries, so that seems more canonical to me.

Comment by habryka4 on More realistic tales of doom · 2019-04-20T02:34:45.194Z · score: 6 (3 votes) · LW · GW

Promoted to curated: I think this post makes an important argument, and does so in a way that I expect will allow it and the resulting discussion around it to serve as a reference work for quite a while.

In addition to the post itself, I also thought the discussion around it was quite good and helped me clarify my thinking in this domain a good bit.

Comment by habryka4 on Username change and event page editing · 2019-04-19T20:09:36.086Z · score: 2 (1 votes) · LW · GW

Alas, the screenshot above is from FF66 on macOS. So it looks like the smallest things can make a difference :/

Comment by habryka4 on Username change and event page editing · 2019-04-19T18:47:15.848Z · score: 2 (1 votes) · LW · GW

It shows up for me. Here is a screenshot from my Firefox:

Firefox Screenshot

You can see it in the bottom right corner.

I do indeed test our website on Firefox about once a week and make sure everything works, and also do a bunch of crash-tracking to understand whether there are any browser-related issues that show up. You might be using an outdated version of Firefox, and sadly the development effort for testing all current versions of Firefox (and it's not just about older versions, most of our browser-related bugs actually come from more recent releases that then get patched) is beyond our development capacity.

Comment by habryka4 on Username change and event page editing · 2019-04-19T05:37:19.724Z · score: 2 (1 votes) · LW · GW

(In case it's unclear, the Intercom widget is the thing in the bottom right corner that allows you to open a chat with the dev team)

Comment by habryka4 on Liar Paradox Revisited · 2019-04-18T23:22:43.616Z · score: 2 (1 votes) · LW · GW

Hmm, it looks like it was written in the draft-js editor. In any case, I fixed it for you.

Comment by habryka4 on Where to Draw the Boundaries? · 2019-04-18T23:19:29.656Z · score: 10 (3 votes) · LW · GW

After rereading the post a few times, I think you are just misunderstanding it?

Like, I can't make sense of your top-level comment in my current interpretation of the post, and as such I interpreted your comment as asking for clarification in a weirdly hostile tone (which was supported by your first sentence being "What is that sentence supposed to tell me?"). I generally think it's a bad idea to start substantive criticisms of a post with a rhetorical question that's hard to distinguish from a genuine question (and probably would advise against rhetorical questions in general, but am less confident of that).

To me the section you quoted seems relatively clear, and makes a pretty straightforwardly true point, and from my current vantage point I fail to understand your criticism of it. I would be happy to try to explain my current interpretation, but would need a bit more help understanding what your current perspective is.

Comment by habryka4 on Where to Draw the Boundaries? · 2019-04-18T19:17:03.902Z · score: 8 (4 votes) · LW · GW

I do find myself somewhat confused about the hostility in this comment. It's hard to write good things, and there will always be misunderstandings. Many posts on LessWrong are unnecessarily confusing, including many posts by Eliezer, usually just because it takes a lot of effort, time and skill to polish a post to the point where it's completely clear to everyone on the site (and in many technical subjects achieving that bar is often impossible).

Recommendations for how to phrase things in a clear way seem good to me, and I appreciate them on my writing, but doing so in a way that implies some kind of major moral failing seems like it makes people overall less likely to post, and also overall less likely to react positively to feedback.

Comment by habryka4 on Liar Paradox Revisited · 2019-04-18T03:32:54.601Z · score: 2 (1 votes) · LW · GW

Weird, here is a gif of how it's supposed to work:

Are you using the markdown or the draft-js editor? If it's the markdown editor, then surrounding text with those backticks should render everything between them as code.

Comment by habryka4 on Crypto quant trading: Intro · 2019-04-17T21:52:18.710Z · score: 9 (6 votes) · LW · GW

This is quite interesting. I don't expect to ever get into quant-trading myself, but still expect to find a bunch of this valuable. I like the hands-on approach and the pace seems roughly good to me so far, though I already have some amount of data-science experience.

The anti-inductive nature of the whole thing makes me really confused about what kinds of strategies I would want to explore. Like, we could use any of the standard AI methods, from Bayesian modeling to hidden Markov models to deep neural nets, but I don't expect any of them to work.
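As an illustration of the simplest of those approaches (my own toy sketch, not from the original post): a conjugate normal-normal Bayesian update on the drift of a return series, assuming the per-period noise variance is known. All parameter values here are hypothetical.

```python
import numpy as np

def posterior_drift(returns, prior_mean=0.0, prior_var=1e-4, noise_var=1e-4):
    """Conjugate normal-normal update for the mean (drift) of i.i.d. returns,
    assuming the per-period noise variance is known.
    Returns the posterior mean and variance of the drift."""
    n = len(returns)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + np.sum(returns) / noise_var)
    return post_mean, post_var
```

With enough data the posterior mean approaches the sample mean; the anti-inductive worry is precisely that by the time such a posterior is confident, the edge has likely been traded away.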

Comment by habryka4 on Liar Paradox Revisited · 2019-04-17T19:18:19.012Z · score: 2 (1 votes) · LW · GW

Open a code-block by typing ``` (triple backtick), then press enter.
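For example, typing this in the markdown editor:

````
```
f(x) = x + 1
```
````

renders everything between the triple backticks as a code block.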

Comment by habryka4 on What is Driving the Continental Drift? · 2019-04-17T18:57:17.043Z · score: 2 (1 votes) · LW · GW
Way back then and before, I was already a fan of Alfred Wegener's theory of continental drift, which was still being disputed at the time;

It's still super weird to me how recent continental drift is as a theory with broad scientific consensus. However, I am pretty sure that it was already in all the high-school textbooks by the late 80s and 90s, so I am somewhat confused by this sentence.

Comment by habryka4 on What is Driving the Continental Drift? · 2019-04-17T18:50:30.468Z · score: 2 (1 votes) · LW · GW

Fixed them for you. Sorry for the technical difficulties, feel free to ping us on Intercom if you ever run into problems like this again.

Comment by habryka4 on Experimental Open Thread April 2019: Socratic method · 2019-04-17T18:18:45.399Z · score: 1 (2 votes) · LW · GW

I mean, at least that was the whole point of Socrates' questioning, wasn't it? Maybe we need a different term for something that is less adversarial, but compared to Plato's original texts, the questions here are much less leading.

Comment by habryka4 on StrongerByScience: a rational strength training website · 2019-04-17T18:15:53.136Z · score: 2 (1 votes) · LW · GW

(Removed the duplicate [Link] from the title)

Comment by habryka4 on Robin Hanson on Simple, Evidence Backed Models · 2019-04-17T18:12:41.126Z · score: 2 (1 votes) · LW · GW

There is the classic example from Schelling's The Strategy of Conflict (ironically copied from your own book summary):

Imagine you and I have been separately parachuted into an unknown mountainous area. We both have maps and radios, and we know our own positions, but don't know each other's positions. The task is to rendezvous. Normally we'd coordinate by radio and pick a suitable meeting point, but this time you got lucky. So lucky in fact that I want to strangle you: upon landing you discovered that your radio is broken. It can transmit but not receive.
Two days of rock-climbing and stream-crossing later, tired and dirty, I arrive at the hill where you've been sitting all this time smugly enjoying your lack of information.
And after we split the prize and cash our checks I learn that you broke the radio on purpose.

(Edit: Nevermind, I misread the above as saying "In what simple models are people worse off when they have more information about others")

Comment by habryka4 on Conspiracy World is missing. · 2019-04-17T18:10:13.844Z · score: 2 (1 votes) · LW · GW

Interesting idea. I will look into that.

Comment by habryka4 on Open Problems in Archipelago · 2019-04-17T18:09:01.687Z · score: 6 (4 votes) · LW · GW

I actually made the same analogy yesterday while talking with some people about burnout in the EA and Rationality communities. I do think the models here apply pretty well.

Comment by habryka4 on Open Problems in Archipelago · 2019-04-17T03:38:22.330Z · score: 9 (5 votes) · LW · GW

Yeah, this is roughly what I meant with not really giving it a shot in terms of UI.

Comment by habryka4 on Open Problems in Archipelago · 2019-04-16T23:25:34.778Z · score: 29 (10 votes) · LW · GW

In a broader sense, I do kind of feel like from a UI and culture perspective, we never really gave the Archipelago stuff a real shot. I do think we should make a small update that the problem can't just be solved by giving a bunch of people moderation power and allowing them to set their own guidelines, but I think I already modeled the problem as pretty difficult and so this isn't a major update.

We did end up implementing the AI Alignment Forum, which I do actually think is working pretty well and is a pretty good example of how I imagine Archipelago-like stuff to play out. We now also have both the EA Forum and LessWrong creating some more archipelago-like diversity in the online-forum space.

That said, I don't actually think this should be our top priority, though the last few weeks have updated me more towards a bunch of problems in this space being things we need to start tackling again soon. My current model is that the top priority should be more about establishing the latter stages of the intellectual progress funnel with stuff like Q&A, and that some of those things are actually more likely to solve a lot of the things that the Archipelago was trying to solve (as an example, I expect spaces oriented around a question to generate less conflict-heavy discussions, which I expect will make people more interested in writing up their ideas publicly. I also expect questions to more naturally give rise to conversations oriented around some concrete outcome, which I also expect to create a more focused atmosphere and support more archipelago-like conversations)

Comment by habryka4 on Open Problems in Archipelago · 2019-04-16T23:11:34.800Z · score: 7 (3 votes) · LW · GW

There is still the whole off-topic/on-topic thing that lots of people were excited about in the Archipelago thread. That still seems like plausibly a good idea to me.

Comment by habryka4 on Slack Club · 2019-04-16T18:40:39.026Z · score: 25 (8 votes) · LW · GW
I point this out merely because I realized, when you brought up the spiritual example, that I wasn't given a full account of what's different about rationalists, maybe, in that there's a tendency to make new jargon even when a literature search would reveal existing jargon exists.

I don't think this is different for STEM, or cognitive science, or self-help. Having studied both CS and math, and some physics in my off-time, I can say that everyone constantly invents new names for all the things. To give you a taste, here is the first paragraph from the Wikipedia article on Tikhonov regularization:

Tikhonov regularization, named for Andrey Tikhonov, is the most commonly used method of regularization of ill-posed problems. In statistics, the method is known as ridge regression, in machine learning it is known as weight decay, and with multiple independent discoveries, it is also variously known as the Tikhonov–Miller method, the Phillips–Twomey method, the constrained linear inversion method, and the method of linear regularization. It is related to the Levenberg–Marquardt algorithm for non-linear least-squares problems.

You will find the same pattern of lots of different names for the exact same thing in almost all statistical concepts in the Wikipedia series on statistics.
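Whatever name you use, the underlying method is the same few lines. A minimal sketch (my own illustration) of the closed-form solution w = (XᵀX + λI)⁻¹Xᵀy:

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form Tikhonov-regularized least squares:
    minimizes ||X w - y||^2 + lam * ||w||^2 over the weights w."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```

"Weight decay" in machine learning is this same L2 penalty, just applied incrementally during gradient descent rather than solved in closed form.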

Comment by habryka4 on Can coherent extrapolated volition be estimated with Inverse Reinforcement Learning? · 2019-04-16T17:38:55.558Z · score: 3 (2 votes) · LW · GW

Ah, yes. Sorry. Should have made the authorship of that quote clearer.

Comment by habryka4 on Highlights from "Integral Spirituality" · 2019-04-16T06:38:04.193Z · score: 2 (1 votes) · LW · GW

Hmm, so originally I thought it would be best for you to create a new top-level post on your own, but I think Ray (Raemon) is planning to publish a question about this pretty soon, so it might be a better idea to wait for 24 hours or so, since that would be the most natural place to consolidate the discussion. Though you are welcome to create a new top-level post right now if that doesn't seem like a good idea to you.

Comment by habryka4 on Highlights from "Integral Spirituality" · 2019-04-16T06:24:20.597Z · score: 4 (2 votes) · LW · GW

Criticizing the post is fine, but please don't bring up the discussion of whether something belongs on LessWrong on every post of this type. We can (and should) have that discussion, we should just have it on a separate post, and ideally in a consolidated place so we don't have the same discussion a dozen times (as we've already had it a few times).

Since this post also has a set of explicitly stated moderation guidelines (called attention to in the post itself) that this comment doesn't seem compatible with, and since Gordon has the "I'm happy for LW site moderators to help enforce my policy" checkbox on his profile checked, I will delete this comment for now. I'm not sure whether that's the right call, but it's in line with our historic policies, which I am happy to discuss in a separate post (as is the question of whether we should promote posts with more stringent moderation guidelines to the frontpage, which I usually avoid, though it looks like we didn't avoid it in this case. My guess is I will move the post back to personal blog tomorrow after thinking more about it, but I'm not sure).

To be clear, I share some of your concerns and I am seriously interested in discussing this more. I would just prefer to consolidate it in a central place.

Comment by habryka4 on Rest Days vs Recovery Days · 2019-04-16T02:10:35.101Z · score: 12 (5 votes) · LW · GW

Promoted to curated: I've found this post personally quite useful. Not necessarily because it said new things, but because it gave a handle to a bunch of stuff that I already had vague intuitions about.

I've actually generally benefited a lot from the whole Sabbath stuff, and have adopted a habit of spending my Sundays away from any internet-connected devices (except in emergencies). In that practice, I also noticed that when I didn't successfully take a recovery day on Saturday, I would have a lot of trouble properly resting on Sunday, in a way that pretty closely resembled the ideas in this post.

Besides that, I appreciate that this post isn't much longer than it needs to be, properly links to related articles and generally makes its point in a clear and concise way.

The biggest criticism I have of this post is that the two words for the two days just sound too similar in my head. When referring to this post I repeatedly had to do a double-take where I rederive the meaning of the two words, and often forgot one of them since they didn't obviously derive from one another. I also have some personal distaste for the call-to-actiony things right at the end, though this one was basically fine and I expect it will have made the post better for other people.

Comment by habryka4 on Conspiracy World is missing. · 2019-04-15T19:09:52.794Z · score: 3 (2 votes) · LW · GW

Yeah, tag-links were quite rare, so we didn't port them over for dev-time reasons. If lots of people want them back, we can probably figure out some way to make them work again.

For any collection of posts that people want to link to, I recommend creating a sequence on the Library page. Anyone can create any sequence there, and a bunch of other users have already ported over most of the sequences listed on the LessWrong wiki.

Comment by habryka4 on Can coherent extrapolated volition be estimated with Inverse Reinforcement Learning? · 2019-04-15T18:25:14.459Z · score: 15 (5 votes) · LW · GW

Did you read Rohin Shah's value learning sequence? It covers this whole area in a good amount of detail, and I think answers your question pretty straightforwardly:

Existing error models for inverse reinforcement learning tend to be very simple, ranging from Gaussian noise in observations of the expert’s behavior or sensor readings, to the assumption that the expert’s choices are randomized with a bias towards better actions.
In fact humans are not rational agents with some noise on top. Our decisions are the product of a complicated mess of interacting processes, optimized by evolution for the reproduction of our children’s children. It’s not clear there is any good answer to what a “perfect” human would do. If you were to find any principled answer to “what is the human brain optimizing?” the single most likely bet is probably something like “reproductive success.” But this isn’t the answer we are looking for.
I don’t think that writing down a model of human imperfections, which describes how humans depart from the rational pursuit of fixed goals, is likely to be any easier than writing down a complete model of human behavior.
We can’t use normal AI techniques to learn this kind of model, either — what is it that makes a model good or bad? The standard view — “more accurate models are better” — is fine as long as your goal is just to emulate human performance. But this view doesn’t provide guidance about how to separate the “good” part of human decisions from the “bad” part.
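The "randomized with a bias towards better actions" error model mentioned in the quote is usually formalized as a Boltzmann-rational expert. A minimal sketch (my own illustration, not from the sequence):

```python
import numpy as np

def boltzmann_policy(q_values, beta=1.0):
    """P(action) proportional to exp(beta * Q(action)).
    As beta -> inf the expert becomes perfectly rational;
    as beta -> 0 the expert acts uniformly at random."""
    z = beta * (np.asarray(q_values, dtype=float) - np.max(q_values))  # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()
```

IRL under this model infers the Q-values (and hence rewards) most likely to have produced the observed action frequencies; the sequence's point is that actual human deviations from rationality are far messier than any such simple model.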

Here is a link to the full sequence:

Comment by habryka4 on The Hard Work of Translation (Buddhism) · 2019-04-12T20:15:15.683Z · score: 5 (3 votes) · LW · GW
The delusion that such statements “approximately capture the truth” of things like GR is pervasive, but no less a delusion for it.

Not sure whether we disagree here; my guess is that I am just unsure what you intend to say. I do think there are statements like "time will pass more slowly relative to a stationary observer if you move close to the speed of light" that are highly specific predictions which can be verified (given sufficient investment in experiments) without deeply understanding the theory of relativity. Such a statement definitely captures some aspect of the truth of general relativity.

If some process (like a physicist or a research lab) repeatedly generates highly surprising predictions like this that turn out to come true, someone might be said to meaningfully be "convinced of the veracity of general relativity" without a concrete understanding of the underlying theory.

Comment by habryka4 on Plans are Recursive & Why This is Important · 2019-04-12T19:20:55.987Z · score: 2 (1 votes) · LW · GW

This is at least true in feed-forward neural nets.

Comment by habryka4 on Book review: Why we sleep · 2019-04-08T06:16:04.265Z · score: 2 (1 votes) · LW · GW

Excellent, these really do look like the studies that I remember reading. Thank you a lot!

I would be glad to send you $10 via PayPal if you want, since I've been looking for these for quite a while.

Comment by habryka4 on [Spoilers] How did Voldemort learn the horcrux spell? · 2019-04-08T06:12:45.813Z · score: 3 (2 votes) · LW · GW

Mod note: Added spoiler tags

Long Term Future Fund: April 2019 grant decisions

2019-04-08T02:05:44.217Z · score: 52 (11 votes)
Comment by habryka4 on 0 And 1 Are Not Probabilities · 2019-04-07T19:24:00.855Z · score: 3 (2 votes) · LW · GW

We published new versions of a lot of sequences posts a few months ago. If you click on the "Response to previous version" text, you can read the original text that the comment was referring to.

Comment by habryka4 on What are the advantages and disadvantages of knowing your own IQ? · 2019-04-07T05:53:15.562Z · score: 9 (5 votes) · LW · GW

I think it would pretty significantly influence which career paths I would choose. If it came back with an IQ of 90 I would be very hesitant to go into mathematics, or really research professions in general, since those tend to be both pretty heavy-tailed and seem to rely on having a high general intelligence. I would be much more likely to choose a profession that is more standardized and has less variable outcomes (most service industry jobs seem to fit reasonably well here).

Also lots of other things, like how much I should trust my own judgement vs. relying on tradition and well-established norms. It's not like this single piece of information would totally dominate my considerations, but it would add up to a general self-model that might include realizing that I am likely to be wrong when challenging tradition and established rules, and having a higher IQ should make me at least a bit bolder in challenging existing rules, since I would be more likely to actually get it right.

Also simple other things, like how much effort to put into getting high SAT scores, which would determine lots of college admissions (or whatever my local country's equivalent of the SAT is). I would be more hesitant to switch professions, since I would expect to be less good at learning new skills than other people.

(Note: Thrasymachus's response gets the basics right, in that I would expect most people to have a pretty good guess of their broad competence already. Knowing your IQ score is unlikely to be the most critical piece of evidence you will get on that, but if it's really out of line with all the other evidence you have about yourself, then I would pay attention to that and try to figure out what's up.)

Comment by habryka4 on LessWrong 2.0 Feature Roadmap & Feature Suggestions · 2019-04-06T17:55:05.291Z · score: 4 (2 votes) · LW · GW

I am currently hesitant to allow users to do that independently and without restrictions, because people changing usernames all the time can cause a bunch of confusion with authorship, make search worse and also enables impersonation.

Right now, you can ping us on Intercom and we will gladly change your username.

Comment by habryka4 on User GPT2 Has a Warning for Violating Frontpage Commenting Guidelines · 2019-04-06T17:51:01.082Z · score: 4 (2 votes) · LW · GW

You were wrong about this aspect of GPT-2. Here is a screenshot of the plain markdown version that we got directly from GPT-2:


Comment by habryka4 on IRL 5/8: Maximum Causal Entropy IRL · 2019-04-04T23:05:53.190Z · score: 2 (1 votes) · LW · GW

I am pretty sure that you don't need a university account, though it does require registration. Is there anything in particular that made you think it's only for some universities?

Comment by habryka4 on User GPT2 is Banned · 2019-04-02T18:25:14.749Z · score: 6 (4 votes) · LW · GW

Sure, I am happy to share the training code, though we used our direct database access to export the data to train it, and that data doesn't currently contain any author information. Though you can theoretically get all the data via the API.

Comment by habryka4 on User GPT2 Has a Warning for Violating Frontpage Commenting Guidelines · 2019-04-02T07:45:34.707Z · score: 2 (1 votes) · LW · GW

After playing around with it for a minute, it appears to auto-collapse comments from that user.

Comment by habryka4 on Open Thread April 2019 · 2019-04-01T07:38:48.449Z · score: 2 (1 votes) · LW · GW

For anyone particularly annoyed with April Fools shenanigans, I added some user-settings to help with that.

Comment by habryka4 on What are effective strategies for mitigating the impact of acute sleep deprivation on cognition? · 2019-04-01T03:30:00.328Z · score: 2 (1 votes) · LW · GW

Reference: Gwern's post on Modafinil

Comment by habryka4 on [deleted post] 2019-04-01T03:14:09.868Z


Comment by habryka4 on [deleted post] 2019-04-01T03:11:27.543Z


Comment by habryka4 on [deleted post] 2019-04-01T03:02:37.059Z

Another test comment

Comment by habryka4 on [deleted post] 2019-04-01T02:53:49.034Z


Comment by habryka4 on Review of Q&A [LW2.0 internal document] · 2019-04-01T02:26:40.910Z · score: 2 (1 votes) · LW · GW

Syntax is based on the markdown-it footnotes plugin:

I will add it to my to-do list to update our editor guides generally and make them more discoverable; the syntax is currently not documented anywhere.

Comment by habryka4 on Experimental Open Thread April 2019: Socratic method · 2019-04-01T02:21:58.330Z · score: 3 (2 votes) · LW · GW

Mod note: I decided to promote this post to the frontpage, which does mean frontpage guidelines apply, though I think overall we can be pretty flexible in this thread. Depending on how it goes we might want to promote future threads like this to the frontpage or leave them on personal blog.

What LessWrong/Rationality/EA chat-servers exist that newcomers can join?

2019-03-31T03:30:20.819Z · score: 53 (13 votes)
Comment by habryka4 on Ductive Defender: a probability game prototype · 2019-03-31T02:42:41.220Z · score: 2 (1 votes) · LW · GW

You are given conditional probabilities of your chance of hitting each asteroid, and later on get some ability to choose different probabilities for different asteroids.

Comment by habryka4 on Please use real names, especially for Alignment Forum? · 2019-03-29T06:12:58.691Z · score: 2 (1 votes) · LW · GW

GW should have that data available by querying the fullName field on users, so that should be easy to implement for them.

I think at the least we should allow users to set a setting to display their full name by default and show it on hover. I am a bit hesitant to do the parenthesis thing, just because it would make usernames quite big, which I think will cause some problems with some upcoming redesigns we have for the frontpage.

How large is the fallout area of the biggest cobalt bomb we can build?

2019-03-17T05:50:13.848Z · score: 21 (5 votes)

How dangerous is it to ride a bicycle without a helmet?

2019-03-09T02:58:23.964Z · score: 32 (14 votes)

LW Update 2019-01-03 – New All-Posts Page, Author hover-previews and new post-item

2019-03-02T04:09:41.029Z · score: 28 (7 votes)

New versions of posts in "Map and Territory" and "How To Actually Change Your Mind" are up (also, new revision system)

2019-02-26T03:17:28.065Z · score: 36 (12 votes)

How good is a human's gut judgement at guessing someone's IQ?

2019-02-25T21:23:17.159Z · score: 43 (16 votes)

Major Donation: Long Term Future Fund Application Extended 1 Week

2019-02-16T23:30:11.243Z · score: 45 (12 votes)

EA Funds: Long-Term Future fund is open to applications until Feb. 7th

2019-01-17T20:27:17.619Z · score: 31 (11 votes)

Reinterpreting "AI and Compute"

2018-12-25T21:12:11.236Z · score: 33 (9 votes)

[Video] Why Not Just: Think of AGI Like a Corporation? (Robert Miles)

2018-12-23T21:49:06.438Z · score: 18 (4 votes)

Is the human brain a valid choice for the Universal Turing Machine in Solomonoff Induction?

2018-12-08T01:49:56.073Z · score: 21 (6 votes)

EA Funds: Long-Term Future fund is open to applications until November 24th (this Saturday)

2018-11-21T03:39:15.247Z · score: 38 (9 votes)

Switching hosting providers today, there probably will be some hiccups

2018-11-15T19:45:59.181Z · score: 13 (5 votes)

The new Effective Altruism forum just launched

2018-11-08T01:59:01.502Z · score: 28 (12 votes)

Introducing the AI Alignment Forum (FAQ)

2018-10-29T21:07:54.494Z · score: 88 (29 votes)

Upcoming API changes: Upgrading to Open-CRUD syntax

2018-10-04T02:28:39.366Z · score: 16 (3 votes)

AI Governance: A Research Agenda

2018-09-05T18:00:48.003Z · score: 27 (5 votes)

Changing main content font to Valkyrie?

2018-08-24T23:05:42.367Z · score: 25 (4 votes)

LW Update 2018-08-10 – Frontpage map, Markdown in LaTeX, restored posts and reversed spam votes

2018-08-10T18:14:53.909Z · score: 24 (10 votes)

SSC Meetups Everywhere 2018

2018-08-10T03:18:58.716Z · score: 31 (9 votes)

12 Virtues of Rationality posters/icons

2018-07-22T05:19:28.856Z · score: 49 (22 votes)

FHI Research Scholars Programme

2018-06-29T02:31:13.648Z · score: 34 (10 votes)

OpenAI releases functional Dota 5v5 bot, aims to beat world champions by August

2018-06-26T22:40:34.825Z · score: 56 (20 votes)

Announcement: Legacy karma imported

2018-05-31T02:53:01.779Z · score: 40 (8 votes)

Using the LessWrong API to query for events

2018-05-28T22:41:52.649Z · score: 12 (3 votes)

April Fools: Announcing: Karma 2.0

2018-04-01T10:33:39.961Z · score: 120 (38 votes)

Harry Potter and the Method of Entropy 1 [LessWrong version]

2018-03-31T20:38:45.125Z · score: 21 (4 votes)

Site search will be down for a few hours

2018-03-30T00:43:22.235Z · score: 12 (2 votes)

URL transfer complete, data import will run for the next few hours

2018-03-23T02:40:47.836Z · score: 69 (20 votes)

You can now log in with your LW1 credentials on LW2

2018-03-17T05:56:13.310Z · score: 30 (6 votes)

Cryptography/Software Engineering Problem: How to make LW 1.0 logins work on LW 2.0

2018-03-16T04:01:48.301Z · score: 23 (4 votes)

Should we remove markdown parsing from the comment editor?

2018-03-12T05:00:22.062Z · score: 20 (5 votes)

Explanation of Paul's AI-Alignment agenda by Ajeya Cotra

2018-03-05T03:10:02.666Z · score: 55 (14 votes)

[Meta] New moderation tools and moderation guidelines

2018-02-18T03:22:45.142Z · score: 104 (36 votes)

Speed improvements and changes to data querying

2018-02-06T04:23:20.693Z · score: 32 (7 votes)

Models of moderation

2018-02-02T23:29:51.335Z · score: 58 (14 votes)

RSS Feeds are fixed and should be properly functional this time

2018-01-30T00:47:58.791Z · score: 12 (4 votes)

Notification update and PM fixes

2018-01-25T01:06:42.589Z · score: 24 (5 votes)

Sorry for being less around, will be back soon

2017-11-21T20:40:14.509Z · score: 16 (4 votes)

11/07/2017 Development Update: LaTeX!

2017-11-07T11:06:58.012Z · score: 29 (10 votes)

Moved from Moloch's Toolbox: Discussion re style of latest Eliezer sequence

2017-11-05T02:22:48.471Z · score: 18 (11 votes)

10/28/17 Development Update: New editor framework (markdown support!), speed improvements, style improvements

2017-10-28T09:59:07.667Z · score: 25 (7 votes)

10/25/17 Development update: View-tracking & database fixes

2017-10-25T12:33:56.589Z · score: 20 (6 votes)

Continuing the discussion thread from the MTG post

2017-10-24T02:20:22.980Z · score: 17 (5 votes)

10/20/17 Development Update: HTML pasting, the return of the bullets and bugfixes

2017-10-20T08:31:25.396Z · score: 9 (2 votes)

10/19/2017: Development Update (new vote backend, revamped user pages and advanced editor)

2017-10-19T09:09:06.061Z · score: 21 (7 votes)

Activated social login, temporarily deactivated normal signup

2017-10-14T03:59:11.873Z · score: 16 (3 votes)

10/10/2017: Development update ("autosaving" & Intercom options)

2017-10-10T19:15:12.804Z · score: 9 (2 votes)

10/05/2017: Development update (performance & styles)

2017-10-06T07:22:36.777Z · score: 11 (3 votes)