Posts

Take the wheel, Shoggoth! (Lesswrong is trying out changes to the frontpage algorithm) 2024-04-23T03:58:43.443Z
Jobs, Relationships, and Other Cults 2024-03-13T05:58:45.043Z
The "context window" analogy for human minds 2024-02-13T19:29:10.387Z
Throughput vs. Latency 2024-01-12T21:37:07.632Z
Taking responsibility and partial derivatives 2023-12-31T04:33:51.419Z
The proper response to mistakes that have harmed others? 2023-12-31T04:06:31.505Z
Dialogue on the Claim: "OpenAI's Firing of Sam Altman (And Shortly-Subsequent Events) On Net Reduced Existential Risk From AGI" 2023-11-21T17:39:17.828Z
Is the Wave non-disparagement thingy okay? 2023-10-14T05:31:21.640Z
Petrov Day Retrospective, 2023 (re: the most important virtue of Petrov Day & unilaterally promoting it) 2023-09-28T02:48:58.994Z
Joseph Bloom on choosing AI Alignment over bio, what many aspiring researchers get wrong, and more (interview) 2023-09-17T18:45:28.891Z
What is the optimal frontier for due diligence? 2023-09-08T18:20:03.300Z
Conversation about paradigms, intellectual progress, social consensus, and AI 2023-09-05T21:30:17.498Z
Announcement: AI Narrations Available for All New LessWrong Posts 2023-07-20T22:17:33.454Z
Some reasons to not say "Doomer" 2023-07-09T21:05:06.585Z
Open Thread - July 2023 2023-07-06T04:50:06.735Z
Reacts now enabled on 100% of posts, though still just experimenting 2023-05-28T05:36:40.953Z
New User's Guide to LessWrong 2023-05-17T00:55:49.814Z
Thoughts on LessWrong norms, the Art of Discourse, and moderator mandate 2023-05-11T21:20:52.537Z
[New] Rejected Content Section 2023-05-04T01:43:19.547Z
Open & Welcome Thread - May 2023 2023-05-02T02:58:01.690Z
What 2025 looks like 2023-05-01T22:53:15.783Z
Should LW have an official list of norms? 2023-04-25T21:20:53.624Z
[Feedback please] New User's Guide to LessWrong 2023-04-25T18:54:40.379Z
LW moderation: my current thoughts and questions, 2023-04-12 2023-04-20T21:02:54.730Z
A Confession about the LessWrong Team 2023-04-01T21:47:11.572Z
[New LW Feature] "Debates" 2023-04-01T07:00:24.466Z
LW Filter Tags (Rationality/World Modeling now promoted in Latest Posts) 2023-01-28T22:14:32.371Z
The LessWrong 2021 Review: Intellectual Circle Expansion 2022-12-01T21:17:50.321Z
Petrov Day Retrospective: 2022 2022-09-28T22:16:20.325Z
LW Petrov Day 2022 (Monday, 9/26) 2022-09-22T02:56:19.738Z
Which LessWrong content would you like recorded into audio/podcast form? 2022-09-13T01:20:06.498Z
Relationship Advice Repository 2022-06-20T14:39:36.548Z
Open & Welcome Thread - May 2022 2022-05-02T23:47:21.181Z
A Quick Guide to Confronting Doom 2022-04-13T19:30:48.580Z
March 2022 Welcome & Open Thread 2022-03-02T19:00:43.263Z
[Beta Feature] Google-Docs-like editing for LessWrong posts 2022-02-23T01:52:22.141Z
[New Feature] Support for Footnotes! 2022-01-04T07:35:21.500Z
Open & Welcome Thread November 2021 2021-11-01T23:43:55.006Z
Petrov Day Retrospective: 2021 2021-10-21T21:50:40.042Z
Book Review Review (end of the bounty program) 2021-10-15T03:23:04.300Z
Petrov Day 2021: Mutually Assured Destruction? 2021-09-22T01:04:26.314Z
LessWrong is paying $500 for Book Reviews 2021-09-14T00:24:23.507Z
You can get feedback on ideas and external drafts too 2021-09-09T21:06:04.446Z
LessWrong is providing feedback and proofreading on drafts as a service 2021-09-07T01:33:10.666Z
(apologies for Alignment Forum server outage last night) 2021-08-25T14:45:06.906Z
Welcome & FAQ! 2021-08-24T20:14:21.161Z
The Case for Extreme Vaccine Effectiveness 2021-04-13T21:08:39.470Z
Vows & Declaration 2021-03-20T17:21:52.866Z
Feelings of Admiration, Ruby <=> Miranda 2021-03-19T16:12:35.577Z
Partnership 2021-03-11T17:17:02.266Z

Comments

Comment by Ruby on Take the wheel, Shoggoth! (Lesswrong is trying out changes to the frontpage algorithm) · 2024-04-26T19:22:43.847Z · LW · GW

Over the years the idea of a closed forum for more sensitive discussion has been raised, but never seemed to quite make sense. Significant issues included:
- It seems really hard or impossible to make it secure from nation-state attacks
- It seems likely that members would leak stuff (even if only via their own devices not being adequately secure, or the like)

I'm thinking you can impose some degree of inconvenience (and therefore delay) on an attacker, but it's hard to have large shared infrastructure that's that secure from attack.

Comment by Ruby on Take the wheel, Shoggoth! (Lesswrong is trying out changes to the frontpage algorithm) · 2024-04-26T18:44:58.873Z · LW · GW

I'd be interested in a comparison with the Latest tab.

Comment by Ruby on Take the wheel, Shoggoth! (Lesswrong is trying out changes to the frontpage algorithm) · 2024-04-26T18:42:57.698Z · LW · GW

Typo? Do you mean "click on Recommended"? I think the answer is no: in order to have recommendations for individuals (and for everyone), they have browsing data.

1) LessWrong itself doesn't aim for a super high degree of infosec. I don't believe our data is sensitive enough to warrant a large security overhead.
2) I trust Recombee with our data about as much as I trust ourselves not to have a security breach. Actually, maybe I could imagine LessWrong being of more interest to someone or some group and getting attacked.

It might help to understand what your specific privacy concerns are.

Comment by Ruby on The Best Textbooks on Every Subject · 2024-04-08T17:56:49.977Z · LW · GW

Hard to answer without knowing your background. I might try online courses or ask ChatGPT for advice here.

Comment by Ruby on Scale Was All We Needed, At First · 2024-03-22T18:07:45.997Z · LW · GW

Curated. It's a funny thing how fiction can sharpen our predictions, at least fiction that aims to be plausible in some world model. Perhaps it's the exercise of playing our models forward in detail rather than making isolated, abstracted predictions. This is a good example. Even if it seems implausible, noting why is interesting. Curating, and I hope to see more of these, built on differing assumptions and reaching different places. Cheers.

Comment by Ruby on Using axis lines for good or evil · 2024-03-19T01:25:03.954Z · LW · GW

Curated. Beyond the object-level arguments about how to do plots, which are pretty interesting, I like this post for the periodic reminder/extra evidence that relatively "minor" details in how information is presented can nudge/bias interpretation and understanding.

I think the claims around bordering lines would become strongly true if there were an established convention, and hold more weakly the way things currently are. Obviously one ought to be conscious, in reading and creating graphs, of whether 0 is included.

Comment by Ruby on My Clients, The Liars · 2024-03-11T03:27:13.708Z · LW · GW

I'd be pretty interested in the non-cartoonish version, also from people who are more competent and savvy.

Comment by Ruby on My Clients, The Liars · 2024-03-08T18:45:26.535Z · LW · GW

For balanced feedback, I enjoyed the choice of diction, and particularly those two words.

Trivia: on racetracks, a "chicane" is a random "unnecessary" kink or twist inserted to make the track more complicated (and more challenging/fun).

Comment by Ruby on Shortform · 2024-03-01T18:53:42.121Z · LW · GW

My understanding is that commitment is saying you won't swerve first in a game of chicken. Pre-commitment is throwing your steering wheel out the window so that there's no way you could swerve even if you changed your mind.

Comment by Ruby on The Pareto Best and the Curse of Doom · 2024-02-26T19:36:47.003Z · LW · GW

Sparsity seems like maybe a relevant keyword.

Comment by Ruby on Shaming with and without naming · 2024-02-24T03:01:59.036Z · LW · GW

I feel like marring the reputation of a person in response to wrongdoing has a very important basic purpose: warning other people about interacting with the wrongdoer, i.e. Sarah Smith is dishonest, so don't trust things she says to be true. This is valuable even in worlds where everyone is already a fixed truth-teller/liar and everybody has fixed values.

Comment by Ruby on The Pareto Best and the Curse of Doom · 2024-02-24T02:28:16.921Z · LW · GW

I like the content/concept here but feel "curse of doom" doesn't communicate the idea very well. This does seem like effectively a curse of dimensionality, though? (Perhaps that's what inspired the name.) Not sure if "Pareto Best and the Curse of Dimensionality" is the right name, but I think it gets at the idea better than a generic "doom".

Comment by Ruby on CFAR Takeaways: Andrew Critch · 2024-02-23T19:19:38.327Z · LW · GW

Curated. This post feels to me like a kind of survey of the mental skills and properties people do/don't have for effectiveness, of which I don't recall any other examples right now, and so it's quite interesting. I think it's interesting both for allowing someone to ask themselves whether they're weak on any of these, and for helping in modeling others and answering questions of the sort "why don't people just X?". For all that we spend a tonne of time interacting with people, people's internal mental lives are private, and so, much like shower habits (I'm told), vary a lot more than externally observable behaviors.

I would like to see the "scope sensitivity" piece fleshed out more. I can see how it applies to eliminating annoyances that take 10 minutes every day and add up, but I don't think that's at the heart of rationality. I'd be curious how much mileage someone gets from just reflecting on their own mind, and how much of that can be done without invoking numeracy.

Comment by Ruby on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-07T18:52:03.966Z · LW · GW

It does, quite a bit! It definitely speeds me up somewhere between 20% and 100%, depending on the task. And I think it's a bigger deal for those now working on code who are newer to it.

Comment by Ruby on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-06T19:00:46.438Z · LW · GW

This is basically what we do, capped by our team capacity. For most of the last ~2 years, we had ~4 people working full-time on LessWrong, plus shared stuff we get from the EA Forum team. In the last few months, we've reallocated people from elsewhere in the org and are at ~6 people, though several are newer to working on code. So, a pretty small startup. Dialogues has been the big focus of late (plus behind-the-scenes performance optimizations and code infrastructure).

All that to say, we could do more with more money and people. If you know skilled developers willing to live in the Berkeley area, please let us know!

Comment by Ruby on Dialogue on the Claim: "OpenAI's Firing of Sam Altman (And Shortly-Subsequent Events) On Net Reduced Existential Risk From AGI" · 2023-11-23T21:09:41.341Z · LW · GW

My intuition (not rigorous) is that there are multiple levels in the consequentialist/deontological/consequentialist dealio.

I believe that unconditional friendship is approximately something one can enter into, but one enters into it for contingent reasons (perhaps in a Newcomb-like way – I'll unconditionally be your friend because I'm betting that you'll unconditionally be my friend). Your ability to credibly enter such relationships (at least in my conception of them) depends on you not starting to be more "conditional" because you doubt that the other person is also being unconditional. This, I think, is related to not being a "fair-weather" friend. I continue to be your friend even when it's not fun (you're sick, need taking care of, whatever) even if I wouldn't have become your friend to do that. And vice versa. Kind of a mutual insurance policy.

The same thing could apply to contracts, agreements, and other collaborations. In a Newcomb-like way, I commit to being honest, being cooperative, etc., to a very high degree, even in the face of doubts about you. (Maybe you stop by the time someone is threatening your family; not sure what Ben et al. think about that.) But the fact that I entered into this commitment was based on the probabilities I assigned to your behavior at the start.

Comment by Ruby on Dialogue on the Claim: "OpenAI's Firing of Sam Altman (And Shortly-Subsequent Events) On Net Reduced Existential Risk From AGI" · 2023-11-23T02:24:27.071Z · LW · GW

I see interesting points on both sides here. Something about how this comment (or comments) is expressed makes me feel uncomfortable, like this isn't the right tone for exploring disagreements about correct moral/cooperative behavior, or at least it makes it a lot harder for me to participate. I think it's something like: it feels like performing moral outrage/indignation in a way that feels more persuadey than explainy, and more in the direction of social pressure and norms-enforcery. The phrase "shame on you" is a particularly clear thing I'll point at that makes me perceive this.

Comment by Ruby on Dialogue on the Claim: "OpenAI's Firing of Sam Altman (And Shortly-Subsequent Events) On Net Reduced Existential Risk From AGI" · 2023-11-23T00:47:47.083Z · LW · GW

I was going to write stuff about integrity, and there's stuff to that, but the thing that is striking me most right now is that the whole effort seemed very incompetent and naive. And that's upsetting.

I am now feeling uncertain about the incompetence and naivety of it. Whether this was the best move possible that failed to work out, or the best move possible that actually did get a good outcome, or a total blunder, is determined by info I don't have.

I have some feeling that they were playing against a higher-level political player, which both makes it hard and also means they needed to account for that. Their own level might be 80th+ percentile in the reference class of executive/board-type people, but still lower than Sam's.

The piece that does seem most like they really made a mistake was trying to appoint an interim CEO (Mira) who didn't want the role. It seems like before doing that, you should be confident the person wants it.

I've seen it raised that the board might find the outcome to be positive (the board stays independent even if current members leave?). If that's true, it does change the evaluation of their competence. It feels hard for me to confidently judge, though my gut sense is that Sam got more of what he wanted/common knowledge of his sway than the others did.

Comment by Ruby on AI Alignment [progress] this Week (11/19/2023) · 2023-11-22T01:41:58.542Z · LW · GW

Styling of the headers in this post is off and makes it harder to read. Maybe the result of a bad copy/paste?

Comment by Ruby on Dialogue on the Claim: "OpenAI's Firing of Sam Altman (And Shortly-Subsequent Events) On Net Reduced Existential Risk From AGI" · 2023-11-21T20:58:52.483Z · LW · GW

These recent events have me thinking the opposite: policy and cooperation approaches to making AI go well are doomed – while many people are starting to take AI risk seriously, not enough are, and those who are worried will fail to restrain those who aren't (where not being worried is a consequence of humans often being quite insane when incentives are at play). The hope lies in somehow developing enough useful AI theory that leading labs adopt it and resultantly build an aligned AI, even though they never believed they were going to cause AGI ruin.

And so maybe let's just get everyone to focus on the technical stuff. That's actually more doable than wrangling other people into not building unsafe stuff.

Comment by Ruby on Dialogue on the Claim: "OpenAI's Firing of Sam Altman (And Shortly-Subsequent Events) On Net Reduced Existential Risk From AGI" · 2023-11-21T20:29:45.007Z · LW · GW

If he can lead an exodus from OpenAI to Microsoft, he can lead one from Microsoft to somewhere else.

Comment by Ruby on Dialogue on the Claim: "OpenAI's Firing of Sam Altman (And Shortly-Subsequent Events) On Net Reduced Existential Risk From AGI" · 2023-11-21T20:28:58.607Z · LW · GW

People associated with EA are likely to decide at some point that the normal rules for the organization do not apply to them, if they expect that they can generate a large enough positive impact in the world by disregarding those rules.

I am myself a consequentialist at my core, but invoking consequentialism to justify breaking commitments, non-cooperation, theft, or whatever else is just a stupid, bad policy (the notion of people doing this generates some strong emotions for me) that, as a policy/algorithm, won't result in accomplishing one's consequentialist goals.

I fear what you say is not wholly inaccurate and is true of at least some in EA, though I hope it's not true of many.

Where it does get tricky is potential unilateral pivotal acts, which I think go in this direction but also feel different from what you describe.

Comment by Ruby on Petrov Day Retrospective, 2023 (re: the most important virtue of Petrov Day & unilaterally promoting it) · 2023-10-02T23:20:11.795Z · LW · GW

You were the first, as you guessed.

Comment by Ruby on Petrov Day Retrospective, 2023 (re: the most important virtue of Petrov Day & unilaterally promoting it) · 2023-09-29T18:43:18.058Z · LW · GW

If I were to do it again, I might include such an option, though I'm still not terribly sad I didn't.

If we really wanted that info, I could sample and survey people who received the message and looked at it (we have data to know this) and ask them why they didn't vote. My guess is that between 1-10% of people who didn't vote because of the frame commented about it, so that's 40 to 400.

372 people have responded by now out of 2500, so 15%. Let's guess that 50% of people have seen it by now, so ~1250 (though we could get better data on this). If so, a third responded, which seems great for a poll. Of the ~800 who saw it but didn't respond, I could see 100-400 not doing so because the frame didn't really seem right (lining up with the above estimate). Which seems fine. I bet if I'd spent 10x the time developing the poll, I wouldn't get that number down much, and knowing it with more precision doesn't really help. It's LW; people are very picky and precise (a virtue...but it also makes having some nice things hard).

Comment by Ruby on Petrov Day Retrospective, 2023 (re: the most important virtue of Petrov Day & unilaterally promoting it) · 2023-09-29T01:54:50.427Z · LW · GW

Sorry, I get the point that the option provided doesn't let you mu/reject the frame. It's just not clear to me that a core framing of Petrov's actions/virtue was conscientious objection or the like.

Beyond that, the survey wasn't aiming to allow people to symbolically act out their responses or to reject the frame in an unambiguous way. Insisting that you get to register that you saw it but didn't like it feels like insisting that you get to participate, but in your own way, rather than simply not engaging if you don't like it. I also feel like if there was an option to conscientiously object and you took it, that'd still be within the frame I created for you to do so? But open to being corrected here.

Comment by Ruby on Petrov Day Retrospective, 2023 (re: the most important virtue of Petrov Day & unilaterally promoting it) · 2023-09-28T21:01:15.176Z · LW · GW

Huh, I'm surprised that happened. I wouldn't have thought you'd get a message given that.

Comment by Ruby on Petrov Day Retrospective, 2023 (re: the most important virtue of Petrov Day & unilaterally promoting it) · 2023-09-28T21:00:46.781Z · LW · GW

I didn't list this in the main post or say it until now because I fear it comes across as defensive in response to criticism, but to model the design for this year, you need to know that we spent vastly less time on it, deciding to do something at the last minute. (We'd been very busy with a massive conference in the days before Petrov Day.)

At 11am (US West Coast time) we started thinking there was something we could maybe do, and at 12pm we got started. I felt we needed to rush if we were to include European folks at all, so I was really looking for something we could get done quickly. As the post mentions, we didn't spend much time on the poll options or trying to design it well; we just wanted something out so half the people wouldn't be completely excluded.

The second message idea wasn't even chosen until about half an hour before I sent it. We basically sent the first message and then worked on figuring out how to build on it, and "next year's will be decided based on this" was an 11th-hour insight. It gives it some stakes without being overwhelming stakes.

Could we have done something better with more time and effort? For sure. But I think this was better than letting Petrov Day pass without any kind of commemoration.

Comment by Ruby on Petrov Day Retrospective, 2023 (re: the most important virtue of Petrov Day & unilaterally promoting it) · 2023-09-28T20:54:40.178Z · LW · GW

I predict that, if you presented the parts I quoted from the survey message to a random sample of the university-educated population, and asked them whether they thought the poll was biased, >50% would say yes

That seems quite plausible to me. My response was that we weren't trying especially hard to avoid bias because we weren't trying to get a super clear result.

And as such, and in keeping with the Petrov Day theme, I maintain that it's important to offer a true "mu" option, or a "do not participate".

Can you elaborate on this? 

Comment by Ruby on Petrov Day Retrospective, 2023 (re: the most important virtue of Petrov Day & unilaterally promoting it) · 2023-09-28T19:50:17.943Z · LW · GW

My metahonesty policy is that I might hoodwink you a little on April Fools', Petrov Day, and similar occasions.

Comment by Ruby on Petrov Day Retrospective, 2023 (re: the most important virtue of Petrov Day & unilaterally promoting it) · 2023-09-28T19:48:04.717Z · LW · GW

I'm sad you didn't like it. It indeed was not a carefully planned rigorous survey of Petrov Day attitudes.

In my thinking, it was more the start of a ~game/exercise than an attempt to maximally model people's attitudes. I wanted to assign people to "teams" (I'd considered random assignment), but this felt a little more meaningful, and there are non-zero bits even in an imperfect survey.

There was no intention to be leading in the responses, nor to corral people toward any particular response.

I actually hoped that the slapdash nature (plus inadvertent bugs/typos) would make people suspicious and get them more into a Petrov Day mood. From other comments, it sounds like this did happen somewhat.

I think if a failure happened here, it's that you and others saw the poll as primarily an attempt to accurately survey LessWrong members' beliefs about Petrov (a pretty reasonable belief), but for me it was the start of something else, and the goal wasn't a "rigorous survey", for which a mu option would have made sense.

I'm uncertain how much we should ever be a little sneaky/misleading for the sake of games/experiments/etc. I'm pro a norm that on April Fools' and Petrov Day and similar, people might hoodwink you a little. At least I might, I will say, as a matter of metahonesty.

Comment by Ruby on Petrov Day Retrospective, 2023 (re: the most important virtue of Petrov Day & unilaterally promoting it) · 2023-09-28T17:59:16.720Z · LW · GW

There is an upside to being the kind of person who will press the button in retaliation. You hope never to, but the fact that you credibly would allows for MAD game theory to apply. (FDT, etc. etc.)

Comment by Ruby on Petrov Day Retrospective, 2023 (re: the most important virtue of Petrov Day & unilaterally promoting it) · 2023-09-28T06:28:17.346Z · LW · GW

Thanks for sharing all of that in such detail, <3 You make me feel quite glad we did this celebration.

Would you like to know which number your click was?

Comment by Ruby on Petrov Day Retrospective, 2023 (re: the most important virtue of Petrov Day & unilaterally promoting it) · 2023-09-28T06:27:55.577Z · LW · GW

Oh, good catch. I had the rows on the denominator sorted wrong so that table was 75% wrong. Fixed now...

Comment by Ruby on Petrov Day Retrospective, 2023 (re: the most important virtue of Petrov Day & unilaterally promoting it) · 2023-09-28T06:10:34.946Z · LW · GW

I think that going with the majority in this case is not honoring your word. You explicitly said "the first to do so out of any minority group".

You make a very good point! I think I should update here. I too have been acting in haste. While in past years we spent quite a significant number of person-days on Petrov Day, this year we've been focused elsewhere, so this post was quickly written too. Fortunately, it gets feedback. Thanks, and I'll update the OP to at least say I'll need to review the decision here.

Comment by Ruby on Petrov Day Retrospective, 2023 (re: the most important virtue of Petrov Day & unilaterally promoting it) · 2023-09-28T04:12:49.116Z · LW · GW

I'm curious to know what your free-form response would be.

Comment by Ruby on Ruby's Public Drafts & Working Notes · 2023-09-27T11:30:48.640Z · LW · GW

Huh, well that's something.

I'm curious, who else got this? And if you did, did you click the link? Why/why not?

Comment by Ruby on Joseph Bloom on choosing AI Alignment over bio, what many aspiring researchers get wrong, and more (interview) · 2023-09-17T23:55:23.361Z · LW · GW

Ah, nope. Oops, we haven't published that one yet but will soon. Will edit for now.

Comment by Ruby on The commenting restrictions on LessWrong seem bad · 2023-09-16T17:20:38.134Z · LW · GW

I think being unable to reply to comments on your own posts is very likely a mistake and we should change that. (Possibly, in the conditions under which we'd think that was warranted, we should issue a ban instead.)

"I'm downvoted because I'm controversial" is a go-to stance for people getting downvoted (and resultantly rate-limited), though in my experience the issue is quality rather than controversy (or rather both in combination).

Overall though, we've been thinking about the rate limit system and its effects. I think there are likely bad effects even if it's successfully reducing low-quality stuff in some cases.

Comment by Ruby on Sharing Information About Nonlinear · 2023-09-10T03:20:45.171Z · LW · GW

I think that if you are a cofounder of an organization and have a front-row seat, then even if you were not directly doing the worst things, I want to hold you culpable for not noticing or intervening.

Comment by Ruby on Sharing Information About Nonlinear · 2023-09-07T20:20:33.463Z · LW · GW

I don't think the post fully conveyed it, but I think the employees were quite afraid of leaving and expected this to get them a lot of backlash or consequences. A particularly salient concern for people early in their EA careers is what kind of reference they'll get.

Think about the situation of leaving your first EA job after a few months. Option 1: say nothing about why you left, have no explanation for leaving early, and don't really get a reference. Option 2: explain why the conditions were bad, and risk the ire of Nonlinear (who are willing to say things like "your career could be over in a couple of DMs"). It's that kind of bind that gets people to keep persisting and hope it'll get better.

Comment by Ruby on How ForumMagnum builds communities of inquiry · 2023-09-05T19:56:12.830Z · LW · GW

There's a single codebase. It's React, and the site is composed out of "components". Most components are shared but can have some switching logic within them that changes behavior. For some things, e.g. the frontpage, each site has its own customized component. There are different "style sheets"/"themes" for each of them. When you run an instance of Forum Magnum, you tell it whether it's a LW instance, an EA Forum instance, etc., and it will run as the selected kind of site.

Coordination happens via Slack, GitHub, and a number of meetings (usually over Zoom/Tuple). Many changes get "forum-gated" so they only apply to one site.
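To illustrate the shape of this, here's a minimal hypothetical sketch (not the actual ForumMagnum code; the component, prop, and type names are made up) of a shared React component that switches behavior based on which forum instance it's running as:

```tsx
import React from "react";

// Hypothetical names for illustration only.
type ForumType = "LessWrong" | "EAForum" | "AlignmentForum";

interface FrontpageHeaderProps {
  // In the real codebase this would come from instance configuration set at
  // startup; here it's passed as a prop to keep the sketch self-contained.
  forumType: ForumType;
}

// A shared component: the same code runs on every site, with switching
// logic inside that changes the rendered output per instance.
export const FrontpageHeader: React.FC<FrontpageHeaderProps> = ({ forumType }) => {
  const title =
    forumType === "EAForum"
      ? "EA Forum"
      : forumType === "AlignmentForum"
      ? "AI Alignment Forum"
      : "LessWrong";

  // A per-site class name is one way a separate theme/stylesheet could hook in.
  return <h1 className={`frontpage-header ${forumType.toLowerCase()}`}>{title}</h1>;
};
```

In this sketch, switching on a single instance-wide setting keeps most components shared, while a fully customized per-site component would simply replace the shared one for that instance.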

Comment by Ruby on How ForumMagnum builds communities of inquiry · 2023-09-05T17:45:10.436Z · LW · GW

LW 1.0 was a fork of the Reddit codebase, I assume because it was available and had many of the desired features. I wasn't there for the decision to build LW 2.0 as a new forum, but I imagine doing so allowed a lot more freedom to build a forum that served the desired purpose in many ways.

how ForumMagnum is really developed

Something in your framing feels a bit off. Think of "ForumMagnum" as an engine and LessWrong and the EA Forum as cars. We're in the business of "building and selling cars", not engines. LW and the EA Forum are sufficiently similar to use the same engine, but there aren't Forum Magnum developers, just "LW developers" and "EAF developers". You can back out an abstracted Forum Magnum philosophy, but it's kind of secondary/derived from the object-level forums. I suppose my point is against treating it as too primary.

Comment by Ruby on How ForumMagnum builds communities of inquiry · 2023-09-04T19:53:42.108Z · LW · GW

Very interesting! You've identified many of the reasons for many of the decisions.

Cmd-enter will submit on LW comment boxes.

Forum Magnum was originally just the LessWrong codebase (built by the LessWrong team that later renamed/expanded into Lightcone Infrastructure), and the EA Forum website for a long while was a synced fork of it. In 2021, we (LW) and the EA Forum decided to have a single codebase with separate branches rather than a fork (in many ways very similar, but it reduced some frictions), and we chose the name Forum Magnum for the shared codebase.

You can see who's contributed to the codebase here: https://github.com/ForumMagnum/ForumMagnum/graphs/contributors

jimrandomh, Raemon, discordious, b0b3rt and darkruby501 are LessWrong devs.
 

Comment by Ruby on Dear Self; we need to talk about ambition · 2023-09-03T04:24:51.864Z · LW · GW

Curated. I like a lot of things about this post, but I particularly like posts that dig out something vaguely like "social" vs "non-social" drives, and how our non-social drives affect the social incentives that we set up for ourselves. I think this is a complicated, tricky topic, and Elizabeth has done a commendable job tackling it for herself, a good example of tackling this head on. It's also just unfortunate that the message of "think for yourself/motivate yourself independent of others' approval" can itself become a hoop of others' approval. I like that this was called out. It's tricky, but perhaps that's just how it needs to be.

Comment by Ruby on Feedbackloop-first Rationality · 2023-08-25T19:40:57.592Z · LW · GW

Curated. There's a lot about Raemon's feedbackloop-first rationality that doesn't sit quite right, that isn't quite how I'd theorize about it, but there's a core here I do like. My model is that "rationality" was something people were much more excited about ~10 years ago, until people updated that AGI was much closer than previously thought. Close enough that, rather than sharpen the axe (perfect the art of human thinking), we'd better just cut the tree now (AI) with what we've got.

I think that might be overall correct, but I'd like it if not everyone forgets about the Art of Human Rationality. And if enough people pile onto the AI Alignment train, I could see it being right to dedicate quite a few of them to the meta-task of generally thinking better.

Something about the ontology here isn't quite how I'd frame it, though I think I could translate it. The theory that connects this back to Sequences rationality is perhaps that feedbackloops are iterated empiricism with intervention. An alternative name might be "engineered empiricism"; basically, this is just one approach to entangling oneself with the territory. That's much less of what Raemon's sketched out, but I think situating feedbackloops within known rationality-theory would help.

I think it's possible this could help with Alignment research, though I'm pessimistic about that unless Alignment researchers are driving the development process, but maybe it could happen and just be slower.

I'd be pretty glad for a world where we had more Raemons and other people, so this could be explored. In general, I like this for keeping alive the genre of "thinking better is possible", a core of LessWrong and something I've pushed to keep alive even as the bulk of the focus is on concrete AI stuff.

Comment by Ruby on Feedbackloop-first Rationality · 2023-08-25T19:17:19.972Z · LW · GW

but I do think it's the most important open problem in the field.

What are the other contenders?

Comment by Ruby on Self-driving car bets · 2023-08-16T18:20:59.467Z · LW · GW

Curated! I'm a real sucker for retrospectives, especially ones reflecting over long periods of time and with detailed reflection on the thought process. Kudos for this one. I'd be curious to see more elaboration on the points behind:

Overall I’ve learned from the last 7 years to put more stock in certain kinds of skeptical priors, but it hasn’t been a huge effect.

Comment by Ruby on [deleted post] 2023-08-16T03:54:17.369Z

It will be the publish date of the republishing.

Comment by Ruby on How do I find all the items on LW that I've *favorited* or upvoted? · 2023-08-08T17:44:30.535Z · LW · GW

For context, EA Forum and LessWrong have approximately the same code and approximately the same features. So thanks to their team for making this useful feature. <3

Comment by Ruby on Accidentally Load Bearing · 2023-07-20T07:53:21.690Z · LW · GW

Curated. I worry a little that this is a bit "insight-porn-y", but Chesterton's Fence is enough of a favorite concept that I appreciate elaboration upon it. It might be the case that "Kaufman's closet" saves me/someone from a grave mistake someday.