Drake Thomas's Shortform

post by Drake Thomas (RavenclawPrefect) · 2024-10-23T16:49:30.979Z · LW · GW · 4 comments

4 comments

Comments sorted by top scores.

comment by Drake Thomas (RavenclawPrefect) · 2024-10-23T16:49:31.357Z · LW(p) · GW(p)

I work on a capabilities team at Anthropic, and I've spent (and continue to spend) a while thinking about whether that's good for the world and what kinds of observations could update me up or down about it. This is an open offer to chat with anyone else trying to figure out questions of working on capability-advancing work at a frontier lab! I can be reached at "graham's number is big" sans spaces at gmail.

Replies from: ryan_greenblatt
comment by ryan_greenblatt · 2024-10-23T18:06:10.986Z · LW(p) · GW(p)

Isn't the most relevant question whether it is the best choice for you? (Taking into account your objectives, which are (mostly?) altruistic.)

I'd guess having you work on capabilities at Anthropic is net good for the world[1], but probably isn't your best choice in the long run and plausibly isn't your best choice right now. (I don't have a good understanding of your alternatives.)

My current view is that working on capabilities at Anthropic is a good idea for a mostly altruistically motivated person if and only if that person has a strong comparative advantage at doing capabilities at Anthropic relative to other similarly altruistically motivated people. (Maybe if they are in the top 20% or 10% of comparative advantage among this group of people.)


  1. Because I think Anthropic being more powerful/successful is good, the experience you'd gain is good, and the influence is net positive. And these factors are larger than the negative externalities of advancing AI for other actors. ↩︎

Replies from: Raemon, akash-wasil
comment by Raemon · 2024-10-23T18:31:06.269Z · LW(p) · GW(p)

The way I'd think about this: You should have at least 3 good plans for what you would do that you really believe in, and at least one of them should be significantly different from what you are currently doing. I find this really valuable for avoiding accidental inertia, motivated reasoning, or just regular ol' tunnel vision.

I remain fairly confused about Anthropic despite having thought about it a lot, but in my experience "have two alternate plans you really believe in" is a sort of necessary step for thinking clearly about one's mainline plan.

comment by Akash (akash-wasil) · 2024-10-23T18:27:46.757Z · LW(p) · GW(p)

@Drake Thomas [LW · GW] are you interested in talking about other opportunities that might be better for the world than your current position (and meet other preferences of yours)? Or are you primarily interested in the "is my current position net positive or net negative for the world" question?