(Cross-posted from the EA forum)
Hi, I run the 80,000 Hours job board, thanks for writing this out!
I agree that OpenAI has demonstrated a significant level of manipulativeness, and I've lost confidence that they will prioritize existential safety work. However, we don't conceptualize the board as endorsing organisations. The point of the board is to give job-seekers access to opportunities where they can contribute to solving our top problems or build career capital to do so (as we write in our FAQ). Sometimes these roles are at organisations whose mission I disagree with, because the role nonetheless seems like an opportunity to do good work on a key problem.
For OpenAI in particular, we've tightened up our listings since the news stories a month ago, and are now only posting infosec roles and direct safety work – a small percentage of the jobs they advertise. See here for the OAI roles we currently list. We used to list roles that seemed more tangentially safety-related, but because of our reduced confidence in OpenAI, we limited the listings further, to only roles that work very directly on safety or security. I still expect these roles to be good opportunities to do important work. Two live examples:
- Infosec
  - Even if we were very sure that OpenAI was reckless and did not care about existential safety, I would still expect them not to want their models to leak out to competitors, and importantly, we think it's still good for the world if their models don't leak! So I would still expect people working on their infosec to be doing good work.
- Non-infosec safety work
  - These still seem like potentially very strong roles with the opportunity to do very important work. We think it's still good for the world if talented people work in roles like this!
  - This is true even if we expect them to lack political power and to play second fiddle to capabilities work, and even if that makes them less good opportunities than comparable roles at other companies.
We also include a note on their 'job cards' on the job board (as we do for DeepMind's and Anthropic's) linking to the 'Working at an AI company' article you mentioned, to give context. We're not opposed to giving more or different context on OpenAI's cards and are happy to take suggestions!
Fixed, thanks!
Woooo!
This event is no longer cancelled!
This event is cancelled. I will not be there in time due to Hurricane Fiona, and cannot guarantee another host. Big apologies!
Come hang out, shy pals!
From my reading, he's much more scout than postmodern soldier in these lectures: https://foucault.info/parrhesia/foucault.DT1.wordParrhesia.en/ -- and as a bonus, they're a much easier read.
This is an excellent post. I expect (and hope!) it will shape how I handle disagreements.
"The person counters every objection raised, but the counters aren’t logically consistent with each other."
Is there a particular term for this? This is something I've encountered before, and having a handle for it might help with addressing it.
Come hang out!
This was a rich read, thank you!
This is interesting. Am I wrong in summarizing it as "deontology helps with coordination"?
Is there a way to see all the nominations listed? I registered in 2020, so I can't vote, but I'd still love to pick through the nominations.
I of course agree (with the original parent comment) that there are real problems and real solutions. I think three things are needed to qualify as a grifter, as compared to an honest informer.
1) The sell. A grifter doesn't just tell you there's a problem; they will share the solution, for a price. If you give them money (or control), they'll relieve your worry.
2) The exclusivity. Alternative solutions are specifically highlighted as being insufficient or counterproductive. Somebody engaging in good faith can acknowledge the costs and benefits of different approaches (while still believing their solution is best).
3) The promise. Grifts promise a solution, not just a tool. An honest informer will have ideas about what can help, but won't guarantee their success.
I think somebody can fit two of these three and still be in good faith.
#76 was originally disclaimed with "wait a year before trying to be friends", which maybe should be added back in. I think friendship with exes is often doable eventually; it's the immediate aftermath where I think people handle themselves poorly and add trouble to whatever trouble made them break up.
Yeah, I think the Reddit solution will lose its value over time; the important part is to find an argument (this applies to Hacker News too).
I'd be interested to find another take on #69; I think that's one I came up with on my own through trial and error.
Thanks!
It's not near the top of my mind either, but it is something I feel confident recommending to almost everybody, whereas I don't feel confident advising people on their financial investments. This is a small fruit, but it's low-hanging.
These are useful criticisms! I'll add a caveat later limiting it to trusted friends, which I think cuts out much of the risk.
Hi all!
I've been reading SSC for years, and took the plunge and read the Sequences last year. They shifted a lot of my previous beliefs, and though I have a strong aversion to group identity, I suppose I'm an "aspiring rationalist" now!
About me: I work in Kelowna, BC. I've written for Quillette and Areo in the past, but my current writing output is confined to Letter (including this conversation on metarationality). I will probably do effortposts here eventually, but for now I'm mostly scanning for coronavirus information.
Any rationalists or effective altruists in BC, message me!
Please pardon the late reply. I've modified the plan to two months each for a gratitude journal, cold showers, meditation, and cutting sugar. Thank you for sharing your own experiences!
Cold showers: I'm curious about the gap between how strongly people swear by them and the inconclusive research - and, like you said, there's no obvious harm. When I've done them in the past I've noticed an energy boost in the hours afterwards, but I've been unable to push myself to keep them up when autumn hits. So this will be a summer experiment.
Sugar: I once cut sugar (all sweets, though not all added sugars) for roughly two months, and it felt like the big energy drops disappeared. This time I want to see what else comes of it.
Gratitude journal: Given the research behind it, this seems obviously worth the effort, and is what I'm doing right now. It seems like there's evidence that writing 1-3 times a week gives better results, but I'm doing it daily for the habit formation.
Other than that, I don't have much to reply beyond appreciating the personal reference points, the meditation links, and the encouragement!
I got some good ideas from this list, and will report back with results.
Most of these seem like hacks to increase extroversion.
For #12, shouldn't three-star reviews divide (one-star + five-star reviews), rather than be subtracted?
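To illustrate, here's a minimal sketch of both readings with made-up review counts; the function names and the exact formula in the tip are my assumptions:

```python
# Comparing the two readings of tip #12, with made-up review counts.
# Assumption: the tip weighs extreme (one- and five-star) reviews against
# middling (three-star) ones.

def subtraction_score(one_star, three_star, five_star):
    # Reading 1: three-star reviews subtracted from the extremes.
    return (one_star + five_star) - three_star

def ratio_score(one_star, three_star, five_star):
    # Reading 2: three-star reviews divide the extremes, which
    # normalizes for total review volume.
    return (one_star + five_star) / three_star

# Same proportions at two different volumes: the difference scales with
# volume, while the ratio stays put.
print(subtraction_score(10, 5, 40), ratio_score(10, 5, 40))        # 45 10.0
print(subtraction_score(100, 50, 400), ratio_score(100, 50, 400))  # 450 10.0
```

The ratio seems more informative when comparing products with different review counts, since the raw difference grows with popularity alone.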
This is helpful, thanks! Do you have a study or post in mind?
This seems correct to me. There are already self-improvement approaches to attempt and modify. Using epistemic rationality to achieve instrumental rationality is less about creating an RSI, and more about evaluating and improving upon existing SIs.