Anki deck for learning the main AI safety orgs, projects, and programs
post by Bryce Robertson (bryceerobertson) · 2023-09-30T16:06:33.045Z
Having a high-level overview of the AI safety ecosystem seems like a good thing, so I’ve created an Anki deck to help people familiarise themselves with the 167 key organisations, projects, and programs currently active in the field.
Why Anki?
Anki is a flashcard app that uses spaced repetition to help you efficiently remember information over the long term. It’s useful for learning and memorising all kinds of things – it was the main tool I used to learn German within a year – and in the time I’ve been testing this deck, I feel it has already improved my grasp of the AI safety landscape.
What you’ll learn
The deck is based on data from the AI Existential Safety Map run by AED – if you’re not familiar with them, you’ll learn who they are in this deck.
Each card includes:
- The org’s full name
- Its nickname/acronym (where applicable)
- Its logo
- A brief description of what it does
- A link to its website for further info (accessed through the ‘Edit’ button for that card)
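For anyone who wants to tinker with or extend the deck programmatically, the field layout above can be thought of as a simple record per card. The sketch below is purely illustrative – the field names and the example entry are my own, not necessarily how the actual deck is structured internally:

```python
from dataclasses import dataclass

@dataclass
class OrgCard:
    """One flashcard in an AI-safety-orgs deck (hypothetical field names)."""
    full_name: str      # the org's full name
    acronym: str        # nickname/acronym, empty string if none
    logo_filename: str  # image file bundled with the deck
    description: str    # one- or two-sentence summary
    website: str        # link for further info

# Illustrative example only:
card = OrgCard(
    full_name="Machine Intelligence Research Institute",
    acronym="MIRI",
    logo_filename="miri.png",
    description="A research nonprofit focused on the AI alignment problem.",
    website="https://intelligence.org",
)
```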
How to access
You can download the deck here.
Accuracy and feedback
Given the difficulty of summarising an entire org/project in one or two sentences, the descriptions are necessarily reductive. They aim to capture the essence of each entity but may not fully convey the breadth or nuance of its work. If you’d like a more comprehensive understanding of a particular org, I encourage you to visit the link included on its card.
That being said, if you think any content should be modified, please comment below, along with any problems or suggestions for the deck in general.
If the general feedback is that this seems to be useful to people, then I may in the future create one covering the most prominent people in AI safety as well.
Thank you to @George Vii [EA · GW] for testing out the deck in advance, and credit to all the volunteers who have contributed to the AI Existential Safety Map. This project was completed while a grantee of CEEALAR.