What are all the AI Alignment and AI Safety Communication Hubs?
post by Gunnar_Zarncke · 2022-06-15T16:16:03.241Z · LW · GW · 2 comments
This is a question post.
For effective work on complex topics, multiple people need to work together[citation needed].
For AI Alignment, LessWrong and its sister site, the Alignment Forum, are arguably the Schelling points. But there are other platforms that, for one reason or another - language, culture, size, personal preference - are better suited for individual contributors.
In Being an individual alignment grantmaker [LW · GW], the following platforms were mentioned:
Next, I increased my surface area with places which might have good giving opportunities by involving myself with many parts of the movement. This includes Rob Miles’s Discord, AI Safety Support’s Slack [named "AI alignment"], in-person communities, EleutherAI, and the LW/EA investing Discord, where there are high concentrations of relevant people, and exploring my non-LW social networks for promising people.
Other such hubs that I know of (all only with tangential AI safety focus):
- Astral Codex Ten - Scott Alexander discusses alignment sometimes, too; has a lively comment section on Substack; also: ACX Discord, Subreddit
- Rationality Berlin - Slack of the Berlin LessWrong community; maybe 30 members; one AI Alignment effort; meditation, dojos, a monthly meetup; organizes the yearly European LessWrong Community Weekend
- Google Groups
- bayarealesswrong - active
- less-wrong-parents - inactive
- lesswrong-hamburg - the group of my meetup in Hamburg; very low activity
- probably many other regional groups
- Bountied Rationality - a Facebook group that serves as a marketplace for small tasks in the community
Many channels require an invite. If you post additional hubs, please mention whether an invite is needed and how one might get it.
Answers
- There's also the new Alignment Ecosystem Slack, but that's currently invite-only. From the tag [? · GW]: "If you'd like to join, message plex [LW · GW] with an overview of your involvement."
- I found a great designer/programmer for one of my alignment projects on the EA Creatives and Communicators Slack.
- Impact Markets is somewhat relevant.
I am getting ready to help launch two more in the next couple of weeks: one for alignment grantmakers (gated to people who can verify they've directed $10k+ towards alignment), and one for software engineers who want to help with alignment. They're not polished yet, so they're not ready for a big announcement, but feel free to turn up early if you're sold on the idea already.
(also, neat, I got cited!)
A Matrix space for the community: https://matrix.to/#/#ai-safety:matrix.org
Others that I find worth mentioning are channels tied to opportunities for getting started in AI Safety. I know both AGI Safety Fundamentals and AI Safety Camp have Slack channels for participants. An invitation is needed, and you probably need to be a participant to get invited.
There is also an 80,000 Hours Google group for technical AI safety. An invitation is needed; I can't find that they've broadcast how to get in, so I won't share it. But they mention it on their website, so I guess it is okay to include it here.
I've also heard about research groups in AI safety having their own Discords and Slack channels. In those cases, to get invited you should probably contact someone at the specific place and show that you have an interest in their research. I keep it vague because, again, I don't know how public their existence is.
2 comments
comment by cometthecat · 2022-06-16T00:22:07.530Z · LW(p) · GW(p)
I will mention that there is a 'Control Problem' subreddit; it's not exactly high-level discussion, but it does cross-post a lot of good information from time to time: https://www.reddit.com/r/ControlProblem/
Replies from: joel-burget
comment by Joel Burget (joel-burget) · 2022-06-16T00:31:21.591Z · LW(p) · GW(p)
Gwern often posts to https://www.reddit.com/r/mlscaling/ as well.