AI Safety Info Distillation Fellowship

post by Robert Miles (robert-miles), mwatkins · 2023-02-17T16:16:45.732Z · 3 comments

Contents

  Our theory of change: A single-point-of-access to AI Safety
    TL;DR: directing people to the right parts of the alignment ecosystem requires a lot of hard-to-find knowledge. We’re building a place you can throw a wide range of people at, knowing they'll get what they need, which makes onboarding much easier.
  Apply!
  Other ways you can contribute
3 comments

AI Safety Info (aka Stampy), the large community-written interactive FAQ, is launching a paid three-month fellowship for writers. Up to five fellows, working collaboratively with each other and with Rob Miles, will distil content from around the alignment ecosystem into answers which summarise key ideas and link out to the relevant resources.

Our theory of change: A single-point-of-access to AI Safety

TL;DR: directing people to the right parts of the alignment ecosystem requires a lot of hard-to-find knowledge. We’re building a place you can throw a wide range of people at, knowing they'll get what they need, which makes onboarding much easier.

FAQs are usually somewhere on the spectrum between “too long so you can't find your question” and “too short so it doesn't answer your question”. We bypass this trade-off by using a custom interface, with LM-powered semantic search. The page starts off showing some example questions and, as you click to expand them, related questions pop up, creating a "tab explosion in one page" effect. If a reader has a specific question, they can easily search our collection of answers, or request an answer from our editors if there isn't one already written.
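For readers curious what LM-powered semantic search over an FAQ might look like under the hood, here is a minimal sketch. It assumes the sentence-transformers library and a toy `answers` list standing in for our answer collection; the site's actual model and pipeline may well differ.

```python
# Minimal sketch of semantic search over FAQ answers.
# Assumptions: the sentence-transformers library and the all-MiniLM-L6-v2
# model; the real site's search stack may differ.
from sentence_transformers import SentenceTransformer, util

# Hypothetical snippets standing in for the FAQ's answer collection.
answers = [
    "Instrumental convergence: many goals imply sub-goals like self-preservation.",
    "The orthogonality thesis: intelligence and final goals can vary independently.",
    "Ways to contribute: research, engineering, distillation, and field-building.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
answer_embeddings = model.encode(answers, convert_to_tensor=True)

def search(query: str, top_k: int = 2):
    """Return the FAQ answers most semantically similar to the query."""
    query_embedding = model.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, answer_embeddings, top_k=top_k)[0]
    return [(answers[hit["corpus_id"]], hit["score"]) for hit in hits]

print(search("Why would an AI resist being switched off?"))
```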

This fills a hole in the outreach and onboarding landscape, as a place where a wide range of readers can all be sent the same link and be directed towards the information they need. We aim to cater to people who:

The project also hosts canonical living documents, like an overview of what each organization is working on, and an index of relevant videos. The goal is to be a central nexus for the growing AGI safety information ecosystem, to make it easier to get people to the right place. This will help more people get onboarded well, save people time, and ultimately result in more progress being made.

Distilling content for the site can also become a great on-ramp for aspiring researchers; writing alongside a community of co-learners provides social encouragement, feedback, and the motivation of producing content which will be read by a large audience.

Apply!

Our application process is:

For our first cohort we'll pick up to five people who:

The fellows will receive $2500/month for three months of full-time editing work. Volunteer contributions are always welcome, and will strengthen applications to any future rounds of this program we run if this one is successful.

Other ways you can contribute

Spread the word: link your friends to this post, browse our FAQ, and share it when relevant questions come up in conversation or online. We’re not a comprehensive resource yet, but we do already have a good amount of seed content.

The system was built by a team of volunteers in our free time. We have the core system working, but there are a handful of features which would improve the experience of using the site, and we’d love to have more actively involved developers join the team. We also have ideas for more ambitious expansions of the project, including a unified feed and integration with an alignment-dataset-finetuned LLM for search over all alignment literature, and eventually a conversational agent.
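As a rough illustration of the "LLM search over all alignment literature" idea, here is one possible retrieval-augmented pattern, sketched under assumptions: the corpus is already chunked into passages, embeddings come from the same kind of sentence encoder as the FAQ search sketch above, and the assembled prompt would be handed to whichever finetuned model is eventually chosen. None of this reflects a settled design.

```python
# Sketch of retrieval-augmented search over alignment literature.
# Assumptions: a pre-chunked corpus and sentence-transformers embeddings;
# the call to the finetuned LLM is left abstract because no backend is decided.
from sentence_transformers import SentenceTransformer, util

corpus = [  # hypothetical passages standing in for the alignment dataset
    "Passage from a post on inner alignment ...",
    "Passage from a paper on reward misspecification ...",
    "Passage from a sequence on agent foundations ...",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
corpus_embeddings = encoder.encode(corpus, convert_to_tensor=True)

def build_prompt(question: str, top_k: int = 2) -> str:
    """Retrieve the most relevant passages and assemble an LLM prompt."""
    question_embedding = encoder.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(question_embedding, corpus_embeddings, top_k=top_k)[0]
    context = "\n\n".join(corpus[hit["corpus_id"]] for hit in hits)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

prompt = build_prompt("What is inner alignment?")
# `prompt` would then be sent to the alignment-dataset-finetuned LLM.
```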

Thanks to Rick Schwall from Saving Humanity from Homo Sapiens for reaching out and funding this program!

3 comments


comment by DragonGod · 2023-02-20T13:27:55.126Z

I'll be starting on my application soon.

comment by trevor (TrevorWiesinger) · 2023-02-17T21:40:38.189Z

Was this inspired in part by the AI Safety Arguments contest and the AI safety public materials contest? Were those contests deemed successes?

comment by plex (ete) · 2023-02-17T23:52:55.137Z

Not inspired by them, no. As far as I'm aware, those contests did not have a clear outlet for use of their outputs. We have a whole platform we've been building towards for three years (we started on the FAQ long before those contests), and, thanks to Rob Miles, the ability to point large numbers of people at that platform once it has great content.