Stampy's AI Safety Info - New Distillations #1 [March 2023]

post by markov (markovial) · 2023-04-07T11:06:39.413Z · LW · GW · 0 comments

This is a link post for https://aisafety.info/?state=8IHO_6714_5864_8486_89LM_89LK_8392_8G1I_8C7S_8E3Z_6412_897J_7616_8326_7715_6478_6320_86WT_8AF1_89LL_6984_1001_87AG_


This is our first check-in from the AI Safety Info distillation fellowship [LW · GW]. We are working on distilling all manner of content to help make it easier for people to engage with AI Safety and x-risk topics. Here is the AI Safety Info website (and its more playful clone Stampy).

If you have a question in mind but didn't find a clear answer for it, type it into the search box and request an answer. It can be literally any question about AI safety, no matter how basic or advanced. It could also be a convoluted topic that you think could do with some distillation. One of the distillers will get to work on answering it.

Additionally, if you see content that you feel could be clearer or that contains mistakes, you can leave a comment on the underlying document by clicking the edit button at the bottom right of the answer.

We plan to make this post a regular series in which we share some of the questions that have been answered or topics that have been distilled recently. Since this is the first installment, the following is a longer list, with links to answers, as a sample of the questions we answered in the last month:

This is crossposted to the EA Forum: https://forum.effectivealtruism.org/posts/exgGArFuMsvKa3CAT/new-distillations-on-stampy-s-ai-safety-info-expansive [EA · GW]
