Stampy's AI Safety Info - New Distillations #2 [April 2023]

post by markov (markovial) · 2023-05-09T13:31:48.625Z · LW · GW · 1 comment

This is a link post for https://aisafety.info/?state=8QZH_8222_7626_8EL6_7749_8XBK_8XV7_8PYW_89ZU_7729_6412_6920_8G1G_7580_8H0O_7632_7772_6350_8C7T_8HIA_8IZE_7612_8EL9_89ZQ_8PYV_

Hey! This is another update from the distillers at the AI Safety Info website (and its more playful clone, Stampy).

Here are a couple of the answers that we wrote up over the last month (April 2023). As always, let us know if there are any questions you would like to see answered.

The list below links to individual answers, while the collective URL above renders all of the answers on a single page.

Crossposted to the Effective Altruism Forum: https://forum.effectivealtruism.org/posts/FyfBJJJAZAdpfE9MX/stampy-s-ai-safety-info-new-distillations-2 [EA · GW]

1 comment

comment by Viliam · 2023-05-11T11:06:07.760Z · LW(p) · GW(p)

Great work!

I would also like to see some ELI5 examples; currently the text feels too abstract. For example, I clicked on the "outer alignment" page. I think it would be better if it contained 3-5 simple examples.