AI alignment researchers may have a comparative advantage in reducing s-risks

post by Lukas_Gloor · 2023-02-15T13:01:50.799Z · LW · GW · 1 comment

Comments

comment by Martín Soto (martinsq) · 2023-02-16T03:44:09.435Z · LW(p) · GW(p)

Thank you for this post! It neatly articulates some of my recent worries about s-risks going unnoticed (or at least, not being acted on). Your exposition also encourages me to test my fit for macro-strategy work in the future.