What success looks like

post by Marius Hobbhahn (marius-hobbhahn), MaxRa, JasperGeh, Yannick_Muehlhaeuser · 2022-06-28T14:38:42.758Z · LW · GW · 4 comments

This is a link post for https://forum.effectivealtruism.org/posts/AuRBKFnjABa6c6GzC/what-success-looks-like

TL;DR: We wrote a post on possible success stories of a transition to TAI to better understand which factors causally reduce AI risk. We also explain each of these catalysts for success in more detail, so this post can be thought of as a high-level overview of different AI governance strategies.

Summary

Thinking through scenarios where TAI goes well informs our goals regarding AI safety and leads to concrete action plans. Thus, in this post,

4 comments

Comments sorted by top scores.

comment by shminux · 2022-06-28T20:37:22.997Z · LW(p) · GW(p)

What's a TAI? There is no definition of this acronym anywhere in this post or in the link, and Google brings up 3 different but apparently unrelated hits: threats in AI, IEEE Transactions on AI, and... Tentacular AI. I hope it's that last one.

Replies from: Erich_Grunewald
comment by Erich_Grunewald · 2022-06-28T21:49:58.537Z · LW(p) · GW(p)

I think usually Transformative AI.

Replies from: shminux
comment by shminux · 2022-06-28T21:52:15.842Z · LW(p) · GW(p)

Thanks :) 

comment by Noosphere89 (sharmake-farah) · 2022-06-28T20:41:34.314Z · LW(p) · GW(p)

My mainline best case or median-optimistic scenario is basically a partial version of number 1, where aligning AI is somewhat easier than it is today, plus an acceleration of transhumanism and a multipolar world that both dissolve boundaries between species and the human-AI divide; thus, by the end of the Singularity, things are extremely weird and deaths are in the millions or tens of millions due to wars.