A discussion of AI risk and the cost/benefit calculation of stopping or pausing AI development
post by DuncanFowler · 2024-03-11T21:41:09.934Z · LW · GW
I've been interested in AI risk for a very long time, but I've never written much about it because I always felt other people were better at discussing the situation.
I have begun to revise this opinion, and decided to write down my basic points of disagreement. The result ran on rather longer than I expected and now stands at 18,000 words, and I still feel that many of the points I make could be expanded upon.
I attach a link below.
I try to make the main points that:
a) The current dominant paradigm of AI is likely not to be hostile by default
b) While there are many things that could go wrong, there are also many possible solutions
c) Multipolar scenarios are probably more stable than everyone here seems to think
d) Any pause, whether for a few years or indefinitely, probably makes things worse because our leaders are not capable of organising it
And I conclude that our chances of survival are over 50% (not counting anthropics, aliens saving us, etc.) and that we should not pause. I do not mean this to promote complacency.
Anyway, I hope this gives people a modicum of hope. Please let me know if you think this is a waste of time, whether it makes points others have already made better, or whether it is of any value.
https://docs.google.com/document/d/e/2PACX-1vRHYihtx2QUXH54RXNJRN2q7jjhwAcBqhPAhzJhjmw3Yv546vsXfv3pvNM9NUIoybpM-IgplyQWelCW/pub