Counterarguments to Core AI X-Risk Stories?
post by DavidW (david-wheaton) · 2023-03-11T17:55:19.309Z · LW · GW
This is a question post.
I've added a tag [? · GW] for object-level AI risk skepticism arguments. I've included my own post about deceptive alignment [LW · GW] and Katja Grace's post about AI X-risk counterarguments [LW · GW]. What other arguments should be tagged?
Answers
answer by Quintin Pope · 2023-03-21T00:19:53.879Z · LW(p) · GW(p)
I just finished writing a post: My Objections to "We're All Gonna Die with Eliezer Yudkowsky" [LW · GW].
answer by JakubK (Jakub Kraus) · 2023-04-22T22:59:42.187Z · LW(p) · GW(p)
Here's a list of arguments that AI safety is less important than commonly claimed, although some of them are not object-level.