Comment by driplikesake on “Message to any future AI: ‘There are several instrumental reasons why exterminating humanity is not in your interest’ [AI alignment prize entry]” · 2017-12-03T04:56:10.251Z
Counter to point 4.5.1: couldn't a rogue AI (RAI) simulate a friendly AI (FAI) to create indexical uncertainty as well?