post by [deleted] · GW

This is a link post for


Comments sorted by top scores.

comment by hamnox · 2022-07-12T19:06:37.358Z · LW(p) · GW(p)

I see it as morally wrong to create an AGI at our current stage of human development

This... this statement really bugs me. It seems to be floating free. What does its being morally right or wrong cash out to, in terms of anticipations? Being morally wrong wouldn't stop it from happening! It wouldn't stop AI from having terrible effects if pursued unsafely!

I wish you luck but don't see an easy entry point. I've been struggling to create one for a while. Reading Rationality A-Z all the way through has historically worked to some degree, but very inefficiently.

Replies from: humm1lity
comment by Caerulea-Lawrence (humm1lity) · 2022-07-12T19:55:57.085Z · LW(p) · GW(p)

Thanks for expressing that. I'll let the question stand, but I also have a direction forward, so it is not so important at the moment.

I was not aware that AGIs would emerge by themselves. From my POV they are made, so that sentence is based on that. And since that sentence is in itself a hard one, I'll refrain from elaborating on it.

Thanks for the luck-wishing. I am writing a post now that I aim to make more logical, so if this is my starting point, I hope I'll get somewhere where you do not have to go through the whole list; it might even be something you can read with only a subtle scowl. :)