Comments sorted by top scores.
comment by MiguelDev (whitehatStoic) · 2023-03-23T23:23:04.450Z · LW(p) · GW(p)
Hello Moderators/Readers,
I am curious as to why the post was downvoted. I would appreciate an explanation so I can improve my writing moving forward. My aim is to help solve the alignment problem. Thank you.
↑ comment by Noosphere89 (sharmake-farah) · 2023-03-24T12:12:21.749Z · LW(p) · GW(p)
I downvoted the post because the demand to align it with humanity is fundamentally incoherent, a fake option: you can't do this, because abstractions like humanity or society don't exist. Aligning it to a specific human's intention is better, as that can actually be done.
↑ comment by TAG · 2023-03-24T00:40:00.574Z · LW(p) · GW(p)
- Alignment with humanity isn't a simpler problem than alignment with a subset of humanity.
- Alignment with human values in general is what a lot of people already mean by alignment, so the post lacks originality.
↑ comment by MiguelDev (whitehatStoic) · 2023-03-24T00:52:57.930Z · LW(p) · GW(p)
Thank you for your response.
I understand your 2nd point, but to comment on your 1st: is the simpler question always the right one to focus on? Isn't searching for the right questions to ponder the best way to arrive at the best solutions?
↑ comment by TAG · 2023-03-24T01:05:04.657Z · LW(p) · GW(p)
It depends on whether you're being academic or practical.
↑ comment by MiguelDev (whitehatStoic) · 2023-03-24T01:23:26.871Z · LW(p) · GW(p)
Subpar questions lead to incomplete or wrong answers. If it turns out that we have framed the alignment problem wrongly, the cost could be huge, even catastrophic. It's still cheaper to question even the best ideas now than to change direction or correct errors later.
↑ comment by TAG · 2023-03-24T01:33:36.521Z · LW(p) · GW(p)
Again, you are not suggesting something new; you are suggesting a standard answer which no one has the faintest idea how to implement.
↑ comment by MiguelDev (whitehatStoic) · 2023-03-24T01:38:26.502Z · LW(p) · GW(p)
I'm in the process of writing it. Will link it here once finished. Thanks for being more direct too.
comment by LVSN · 2023-03-23T17:18:53.832Z · LW(p) · GW(p)
If AI copied all human body layouts down to the subatomic level, then re-engineered all human bodies so they were no longer recognizably human but rather something human-objectively superior, then gave all former humans the option to change back to their original forms, would this have been a good thing to do?
I think so!
It has been warned in ominous tones that "nothing human survives into the far future."
I'm not sure human-objectivity permits humanity to remain mostly-recognizably human, but it does require that former humans have the freedom to change back if they wish, and I'm sure that many would, and that would satisfy the criterion of something human surviving the far future.
↑ comment by baturinsky · 2023-03-24T04:39:42.850Z · LW(p) · GW(p)
The decision made by such a creature would not be a decision of the human it was made in the image of.
Also, there is no objective measure of superiority.
↑ comment by MiguelDev (whitehatStoic) · 2023-03-23T17:52:40.099Z · LW(p) · GW(p)
I'm sorry, I have no way to answer your question. I just hope that in the future we do.