post by [deleted]



Comments sorted by top scores.

comment by Mitchell_Porter · 2023-02-14T05:28:50.179Z · LW(p) · GW(p)

Nice to see someone who wants to directly tackle the big problem. Also nice to see someone who appreciates June Ku's work. 

comment by Roman Leventov · 2023-05-06T15:00:37.804Z · LW(p) · GW(p)

The core motivation for formal alignment, for me, is that a working solution is at least eventually aligned: there is an objective answer to the question "will maximizing this with arbitrary capabilities produce desirable outcomes?", and that answer does not depend, in the limit, on what is doing the maximization.

I don't know about other proposals because I'm not familiar with them, but Metaethical AI actually describes the machinery of the agent, hence "the answer" does depend on "what is doing the maximization".