Is there any serious attempt to create a system to figure out the CEV of humanity and if not, why haven't we started yet?

post by Jonas Hallgren · 2021-02-25T22:06:04.695Z · LW · GW · No comments

This is a question post.

Hey, fellow people. I'm fairly new to LessWrong, and if this question is irrelevant I apologise for that. However, I was wondering: have any serious attempts to create a system for mapping out the CEV of humanity been started yet?

Since CEV is more of a democratic process than any other alignment proposal I can think of, it would make sense to create a database for a future AGI to calibrate itself on. If we were to build that database now, we could explore through trial and error whether it suits our needs (for example, by testing whether it predicts moral decisions), which should give us a more functional alignment system than we could otherwise get. As a follow-up, if we were to create a central authority for building this database, that authority could also become a central alignment-checking facility, which might help us avoid potential misalignment disasters. A rough sketch of what such a prediction test could look like is below.
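To make the "trial and error" idea a bit more concrete, here is a minimal, purely hypothetical sketch (in Python, using scikit-learn) of one way "testing whether the database predicts moral decisions" could work: fit a simple model on part of a collected dataset of judgments and check how well it predicts held-out judgments. The dilemmas, labels, and model choice are all invented for illustration; a real project would need carefully elicited responses from many people and far better models.

```python
# Hypothetical sketch only: all data below is fabricated for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Toy entries: a described dilemma and the judgment respondents endorsed.
dilemmas = [
    "divert the trolley to save five people at the cost of one",
    "lie to protect a friend from embarrassment",
    "break a promise to help a stranger in urgent need",
    "keep found money instead of reporting it",
]
judgments = [1, 0, 1, 0]  # 1 = endorsed, 0 = rejected (made-up labels)

# Hold out some dilemmas to test whether the model generalizes at all.
X_train, X_test, y_train, y_test = train_test_split(
    dilemmas, judgments, test_size=0.5, random_state=0, stratify=judgments
)

# Fit a very simple text classifier on the training judgments.
vectorizer = TfidfVectorizer()
model = LogisticRegression()
model.fit(vectorizer.fit_transform(X_train), y_train)

# Agreement with held-out judgments is the crude "calibration" metric here.
predictions = model.predict(vectorizer.transform(X_test))
print("held-out agreement:", accuracy_score(y_test, predictions))
```

The point isn't the particular model; it's that a shared database of elicited judgments gives us something to iterate against before an AGI ever has to rely on it.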

These seem to me like fairly clear reasons to start such a project, which is why I'm wondering whether any such plans exist.

Thank you for your time.

Answers

answer by Gordon Seidoh Worley (G Gordon Worley III) · 2021-02-26T14:56:21.179Z · LW(p) · GW(p)

I think your question sort of misunderstands the CEV proposal. It's something aligned AI might produce, not something we would personally work towards creating. Yes, we might keep CEV in mind when figuring out how to build aligned AI, but it's not something we can aim for directly, else we would Goodhart [? · GW] ourselves into existential catastrophe at worst or astronomical waste [? · GW] at best.

answer by plex · 2021-02-26T15:36:07.848Z · LW(p) · GW(p)

I think a slightly more general version of this question, referring to human values rather than specifically CEV, is maybe a fairly important point.

If we want a system to fulfill our best wishes, it needs to learn what those wishes are from its models of us, and if too few of us spend time trying to work out what we want in an ideal world, then the dataset it's working from will be impoverished, perhaps to the point of causing problems.

I think addressing this is less pressing than other parts of the alignment problem, because it's plausible that we can punt it to after the intelligence explosion [LW · GW], but it would maybe be nice to have some project started to collect information about idealized human values.

No comments
