Could we use recommender systems to figure out human values?

post by Olga Babeeva (olga-babeeva) · 2020-10-20T21:35:59.996Z

This is a question post.


Would it make sense to try to figure out human values with recommender systems? Why or why not? How could this be done?

Answers

answer by Cedar (Xida Ren) · 2022-03-10T21:55:56.798Z

From what I've seen so far, recommender systems are amazingly good at figuring out what we want in the short term and giving it to us. But that is often misaligned with what we want in the longer term. E.g. I have a YouTube Shorts addiction that's ruining my productivity (yay!). So my answer for now is NOPE, unless we do something special.

I'm assuming that by "human values" you mean what we want for ourselves in the long term. But I would love it if you could elaborate on what exactly you meant by that.

comment by tamgent · 2022-08-24T19:39:06.459Z

Agree.

Human values are very complex, and most recommender systems don't even try to model them. Instead, most of them optimise for proxies like 'engagement', which they claim is aligned with a user's 'revealed preferences'. This notion of 'revealed preference' is a far cry from true preferences (which are very complex), let alone human values (which are more complex still). I recommend this article for an introduction to some of the issues here: https://medium.com/understanding-recommenders/what-does-it-mean-to-give-someone-what-they-want-the-nature-of-preferences-in-recommender-systems-82b5a1559157
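To make the gap concrete, here is a minimal toy sketch in Python. All item names and scores are hypothetical, invented for illustration; no real system works from a four-item table like this. The point is that two recommenders using the same ranking machinery but different objectives, one maximising a predicted-engagement proxy and one maximising the rating the user would give on reflection, pick different items from the same catalogue.

```python
# Hypothetical catalogue: each item has a predicted engagement score
# (e.g. click/watch probability) and a rating the user would give it
# on reflection. All values here are made up for illustration.
items = {
    "short_form_video": {"engagement": 0.95, "reflective_rating": 0.2},
    "long_tutorial":    {"engagement": 0.40, "reflective_rating": 0.9},
    "news_clickbait":   {"engagement": 0.85, "reflective_rating": 0.1},
    "documentary":      {"engagement": 0.35, "reflective_rating": 0.8},
}

def recommend(items, objective):
    """Rank items by the given objective and return the top pick."""
    return max(items, key=lambda name: items[name][objective])

# An engagement-optimising recommender and a (hypothetical)
# values-aware one diverge on the same catalogue.
print(recommend(items, "engagement"))         # -> short_form_video
print(recommend(items, "reflective_rating"))  # -> long_tutorial
```

The divergence comes entirely from the choice of objective, not the ranking mechanism, which is why 'optimising for revealed preference' tells you little about whether human values are being served.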
