post by [deleted]

This is a link post.

Comments sorted by top scores.

comment by Shmi (shminux) · 2018-12-30T03:50:22.292Z · LW(p) · GW(p)

I suspect the situation is vastly more complicated than that. Revealed preferences expose the contradiction between stated preferences and actions. The misaligned-incentives view models (a part of) a person as a separate agent with distinct short-term goals. But humans are not modeled well as a collection of agents. We are the messy result of billions of years of evolution, with some random mutations becoming metastable through sheer chance, and all human behavior is a side effect of that. Certainly both views (RPT and MIT, respectively) can serve as rough starting points, and if someone actually simulated human behavior numerically, the two could be among the algorithms to use. But I am skeptical that even together they would explain or predict a significant fraction of what we do.
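
For instance, a toy version of that simulation is easy to sketch. Everything below (the sub-agent names, options, payoffs, and the 70% figure) is invented purely for illustration: an MIT-style committee of sub-agents generates the behavior, and an RPT-style observer then reads a single "revealed preference" off the choice history.

```python
import random

# MIT-style generative model: a person as a committee of sub-agents,
# each scoring the options by its own short-term payoff.
SUBAGENTS = {
    "long_term_health": {"work_out": 1.0, "watch_tv": 0.0},
    "comfort_now":      {"work_out": 0.0, "watch_tv": 1.0},
}

def mit_choice(rng: random.Random) -> str:
    # The short-term sub-agent wins the internal tug-of-war 70% of the
    # time (an invented number); whichever wins dictates the choice.
    dominant = "comfort_now" if rng.random() < 0.7 else "long_term_health"
    payoffs = SUBAGENTS[dominant]
    return max(payoffs, key=payoffs.get)

def rpt_inference(choices: list[str]) -> str:
    # RPT-style read-off: the "revealed preference" is the modal choice.
    return max(set(choices), key=choices.count)

rng = random.Random(0)
history = [mit_choice(rng) for _ in range(1000)]
print("revealed preference:", rpt_inference(history))  # -> watch_tv
```

Note what happens: RPT summarizes the committee's output as one coherent preference ("watch_tv") that no single sub-agent, and arguably no whole person, actually holds. Both algorithms run fine; whether their composition predicts much of real behavior is the open question.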

Replies from: jml
comment by jml · 2018-12-30T05:16:39.476Z · LW(p) · GW(p)
comment by Dr. Jamchie · 2018-12-30T19:01:12.745Z · LW(p) · GW(p)

There is a third alternative: being honest about your preferences while recognizing that you have no power to act on them.

E.g., I would prefer to win the lottery, but there is nothing reasonable I can do to achieve that, so I drop out of the lottery altogether. From the outside it might look as though I have revealed that I do not want to win the lottery, since I do not even buy a ticket. Caring about the environment might fall into this category as well.
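
The lottery point can even be made quantitative with a back-of-the-envelope expected-value check. A minimal sketch, using made-up but lottery-like numbers for the ticket price, jackpot, and odds:

```python
# Invented, lottery-like numbers: a $2 ticket, a $10,000,000 jackpot,
# and 1-in-300,000,000 odds of winning.
ticket_price = 2.00
jackpot = 10_000_000
p_win = 1 / 300_000_000

expected_value = p_win * jackpot - ticket_price
print(f"expected value per ticket: ${expected_value:.2f}")  # about $-1.97

# Preferring to win is perfectly consistent with never buying a ticket:
# each purchase is a losing bet in expectation.
```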

comment by Pattern · 2018-12-30T18:00:50.049Z · LW(p) · GW(p)

> If these recommendations are repeatedly denied, you now have evidence against the MIT view and can gradually switch to the RPT view.

So we only know that someone wanted to change if they succeed in changing?
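
One way to cash out "gradually switch" is as a Bayesian update, which also makes the problem with "know" explicit: each denial shifts the odds toward RPT but never proves anything. A minimal sketch, with invented priors and likelihoods:

```python
# Invented numbers: a 50/50 prior over the two views, and a guess at how
# often each view predicts that a recommendation gets denied.
p_mit = 0.5                 # prior: they want to change but incentives block it
p_deny_given_mit = 0.3      # a genuine would-be changer still denies sometimes
p_deny_given_rpt = 0.9      # a revealed non-changer denies most of the time

for i in range(1, 6):       # observe five denials in a row
    numerator = p_deny_given_mit * p_mit
    p_mit = numerator / (numerator + p_deny_given_rpt * (1 - p_mit))
    print(f"after denial {i}: P(MIT) = {p_mit:.3f}")

# P(MIT) falls toward zero but never reaches it: denial is evidence of
# not wanting to change, not proof.
```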

Replies from: jml
comment by jml · 2018-12-31T04:11:40.723Z · LW(p) · GW(p)