Comments sorted by top scores.
comment by Shmi (shminux) · 2018-12-30T03:50:22.292Z · LW(p) · GW(p)
I suspect the situation is vastly more complicated than that. Revealed preferences show the contradiction between stated preferences and actions. The misaligned-incentives model treats (a part of) a person as a separate agent with distinct short-term goals. But humans are not modeled well as a collection of agents. We are a messy result of billions of years of evolution, with some random mutations becoming meta-stable through sheer chance. All human behavior is a side effect of that. Certainly both RPT and MIT can be rough starting points, and if someone actually numerically simulated human behavior, the two could be among the algorithms to use. But I am skeptical that together they would explain/predict a significant fraction of what we do.
Replies from: jml
comment by Dr. Jamchie · 2018-12-30T19:01:12.745Z · LW(p) · GW(p)
There is a third alternative: being honest about your preferences, but realizing you have no power to do anything about them.
E.g., I prefer to win the lottery, but there is nothing reasonable I can do to achieve that, so I drop participating in the lottery altogether. From the outside it might look as though I have revealed that I do not want to win the lottery, since I do not even buy a ticket. Caring about the environment might fall into this category as well.