comment by Tem42 · 2015-08-13T16:11:54.082Z
I see that this conversation is in danger of splitting into different directions. Rather than make multiple different reply posts or one confusing essay, I am going to drop the discussion of AI, because that is discussed in a lot of detail elsewhere by people who know a lot more than I do.
We are using two different models here, and while I suspect that they are compatible, I'm going to outline mine so that you can tell me if I'm missing the point.
I don't use the term meta-preferences, because I think of all wants, preferences, rules, and general preferences as having a scope. So I would say that my preference for a carrot has a scope of about ten minutes, appearing intermittently. This falls under the scope of my desire to eat, which appears more regularly and for greater periods of time. This in turn falls under the scope of my desire to have my basic needs met, which is generally present at all times, although I don't always think about it. I'm assuming that you would consider the latter two to be meta-preferences.
> I don’t know how to justify resisting an intervention that would change my preferences
I would assume that each preference has a value to it. A preference to eat carrots has very little value, being a minor aesthetic judgement. A preference to meet your basic needs would probably have a much higher value to it, and would probably go beyond the aesthetic.
If it were easy for me to modify my preferences away from cheeseburgers, I could find a clear reason (or ten) to do so. I justify it by appealing to my higher-level preferences (I would like to be healthier). My preference to be healthier has more value than a preference to enjoy a single meal -- or even 100 meals.
But if it were easy to modify my preferences away from carrots, I would have to think twice. I would want a reason. I don't think I could find a reason.
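The scope/value model above can be sketched in code: each preference has a value, broader-scope preferences sit above narrower ones, and a change is justified only when a higher-value preference conflicts with the one being modified. This is only my own illustration of the model; the names and numeric values are invented for the example, not taken from the comment.

```python
# A minimal sketch of the scope/value model: each preference carries a
# value, and a modification is justified only when some higher-value
# preference conflicts with the one being changed. Names and numbers
# are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Preference:
    name: str
    value: float                           # how much weight it carries
    parent: Optional["Preference"] = None  # the broader scope it falls under

def reason_to_change(pref: Preference,
                     conflicting: Optional[Preference]) -> bool:
    """There is a reason to modify `pref` only if some higher-value
    preference conflicts with it."""
    return conflicting is not None and conflicting.value > pref.value

basic_needs = Preference("meet basic needs", 100)
eat = Preference("desire to eat", 50, parent=basic_needs)
carrots = Preference("eat carrots", 1, parent=eat)
cheeseburgers = Preference("enjoy cheeseburgers", 1, parent=eat)
health = Preference("be healthier", 100, parent=basic_needs)

reason_to_change(cheeseburgers, health)  # True: health outweighs one meal
reason_to_change(carrots, None)          # False: nothing conflicts with carrots
```

On this sketch, the cheeseburger preference is cheap to abandon because a much higher-value preference opposes it, while the carrot preference has no conflicting higher preference, which matches the "I would want a reason, and I don't think I could find one" intuition.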
> Let’s say they’re doing it without reason, or for a reason I don’t care about, but they credibly tell me that they won’t change anything else for the rest of my life.
I would set up an example like this: I like carrots. I don't like bell peppers. I have an opportunity to painlessly reverse these preferences. I don't see any reason to prefer or avoid this modification. It makes sense for me to be agnostic on this issue.
I would set up a more fun example like this: I like Alex. I do not like Chris. I have an opportunity to painlessly reverse these preferences.
I would hope that I have reasons for liking Alex, and not liking Chris... but if I don't have good reasons, and if there will not be any great social awkwardness about the change, then yes, perhaps Alex and Chris are fungible. If they are fungible, this may be a sign that I should be more directed in who I form attachments with.
> The part I think is a problem for me is that I don’t know how to justify resisting an intervention that would change my preferences, if the intervention also changes the meta-preferences that apply to those preferences.
In the Alex/Chris example, it would be interesting to see whether you ever reached a preference that you did mind changing. For example, you might be willing to trade a preference for tall friends over short friends, but you might not be willing to trade a preference for friends who help orphans for a preference for friends who kick orphans.
If you do find a preference that you aren't willing to change, it is worth asking what it is based on -- a moral system (if so, how formalized and consistent is it?), an aesthetic preference (if so, are you overvaluing it? Undervaluing it?), or social pressures and norms (if so, do you want those norms to have that influence over you?).
It is arguable, but not productive, to say that ultimately no one can justify anything. I can bootstrap up a few guidelines that I base lesser preferences on -- try not to hurt unnecessarily (ethical), avoid bits of dead things (aesthetic), and don't walk around town naked (social). I would not want to switch out these preferences without a very strong reason.