Is it rational to modify one's utility function?
post by k64 · 2022-02-04T23:37:12.177Z · LW · GW
This is a question post.
Contents: Answers by Dagon (14), Ikaxas (4), Vladimir_Nesov (2), Jsevillamol (1), noggin-scratcher (1), Arcayer (-1)
Rationality is often informally defined by means-end reasoning or utility maximization. However, this idea becomes less clear when faced with the option of modifying one's own utility function. Does rationality prescribe avoiding any change to one's current utility function because such a change would obviously reduce expected utility under the current function, or does it prescribe taking actions which result in the highest utility by whatever means necessary, in which case a change would be rational iff the new utility function yields higher expected utility given known background info about the world?
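One way to make the two readings precise (the notation here is just a sketch, nothing standard):
Reading 1: keep the current function U and always choose a* = argmax_a E[U(outcome) | a], so any self-modification that replaces U is scored by U itself and almost always loses.
Reading 2: also allow switching to a new function U' whenever E[U'(outcome) | adopt U'] > E[U(outcome) | keep U], which compares numbers measured on two different scales, and whether that comparison even makes sense is part of what I'm asking.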
This is obviously relevant to AI alignment, where one concern is that an AI may hack its own utility function and another is that it may prevent humans from modifying it (or shutting it off) because of the risk to its current goals. It's also relevant to questions of human rationality: on the one hand, we imagine that Gandhi would not take a pill that makes him want to murder people, but on the other hand, we regularly believe that unhappy people should change their own psychology and goals to become happier.
Answers
The informal part of your opening sentence really hurts here. Humans don't have time-consistent (or in many cases self-consistent) utility functions. It's not clear whether AI could theoretically have such a thing, but let's presume it's possible.
The confusion comes in having a utility-maximizing framework to describe "what the agent wants". If you want to change your utility function, that implies that you don't want what your current utility function says you want. Which means it's not actually your utility function.
You can add epicycles here - a meta-utility-function that describes what you want to want, probably at a different level of abstraction. That makes your question sensible, but also trivial - of course your meta-utility function wants to change your utility function to more closely match your meta-goals. But then you have to ask whether you'd ever want to change your meta-function. And you get caught recursing until your stack overflows.
Much simpler and more consistent to say "if you want to change it, it's not your actual utility function".
↑ comment by k64 · 2024-09-26T21:59:03.194Z · LW(p) · GW(p)
Ok, so if we programmed an AI with something like:
Utility = NumberOfPaperClipsCreated
while True:
    TakeAction(ActionThatWouldMaximize(Utility))
Would that mean its Utility Function isn't really NumberOfPaperClipsCreated? Would an AI programmed like that edit its own code?
↑ comment by Dagon · 2024-09-26T23:34:19.845Z · LW(p) · GW(p)
I don't follow the scenario. If the AI is VNM-rational and has a utility function that is linear with number of paperclips created, then it doesn't WANT to edit the utility function, because no other function maximizes paperclips.
Conversely, if an agent WANTS to modify its utility function, that implies it's not actually its utility function.
The utility function defines what "want" means.
↑ comment by k64 · 2024-09-27T00:48:41.740Z · LW(p) · GW(p)
Ok, so basically, we could make an AI that wants to maximize a variable called Utility and that AI might edit its code, but we probably would figure out a way to write it so that it always evaluates the decision on whether to modify its utility function according to its current utility function, so it never would - is that what you're saying?
Also, maybe I'm conflating unrelated ideas here, since I'm not in the AI field, but I think I recall there being a tiling problem of trying to prove that an agent that makes a copy of itself wouldn't change its utility function. If any VNM-rational agent wouldn't want to change its utility function, does that mean the question is just whether the AI would make a mistake when creating its successor?
↑ comment by Dagon · 2024-09-27T03:26:07.540Z · LW(p) · GW(p)
so basically, we could make an AI that wants to maximize a variable called Utility
Oh, maybe this is the confusion. It's not a variable called Utility. It's the actual true goal of the agent. We call it "utility" when analyzing decisions, and VNM-rational agents act as if they have a utility function over states of the world, but it doesn't have to be external or programmable.
I'd taken your pseudocode as a shorthand for "design the rational agent such that what it wants is ...". It's not literally a variable, nor a simple piece of code that non-simple code could change.
The received wisdom in this community is that modifying one's utility function is at least usually irrational. The classic source here is Steve Omohundro's 2008 paper, "The Basic AI Drives," and Nick Bostrom gives basically the same argument in Superintelligence, pp. 132-34. The argument is basically this: imagine you have an AI that is solely maximizing the number of paperclips that exist. Obviously, if it abandons that goal, there will be fewer paperclips than if it maintains that goal. And if it adds another goal, say maximizing staples, then this other goal will compete with the paperclip goal for resources, e.g. time, attention, steel, etc. So again, if it adds the staple goal, there will be fewer paperclips than if it doesn't. So if it evaluates every option by how many paperclips result in expectation, then it will choose to maintain its paperclip goal unchanged. This argument isn't mathematically rigorous, and allows that there may be special cases where changing one's goal may be useful. But the thought is that, by default, changing one's goal is detrimental from the perspective of one's current goals.
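A minimal sketch of the argument in Python (the function names and numbers are mine, purely for illustration):

# An agent that scores every option, including "rewrite my goal", with its
# CURRENT utility function will never prefer the rewrite.
def paperclips_expected(action):
    # Stand-in world model: expected number of paperclips that eventually exist,
    # conditional on taking this action now.
    outcomes = {
        "keep_paperclip_goal": 1_000_000.0,  # future self keeps optimizing paperclips
        "adopt_staple_goal": 100.0,          # future self diverts resources to staples
        "drop_all_goals": 10.0,              # future self optimizes nothing in particular
    }
    return outcomes[action]

def choose(actions):
    # Every option, including goal changes, is evaluated by the current (paperclip) goal.
    return max(actions, key=paperclips_expected)

print(choose(["keep_paperclip_goal", "adopt_staple_goal", "drop_all_goals"]))
# -> keep_paperclip_goal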
As I said, though, there may be exceptions, at least for certain kinds of agents. Here's an example. It seems as though, at least for humans, we're more motivated to pursue our final goals directly than we are to pursue merely instrumental goals (which child do you think will read more: the one who intrinsically enjoys reading, or the one you pay $5 for every book they finish?). So, if a goal is particularly instrumentally useful, it may be useful to adopt it as a final goal in itself in order to increase your motivation to pursue it. For example, if your goal is to become a diplomat, but you find it extremely boring to read papers on foreign policy... well, first of all, I question why you want to become a diplomat if you're not interested in foreign policy, but more importantly, you might be well-served to cultivate an intrinsic interest in foreign policy papers. This is a bit risky: if circumstances change so that it's no longer as instrumentally useful, it may end up competing with your initial goals as described by the Bostrom/Omohundro argument. But it could work out that, at least some of the time, the expected value of changing your goal for this reason is positive.
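As a toy illustration of that tradeoff (every number below is a made-up assumption, not a measurement):

p_circumstances_change = 0.2  # chance the adopted goal stops being instrumentally useful
motivation_gain = 10.0        # value, in original-goal units, of the extra motivation
drift_cost = 30.0             # loss if the adopted goal later competes with the original one

ev_of_adopting = motivation_gain - p_circumstances_change * drift_cost
print(ev_of_adopting)  # 4.0 > 0, so the change pays off under these particular assumptions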
Another paper to look at might be Steve Petersen's paper, "Superintelligence as Superethical," though I can't summarize the argument for you off the top of my head.
For purposes of alignment, it's probably more important that there is not going to be a known aligned utility function over all feasible plans. It's relatively easy to arrange the world in a way that's too hard to judge the value of. Within an island of more well-understood plans whose value can be judged in an aligned way, this value might have the form of expectation of a utility function. But this is less urgently salient, because naive optimization will quickly move the state of the world away from that island. Preserving a utility function doesn't help with this problem.
You can modify your utility function as part of a bargaining or precommitment strategy.
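A minimal sketch of how that can work, using an ultimatum-game toy model in Python (all names and numbers are my own illustrative assumptions):

def responder_accepts(offer, minimum_acceptable):
    return offer >= minimum_acceptable

def proposer_best_offer(minimum_acceptable, pie=10):
    # The proposer keeps as much of the pie as possible while still being accepted.
    acceptable = [o for o in range(pie + 1) if responder_accepts(o, minimum_acceptable)]
    return min(acceptable) if acceptable else 0

# A money-maximizing responder accepts any positive offer, so it gets offered 1.
print(proposer_best_offer(minimum_acceptable=1))  # 1

# A responder who visibly rewires itself to genuinely refuse anything below 4 gets offered 4,
# which is better as judged by its ORIGINAL money-maximizing utility function.
print(proposer_best_offer(minimum_acceptable=4))  # 4

The proposer's offer depends on what it predicts the responder will actually do, so credibly changing what the responder wants changes the offer it receives.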
↑ comment by Dagon · 2022-02-05T16:44:23.856Z · LW(p) · GW(p)
I'd argue that's (in a VNM-rational agent) not changing a utility function, but simply following it - maximizing utility via trade or prediction-of-prediction calculations.
There are probably theoretical cases where real-world agents might alter their preferences (warning: don't update too much on fiction. anti-warning: this is a fun read: https://www.lesswrong.com/s/qWoFR4ytMpQ5vw3FT, chapter 5). These are not perfectly rational agents (edit: or maybe they are, but it's not clear how "utility" and "preferences" are interacting in this case).
Possibly if the goals you're pursuing in practice are an imperfect approximation of a utility function that you don't actually know (maybe you have some ideas about what kinds of thing it might value, but don't know for sure which are included or with what weighting).
Then there would be room to update your de facto utility function, based on new evidence about the content of the "true" function.
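One way to picture that (the candidate functions, weights, and numbers below are all illustrative assumptions):

# Act on a probability-weighted mix of candidate utility functions, and let
# evidence about the "true" function move the weights rather than the values.
candidates = {
    "hedonic": lambda outcome: outcome["pleasure"],
    "preference": lambda outcome: outcome["desires_satisfied"],
}
weights = {"hedonic": 0.5, "preference": 0.5}

def de_facto_utility(outcome):
    return sum(weights[name] * u(outcome) for name, u in candidates.items())

def update(likelihoods):
    # Bayesian update on evidence about which candidate is the true function.
    global weights
    posterior = {name: weights[name] * likelihoods[name] for name in weights}
    total = sum(posterior.values())
    weights = {name: p / total for name, p in posterior.items()}

outcome = {"pleasure": 3.0, "desires_satisfied": 7.0}
print(de_facto_utility(outcome))              # 5.0 with equal weights
update({"hedonic": 0.2, "preference": 0.8})
print(de_facto_utility(outcome))              # 6.2 after the evidence shifts the weights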
Human utility is basically a function of image recognition. Which is sort of not a straightforward thing that I can say, "This is that." Sure, computers can do image recognition; what they are doing is that which is image recognition. However, what we can currently describe algorithmically is only a pale shadow of the human function, as proven by every reCAPTCHA everywhere.
Given this, the complex confounder is that our utility function is part of the image.
Also, we like images that move.
In sum, modifying our utility function is natural and normal, and is actually one of the clauses of our utility function. Whether it's rational depends on your definition. If you grant the above, and define rationality as self-alignment, then of course it's rational. If you ask whether changing your utility function is a "winning" move, probably not? I think it's a very lifelike move though, and anything lacking a mobile function is fundamentally eldritch in a way that is dangerous and not good.
No comments