Two agents can have the same source code and optimise different utility functions
post by Joar Skalse (Logical_Lunatic) · 2018-07-10T21:51:53.939Z · LW · GW · 11 comments
I believe that it is possible for two agents to have the exact same source code while (in some sense) optimising two different utility functions.
I take a utility function to be a function from possible world states to real numbers. When I say that an agent is optimising a utility function I mean, roughly, that the agent is "pushing" its environment towards states with higher values according to that utility function. This concept is not entirely unambiguous, but I don't think it's necessary to try to make it more explicit here. By source code I mean the same thing as everyone else means by source code.
Now, consider an agent which has a goal like "gain resources" (in some intuitive sense). Say that two copies of this agent are placed in a shared environment. Each copy will push the environment towards states in which it, rather than the other copy, controls the resources. The two copies therefore push the environment towards different states, and are (under the definition I gave above) optimising different utility functions.
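To make this concrete, here is a minimal sketch in Python (the environment, the greedy grabbing policy, and all of the names are just illustrative assumptions, not anything I am claiming about real agents):

```python
# Illustrative sketch: two copies of the same agent code, each trying to
# "gain resources", steer a shared environment towards different states.

class ResourceAgent:
    def __init__(self, name):
        self.name = name  # the only difference between copies is which body they inhabit

    def utility(self, world):
        # "Gain resources" -- but resources held by *me*, whoever I am.
        return world[self.name]

    def act(self, world):
        # Greedy policy: grab one unit of unclaimed resource if any is left.
        if world["unclaimed"] > 0:
            world["unclaimed"] -= 1
            world[self.name] += 1


world = {"unclaimed": 10, "A": 0, "B": 0}
agent_a, agent_b = ResourceAgent("A"), ResourceAgent("B")

for _ in range(5):
    agent_a.act(world)
    agent_b.act(world)

print(world)  # {'unclaimed': 0, 'A': 5, 'B': 5}
# Agent A ranks states by world["A"], agent B by world["B"]:
# identical source code, different (induced) utility functions.
```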
11 comments
comment by Robert Miles (robert-miles) · 2018-07-10T22:35:33.602Z · LW(p) · GW(p)
Makes sense. It seems to flow from the fact that the source code is in some sense allowed to use concepts like 'Me' or 'I', which refer to the agent itself. So both agents have source code which says "Maximise the resources that I have control over", but in Agent 1 this translates to the utility function "Maximise the resources that Agent 1 has control over", and in Agent 2 this translates to the different utility function "Maximise the resources that Agent 2 has control over".
So this source code thing that we're tempted to call a 'utility function' isn't actually valid as a mapping from world states to real numbers until the agent is specified, because these 'Me'/'I' terms are undefined.
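A minimal sketch of this point (the dictionary world model and all names are illustrative assumptions): the utility function written in the source code is indexical, and only becomes a genuine map from world states to real numbers once 'I' is bound to a particular agent.

```python
from functools import partial

def indexical_utility(me, world):
    """'Maximise the resources that I control' -- not yet a map from worlds to reals."""
    return world["resources"][me]

# Binding 'me' to a specific agent yields two different utility functions
# from the same piece of source code.
utility_of_agent_1 = partial(indexical_utility, "agent_1")
utility_of_agent_2 = partial(indexical_utility, "agent_2")

world = {"resources": {"agent_1": 3, "agent_2": 7}}
print(utility_of_agent_1(world))  # 3
print(utility_of_agent_2(world))  # 7 -- same code, different functions once 'I' is resolved
```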
comment by Vaniver · 2018-07-10T22:04:11.487Z · LW(p) · GW(p)
The core feature here seems to be that the agent has some ability to refer to itself, and that this localization differs between instantiations. Alice optimizes for dollars in her wallet, Bob optimizes for dollars in his wallet, and so they end up fighting over dollars despite being clones, because the cloning procedure doesn't result in arrows pointing at the same wallet.
It seems sensible to me to refer to this as the 'exact same source code,' but it's not obvious to me how you would create these sorts of conflicts without that sort of differing resolution of pointers, and so it's not clear how far this argument can be extended.
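A small sketch of the pointer-resolution point (the Wallet and Agent classes are illustrative assumptions): cloning copies the code and the shape of the goal, but the clone's wallet pointer ends up aimed at a new wallet, not the original one.

```python
import copy

class Wallet:
    def __init__(self, dollars=0):
        self.dollars = dollars

class Agent:
    def __init__(self):
        self.wallet = Wallet()

    def utility(self, _world=None):
        # "Dollars in *my* wallet"
        return self.wallet.dollars

alice = Agent()
bob = copy.deepcopy(alice)   # exact same source code, same initial state

print(alice.wallet is bob.wallet)   # False: the arrows point at different wallets
alice.wallet.dollars += 10
print(alice.utility(), bob.utility())  # 10 0 -- so they can end up fighting over dollars
```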
↑ comment by Chris van Merwijk (chrisvm) · 2018-07-11T00:22:04.825Z · LW(p) · GW(p)
You don't necessarily need "explicit self-reference". The difference in utility functions can also arise from a difference in the agent's location in the universe. Two identical worms placed in different locations will have different utility functions, because their atoms are not in exactly the same location, even though the worms have no explicit self-reference. Similarly, in a computer simulation, agents with the same source code will be called by the universe-program in different contexts (if they weren't, I don't see how it would even make sense to speak of them as "different instances of the same source code"; there would just be one instance of the source code).
So in fact, I think that this is probably a property of almost all possible agents. It seems to me that you would need a very complex and specific ontological model in the agent to prevent these effects and make the two agents share a utility function.
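A minimal sketch of the "no explicit self-reference needed" point (the two-location world and the agent_step function are illustrative assumptions): the same source code, called by the universe-program in different contexts, ends up pushing on different parts of the world.

```python
def agent_step(local_patch):
    """Identical code for every instance: no 'I', just 'eat whatever food is here'."""
    local_patch["food"] = 0

universe = {
    "loc_1": {"food": 4},
    "loc_2": {"food": 9},
}

# The universe-program instantiates the same code at two different locations;
# that differing calling context is the only thing that individuates the copies.
agent_step(universe["loc_1"])   # the instance at location 1 acts
print(universe)  # {'loc_1': {'food': 0}, 'loc_2': {'food': 9}}

agent_step(universe["loc_2"])   # the instance at location 2 acts
print(universe)  # {'loc_1': {'food': 0}, 'loc_2': {'food': 0}}
# Each instance steers its own neighbourhood, so the two instances are in
# effect optimising different utility functions -- with no self-referential
# term anywhere in the code.
```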
comment by Linda Linsefors · 2018-07-11T00:50:02.902Z · LW(p) · GW(p)
I agree.
An even simpler example: If the agents are reward learners, each of them will optimize for its own reward signal, and those are two different things in the physical world.
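A minimal sketch, assuming a two-lever bandit setup that isn't in the comment: two copies of the same reward-learning code, each wired to its own physical reward register, learn to press whichever lever feeds that register.

```python
import random

def train_reward_learner(my_reward_channel, lever_payouts, steps=1000):
    """Identical learning code for every copy; only the reward wiring differs."""
    value = {lever: 0.0 for lever in lever_payouts}
    for _ in range(steps):
        if random.random() < 0.1:                        # explore occasionally
            lever = random.choice(list(lever_payouts))
        else:                                            # otherwise exploit
            lever = max(value, key=value.get)
        reward = lever_payouts[lever][my_reward_channel]  # reads *this copy's* register
        value[lever] += 0.1 * (reward - value[lever])
    return max(value, key=value.get)

# Lever "left" feeds copy 1's reward register, lever "right" feeds copy 2's.
lever_payouts = {"left":  {"copy_1": 1.0, "copy_2": 0.0},
                 "right": {"copy_1": 0.0, "copy_2": 1.0}}

print(train_reward_learner("copy_1", lever_payouts))  # 'left'
print(train_reward_learner("copy_2", lever_payouts))  # 'right'
```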
comment by cousin_it · 2018-07-11T00:02:14.372Z · LW(p) · GW(p)
Yeah. But I think that can only happen if the agents aren't very smart :-) A very smart agent would've self-modified at startup before seeing any observations, and adopted a utility function that's a weighted sum of selfish utilities of all copies, with the weights given by its prior of being this or that copy.
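A minimal sketch of that merged utility function (all names here are illustrative assumptions):

```python
def selfish_utility(copy_id, world):
    # The original, indexical goal of each copy.
    return world["resources"][copy_id]

def merged_utility(world, prior_over_copies):
    # U(world) = sum over copies i of P(I am copy i) * U_i(world)
    return sum(p * selfish_utility(copy_id, world)
               for copy_id, p in prior_over_copies.items())

prior = {"copy_1": 0.5, "copy_2": 0.5}
world = {"resources": {"copy_1": 2, "copy_2": 8}}
print(merged_utility(world, prior))  # 5.0 -- both copies now rank worlds identically
```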
↑ comment by Joar Skalse (Logical_Lunatic) · 2018-07-11T00:55:01.730Z · LW(p) · GW(p)
Yes, agents with different preferences are incentivised to cooperate provided that the cost of enforcing cooperation is less than the cost of conflict. Agreeing to adopt a shared utility function via acausal trade might be a very cheap way to enforce cooperation, and some agents might do this just based on their prior. However, this is true for any agents with different preferences, not just agents of the type I described. You could use the same argument to say that you are in general unlikely to find two very intelligent agents with different utility functions.
↑ comment by cousin_it · 2018-07-11T10:24:22.689Z · LW(p) · GW(p)
Agents with identical source code will reason identically before seeing any observations, so the "acausal trade" in this case barely feels like trade at all, just making your preferences updateless over possible future observations. That's much simpler than acausal trade between agents with different source code, which we can't even formalize yet.
↑ comment by Chris van Merwijk (chrisvm) · 2018-07-11T00:44:06.954Z · LW(p) · GW(p)
Here are some counterarguments:
There can be scenarios where the agent cannot change his source code without processing observations; e.g. the agent may need to reprogram himself via some external device.
The agent may not be aware that there are multiple copies of him.
It seems that for many plausible agent designs, changing the agent's utility function would require a significant change in its architecture. E.g. if two human sociopaths wanted to change their utility functions into a weighted average of the two, they couldn't do so without significantly changing their brain architecture. A TDT agent could do this, but I think it is not prudent to assume that all the AGIs we will actually deal with in the future will be TDT agents (in fact, it seems to me that most of them most likely won't be).
So I don't think your comment invalidates the relevance of the point made by the poster.
comment by Rohin Shah (rohinmshah) · 2018-07-11T08:37:36.441Z · LW(p) · GW(p)
Hmm, I feel like I'm missing some AISFP context here. Why should I care about this result?