How would two superintelligent AIs interact, if they are unaligned with each other?
post by Nathan1123 · 2022-08-09T18:58:16.198Z · LW · GW
This is a question post.
Contents
Answers by NickGabs (4), Slider (4), Vladimir_Nesov (3), Donald Hobson (3) · No comments
Hello,
As I read more from this forum and other places about ethical AI and Decision Theory, I have started to imagine what future scenarios could arise if multiple AGIs were created simultaneously, by actors who are not coordinating with each other. We've seen similar issues come up in the history of technology, particularly computing, where multiple mutually incompatible standards are created around the same time.
So imagine we have two AGIs that are intelligent enough to communicate with humans and with each other, but are built on very different utility functions, as well as different approaches to Decision Theory. From the perspective of each creator, their AGI is perfectly aligned with human morality, but because of different assumptions, philosophies, or religions, the two actors defined their outer alignment in two different ways.
The reason this seems like a bad situation is that (from what I understand) FDT works on the assumption that other actors use a utility function similar to its own. Thus, each agent would start with the assumption that the other uses the same utility function, which is false. This bad assumption would lead to miscommunication and conflict, as each agent comes to believe the other is acting immorally or is defective.
This seems oddly similar to the way human conflicts arise in the real world (through miscommunication), so an AGI capable of having that problem would incidentally be more human-like.
What do you think would be the result? Has this thought experiment been entertained before?
Answers
Check out CLR's research: https://longtermrisk.org/research-agenda. They are focused on answering questions like these because they believe that competition between AIs is a big source of s-risk.
↑ comment by Nathan1123 · 2022-08-09T23:37:26.337Z · LW(p) · GW(p)
Thanks, I'll be sure to check them out
FDT works on an assumption that other actors use a similar utility function as itself
FDT is not about interaction with other actors; it's about accounting for the influence of the agent through all of its instances (including predictions of it) across all possible worlds.
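To make that concrete, here is a minimal sketch (my own illustration, not part of the comment) of the classic Newcomb's problem, where the agent's decision procedure has two instances: one run by the predictor and one run by the agent itself. The payoff numbers and function names are assumptions chosen for the toy model.

```python
# Toy Newcomb's problem: the same decision procedure runs in two places
# (inside the predictor and inside the agent), and a policy is evaluated
# by its consequences through *both* instances.

def policy_one_box():
    return "one-box"

def policy_two_box():
    return "two-box"

def outcome(policy):
    """World model: the predictor runs the same policy the agent will run,
    and fills the opaque box only if that policy one-boxes."""
    prediction = policy()                      # instance 1: inside the predictor
    opaque_box = 1_000_000 if prediction == "one-box" else 0
    choice = policy()                          # instance 2: the agent's own act
    return opaque_box + (1_000 if choice == "two-box" else 0)

# Evaluate each candidate policy across all of its instances:
best = max([policy_one_box, policy_two_box], key=outcome)
print(best.__name__, outcome(best))            # -> policy_one_box 1000000
```

Because the policy, not the isolated act, is what both instances share, the one-boxing policy comes out ahead even though two-boxing dominates once the prediction is fixed.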
Coordination with other agents is itself an action that a decision theory could consider. This action involves the creation of a new coordinating agent that decides on a coordinating policy which all members of a coalition carry out, and this coordinating agent also needs a decision theory. The coordinating agent acts through all agents of the coalition, so it's sensible for it to be some flavor of FDT, though a custom decision theory specifically for such situations seems appropriate, especially since it's doing bargaining.
The decision theory that chooses whether to coordinate by running a coordinating agent or not has no direct reason to be FDT; it could just be trivial [LW(p) · GW(p)]. And preparing the coordinating agent is not obviously a question of decision theory; it even seems to fit deontology [LW(p) · GW(p)] a bit better.
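As a rough sketch of the coordinating-agent setup (my own toy model, not from the comment): a coalition delegates to a shared program that picks one joint policy for every member, and each member's own decision rule is just to carry that policy out. The policy names, utility numbers, and the use of a Nash-bargaining rule are all illustrative assumptions.

```python
# Toy coordinating agent: picks one joint policy for the whole coalition.
# The members' decision rule is the trivial "do whatever was chosen".

JOINT_POLICIES = ["build_shared_infra", "split_resources", "fallback"]

# Each member reports its utility for every joint policy, plus the payoff
# it expects if bargaining falls apart (its BATNA).
utilities = {
    "agent_A": {"build_shared_infra": 10, "split_resources": 6, "fallback": 2},
    "agent_B": {"build_shared_infra": 4,  "split_resources": 7, "fallback": 3},
}
batna = {"agent_A": 2, "agent_B": 3}

def coordinating_agent(utilities, batna):
    """Choose one policy for the coalition: here, maximize the product of
    each member's gain over its fallback payoff (Nash bargaining),
    never choosing anything a member would rather walk away from."""
    best_policy, best_score = None, -1.0
    for policy in JOINT_POLICIES:
        gains = [utilities[a][policy] - batna[a] for a in utilities]
        if any(g < 0 for g in gains):          # someone prefers to walk away
            continue
        score = 1.0
        for g in gains:
            score *= g
        if score > best_score:
            best_policy, best_score = policy, score
    return best_policy or "fallback"

def coalition_member(chosen_policy):
    """The member-side rule is trivial: carry out the chosen policy."""
    return chosen_policy

policy = coordinating_agent(utilities, batna)
print(policy, [coalition_member(policy) for _ in utilities])  # -> split_resources
```

The point is only that the interesting decision theory lives inside the coordinating agent; the members' own rule can be as simple as obedience to it.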
I think this is built out of several deeply misunderstood ideas.
If we get two AIs that are both somehow magically aligned (highly unlikely), we are in a pretty good situation. A serious fight between the AIs would satisfy neither party. So either one AI quietly hacks the other, turning it off with minimal destruction, or the AIs cooperate, as they have pretty similar utility functions and can find a future they both like.
Nowhere does FDT assume other actors have the same utility function as it. Why do you think it assumes that? It doesn't assume the other agent is FDT. It doesn't make any silly assumptions like that. If both agents are FDT, and have common knowledge of each other's source code, they will cooperate, even if their goals are wildly different (a sketch of this is below).
With a high-bandwidth internet link and logically precise statements, we won't get serious miscommunication.
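Here is a minimal sketch of that source-code cooperation claim (my own illustration under strong simplifying assumptions): two agents with very different payoffs in a one-shot prisoner's-dilemma-like game each read the other's decision procedure and cooperate exactly when the other runs the same procedure. The class names and payoff numbers are made up, and the shared "cooperate iff you run my policy" rule stands in for the full FDT/program-equilibrium story.

```python
import inspect

def shared_policy(my_source, other_source):
    """Cooperate exactly when the other agent runs this same procedure."""
    return "C" if other_source == my_source else "D"

class Agent:
    def __init__(self, name, payoffs):
        self.name = name
        self.payoffs = payoffs     # goals differ wildly; the shared procedure
                                   # doesn't need to consult them here
        self.source = inspect.getsource(shared_policy)

    def decide(self, other):
        return shared_policy(self.source, other.source)

# Very different utility functions, same publicly readable decision procedure.
alice = Agent("alice", {("C", "C"): 3,  ("C", "D"): 0,   ("D", "C"): 5,  ("D", "D"): 1})
bob   = Agent("bob",   {("C", "C"): 40, ("C", "D"): -10, ("D", "C"): 90, ("D", "D"): 5})

print(alice.decide(bob), bob.decide(alice))   # -> C C, despite different payoffs
```

Mutual cooperation here comes from each agent verifying the other's procedure, not from any similarity in their utility functions.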
↑ comment by Vladimir_Nesov · 2022-08-09T21:52:00.087Z · LW(p) · GW(p)
If both agents are FDT, and have common knowledge of each other's source code
Any common knowledge they can draw up can go into a coordinating agent (adjudicator); all it needs is to be shared among the coalition, and it doesn't need to contain any particular data. The problem is verifying that all members of the coalition will follow the policy chosen by the coordinating agent, and common knowledge of source code is useful for that. But it could just be the source code of the trivial rule of always following the policy given by the coordinating agent.
One possible policy chosen by the adjudicator should be falling back to an unshared/private BATNA: aborting the bargain and, of course, doing other things not in the scope of this particular bargain. These things are not parts of the obey-the-adjudicator algorithm, but consequences of following it. So common knowledge of everything is not needed, only common knowledge of the adjudicator and its authority over the coalition. (This is also a possible way of looking at UDT, where a single agent in many possible states, acting through many possible worlds, coordinates among its variants.)
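A minimal sketch of that last point (my own illustration, with made-up names): the member's entire publicly verifiable algorithm is "run whatever policy the adjudicator outputs", and falling back to a private BATNA is just one policy the adjudicator can hand out, not a clause baked into the member's code.

```python
def member(adjudicator, private_state):
    """The whole publicly verifiable member algorithm: obey the adjudicator,
    applying its chosen policy to this member's own private state."""
    policy = adjudicator()
    return policy(private_state)

def adjudicator():
    # Stand-in for the shared bargaining computation. If it finds no
    # acceptable deal, the policy it hands out is "abort and fall back",
    # which each member then executes on its own private information.
    deal_found = False
    if deal_found:
        return lambda state: "carry_out_the_deal"
    return lambda state: state["batna"]        # unshared fallback, as a chosen policy

print(member(adjudicator, {"batna": "pursue_private_plan"}))  # -> pursue_private_plan
```

Only the member's obey-the-adjudicator source and the adjudicator itself need to be common knowledge; the private fallback never has to be shared.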
No comments