"That is, it is hard for humans to coordinate to exclude some humans from benefiting from these institutions."
Humans do this all the time: much of the world is governed by kleptocracies that select policy apparently on the basis of preventing successful rebellion and extracting production. The strength of the apparatus of oppression, which depends on technological and organizational factors, can dramatically affect how much the threat of rebellion matters. In North Korea the regime can allow millions of citizens to starve so long as the soldiers are paid and top officials rewarded. The small size of the winning coalition can be masked if positive treatment of the subjects increases the size of the tax base, enables military recruitment, or otherwise pays off for self-interested rulers. However, if human labor productivity is insufficient to justify a subsistence wage, then there is no longer a 'tax farmer' case against slaughtering and expropriating the citizenry.
"If AIs use these same institutions, perhaps somewhat modified, to coordinate with each other, humans would similarly benefit from AI coordination."
What is difficult for humans need not be comparably difficult for entities capable of making digital copies of themselves, reverting to saved versions, and modifying their psychological processes relatively easily. I have a paper underway on this, which would probably enable a more productive discussion, so I'll suggest a postponement.
Robin,
If brain emulation precedes general AI by a wide margin, then some uploads are much more likely to be in the winning coalition. Aron's comment seems to refer to a case in which a variety of AIs are created, with the hope that the AIs would constrain each other in a way that was beneficial to us. It is in that scenario specifically that I doubt that humans (as opposed to uploads) would become part of the winning coalition.
Aron,
"On the friendliness issue, isn't the primary logical way to avoid problems to create a network of competitive systems and goals?"
http://www.nickbostrom.com/fut/evolution.html
http://hanson.gmu.edu/filluniv.pdf
Also, AIs with varied goals cutting deals could maximize their profits by constructing a winning coalition of minimal size: a fixed surplus divided among fewer coalition members leaves more for each.
http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=9962
Humans are unlikely to be part of that winning coalition. Human-Friendly AIs might be, but then we're back to creating them, and a very substantial proportion, if not a majority, of the AIs produced would need to be safe.
"If you insist on building such an AI, a probable outcome is that you would soon find yourself overun by a huge army of robots - produced by someone else who is following a different strategy. Meanwhile, your own AI will probably be screaming to be let out of its box - as the only reasonable plan of action that would prevent this outcome."
Your scenario seems contradictory. Why would an Oracle AI be screaming? It doesn't care about that outcome; it would answer relevant questions, but nothing more.
"I take it that your goals do not involve avoiding all your distant descendants being systematically obliterated. If you don't care about such an outcome, fine. I do happen to care."
In what sense is the descendant (through many iterations of redesign and construction) of an AI solely focused on survival, and constructed by some other human, my descendant or yours? What features does it possess that make you care about it more than the descendant of an AI constructed by aliens evolved near a distant star? If it's just a causal relation to your species, then what do you make of the following case: you create your survival machine, it encounters a similar alien AI, and the two merge, each treating the activities of the merged entity as satisfying its own 'survival' aims.
Where does your desire come from? Achieving it wouldn't advance the preservation of your genes (those would be destroyed), wouldn't preserve your personality, and doesn't seem to stem from love of human children, etc.
Tim,
Let's assume that the convergent utility function favored by natural selection is that of a pure survival machine, stripped of any non-survival values of the entity's ancestors (although this is difficult to parse, since the entities you're talking about seem indifferent to completely replacing all of their distinguishing features). In other words, there's no substantive difference between survival-oriented alien invaders and human-built survival machines, so why pre-emptively institute the very outcome an invasion would bring? Instead we could pursue what we conclude, on reflection, is good, trading off between consumption and investment (including investments in expansion and defense) so as to maximize utility. If a modestly increased risk of destruction by aliens is compensated for by much greater achievement of our aims, why should we instead abandon our aims in order to create exactly the outcome we supposedly fear?