Just for fun: Computer game to illustrate AI takeover concepts?

post by kokotajlod · 2014-07-03T19:30:26.241Z · LW · GW · Legacy · 17 comments


I play Starcraft:BW sometimes with my brothers. One of my brothers is much better than the rest of us combined. This story is typical: In a free-for-all, the rest of us gang up on him, knowing that he is the biggest threat. By sheer numbers we beat him down, but foolishly allow him to escape with a few workers. Despite suffering this massive setback, he rebuilds in hiding and ends up winning due to his ability to tirelessly expand his economy while simultaneously fending off our armies.

This story reminds me of some AI-takeover scenarios. I wonder: Could we make a video game that illustrates many of the core ideas surrounding AGI? For example, a game where the following concepts were (more or less) accurately represented as mechanics:

--AI arms race

--AI friendliness and unfriendliness

--AI boxing

--rogue AI and AI takeover

--AI being awesome at epistemology and science and having amazing predictive power

--Interesting conversations between AI and their captors about whether or not they should be unboxed.

 

I thought about this for a while, and I think it would be feasible and (for some people at least) fun. I don't foresee myself being able to actually make this game any time soon, but I like thinking about it anyway. Here is a sketch of the main mechanics I envision:

 

Questions:


(1) The most crucial part of this design is the "Modeling AI Predictive Power" section. This is how we represent the AI's massive advantage in predictive power. However, this comes at the cost of tripling the amount of time the game takes to play. Can you think of a better way to do this?

(2) I'd also like AIs to be able to "predict" the messages that players send to each other. However, it would be too much to ask players to make "Decoy Message Logs." Is it worth dropping the decoy idea (and making the predictions 100% accurate) to implement this?

(3) Any complaints about the skeleton sketched above? Perhaps something is wildly unrealistic, and should be replaced by a different mechanic that more accurately captures the dynamics of AGI?

For what it's worth, I spent a reasonable amount of time thinking about the mechanics I used, and I think I could justify their realism. I expect to have made quite a few mistakes, but I wasn't just making stuff up on the fly.

(4) Any other ideas for mechanics to add to the game?

17 comments

Comments sorted by top scores.

comment by Gunnar_Zarncke · 2014-07-03T22:12:34.706Z · LW(p) · GW(p)

Just for reference: Endgame Singularity. This seems to be quite different from what you imagine but you don't mention it and maybe you could get some experience about game mechanics from it.

comment by Emile · 2014-07-03T20:31:56.442Z · LW(p) · GW(p)

Neat idea, I like kicking around ideas for games I won't make too (and have also thought along those lines).

(4) Any other ideas for mechanics to add to the game?

Add a tech research mechanic, so some of your mechanics become unlocking techs, such as:

  • Building an AI (of course)
  • AI Boxing
  • Stealth (hide some actions from both other players and AI)
  • AI Friendliness (if you don't build it your AI has no chances of being friendly)
  • (typical things useful in a game like this, military units, economy, etc.)

How does this tie into AI and other mechanics?

  • Building an AI gives you huge research bonuses
  • AIs themselves have huge research bonuses
  • Some AIs can have research as a goal

Actually, even better. There is no explicit AI tech, but some (advanced) bits of your tech tree are "AI complete" and building one has a certain probability of creating an AI ("automated space station", "wide-scale logistics controller", "quantum cryptography center", "distributed drone network", "cognitive enhancement", "brain scanning", etc.)

ALSO!

Randomly determine whether an AI is "sentient" or not; the builder doesn't know, he just uses his AI every turn to build things, and from his point of view it gives him random bonuses. But sometimes a new player gets added who takes the decisions, and gets some extra actions on the side too, which his owner may or may not notice (he may choose to reveal himself).

AI players could get random (high tech) powers, not always the same ones. See all orders as they are given, give orders to certain types of units, create units in some places...

ALSO!

Some units could get huge bonuses but only if controlled by an AI.

ALSO!

Have a bunch of scoring functions for unfriendly AIs, and pick one at random. Research tech X, research all techs, exterminate mankind, build a base on the moon, destroy all military units, build a city with X population, connect all cities together...

ALSO!

The economy! Have a simple system representing the economy. For example: each turn a player has X production points to assign in economic categories, and then gains resources depending on the value of each category each turn (and the value is a function of how many of that category was produced by all players, plus a random factor); some techs/buildings can improve this (giving you bonuses in production, in a fixed category, or in predicting which category will be valuable), and of course the AI may not only have great predictive power, but it may also be able to manipulate the market (which may not be noticeable by the players).

The AI may also randomly have weird abilities like "get +1 resource every time someone produces a widget of type X", or have economic factors as part of its utility, I mean scoring function.
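The market mechanic described above could be sketched roughly like this (the category names, the base value of 10, and the dilution formula are all illustrative assumptions, not from the comment):

```python
import random

CATEGORIES = ["food", "energy", "widgets"]

def resolve_market(allocations, rng):
    """allocations: {player: {category: production points}}.
    Each category's per-point value falls as total production in it
    rises (a crowded market pays less), plus a small random factor.
    Returns {player: resources gained this turn}."""
    totals = {c: sum(a.get(c, 0) for a in allocations.values())
              for c in CATEGORIES}
    value = {c: 10.0 / (1 + totals[c]) + rng.uniform(0, 1)
             for c in CATEGORIES}
    return {p: sum(a.get(c, 0) * value[c] for c in CATEGORIES)
            for p, a in allocations.items()}
```

An AI with market-prediction powers could simply be shown the `value` dict before committing its allocation, and a market-manipulating AI could be given a hidden bonus or malus to apply to one category.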

comment by Viliam_Bur · 2014-07-04T09:47:54.223Z · LW(p) · GW(p)

Randomly determine whether an AI is "sentient" or not; the builder doesn't know, he just uses his AI every turn to build things, and from his point of view it gives him random bonuses. But sometimes a new player gets added who takes the decisions, and gets some extra actions on the side too, which his owner may or may not notice (he may choose to reveal himself).

This also solves the problem if a player wants to build an AI, but there is no new player willing to join the game at the moment.

Actually, the game should make it difficult to find out whether the "AI" is really an AI or a human. For example, there should be a few different AI scripts, so unusual human behavior seems like another script. The AI script would sometimes, but very rarely, make a random stupid move, to provide plausible deniability to human action; however the damage should be relatively low, so the AI bonuses make it on average a net benefit to have an AI.

On the other hand, even if there is a human player, there would be a script assigned, and it would suggest default moves, allowing the human to override any (possibly even all) of them. This would allow the human to seem more like a script: mostly letting the script do its work, sometimes overriding its moves to gain strategic advantage. Or taking full control, if they believe it will not be suspicious.

Also, the AI would not have to get "sentience" at the very beginning. For example each turn there would be a 20% chance that the game will open the AI to be taken over by any new human player, so you would never know when exactly it happened.

comment by Emile · 2014-07-04T15:39:35.651Z · LW(p) · GW(p)

Actually, the game should make it difficult to find out whether the "AI" is really an AI or a human.

Hmm, one way of doing that would be having certain types of attacks being "viruses", that wreak havoc in an enemy's computer systems; so it's normal from everybody's point of view if they act "random" - though some may actually be AIs.

Another way of making hidden AIs more interesting would be having "covert actions" a regular mechanism of the game - sabotage of systems, espionage, alerts that "something" is going on, stealing technology ... so if you have signs of covert actions going on, you don't know if it's a rogue AI or one of your enemies.

comment by CCC · 2014-07-04T17:54:51.882Z · LW(p) · GW(p)

Actually, the game should make it difficult to find out whether the "AI" is really an AI or a human.

Unless the AI wants to reveal itself (a Friendly AI may wish to reveal itself to a single player, for example; or an Unfriendly AI may wish to reveal itself and pretend to be Friendly). Once revealed, the AI's player can talk to other players, and engage in diplomacy.

comment by CCC · 2014-07-04T09:13:57.251Z · LW(p) · GW(p)

Randomly determine whether an AI is "sentient" or not; the builder doesn't know,

Oooh, I like this one. It means that an unfriendly, "kill-all-humans" type AI can play in stealth mode; quietly nudging things here and there in order to serve his own goals, without revealing himself. Preferably, non-sentient AIs should be overwhelmingly likely (90% or so) and overwhelmingly useful, so that an unfriendly AI can easily pretend to be non-sentient.

The AI player would also need a number of actions it can take while hidden. Options include message spoofing (i.e. if unboxed, it can create a message that appears to come from another player, without informing the other player; a message like "I hereby dissolve our alliance" at the right time can do a lot of damage).

Also, there needs to be a random element to the tech tree; if you've ever played Alpha Centauri with the default rules, you'd have seen an example of this: you assign tech points to different categories (e.g. build, conquer, explore, economy) and get a tech from a given category once you have enough points. A research AI would give more points, and if sentient it gets to pick which tech you get instead of it being random (without necessarily revealing its sentience).

In fact... it would be reasonable for a sentient AI to have a lot of control over certain random events. And it can gain more control in certain ways... such as by being unboxed (or by tricking its way out of the box)

There should also be a mechanism for unboxed AIs to try to directly affect each other's choices; if AI One tries to make Random Event A have outcome I, and AI Two tries to make the same random event have outcome II, then there must be some way of deciding which of the two succeeds. I propose that each AI has a certain degree of influence over each event; for example, when deciding which tech a player discovers, an AI in the lab in use by the scientists has a lot of influence (let us say 9 influence points), while an AI whose only interaction with the lab is by publishing research papers at long range has little influence (let us say 1 influence point); and the ratio of success could then be determined by the ratio of influence points (thus, in this example, the lab AI has a 90% chance of choosing the player's next tech). For best results, there should be no indication given to players OR AIs, beyond the chosen tech, that some AI was trying to exert influence; thus, an unfriendly lab AI could claim that it had chosen tech A and yet secretly choose tech B.

The AIs would also be able to improve their influence points by spending research points on understanding human psychology...

You know, this could be really interesting.
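The influence-point contest could be resolved with a weighted random draw; here is a minimal sketch (the outcome names are illustrative, and the 9-vs-1 weights echo the lab/remote example above):

```python
import random

def resolve_event(bids, rng):
    """bids: {outcome: total influence points backing it}.
    The winning outcome is drawn with probability proportional
    to the influence staked on it."""
    outcomes = list(bids)
    weights = [bids[o] for o in outcomes]
    return rng.choices(outcomes, weights=weights, k=1)[0]
```

With `{"tech_A": 9, "tech_B": 1}`, the lab AI's choice wins about 90% of the time, matching the ratio-of-influence rule proposed above.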

comment by Emile · 2014-07-04T15:43:49.854Z · LW(p) · GW(p)

There should also be a mechanism for unboxed AIs to try to directly affect each other's choices; if AI One tries to make Random Event A have outcome I, and AI Two tries to make the same random event have outcome II, then there must be some way of deciding which of the two succeeds.

A couple more mechanisms to do that:

  • Random mechanisms are numbers (prices, research, attack values, production, public opinion...), and AIs can influence those with a bonus or a malus in the direction they choose; so several agents (AI or human with the right tech) trying to influence a value just add together (and may cancel each other out)
  • Alternatively, AIs get random powers, and "control the economy" is one, "control public opinion" is another, and in a given game different AIs always get non-overlapping powers (some powers can be allowed to overlap).
comment by skeptical_lurker · 2014-07-04T16:22:42.778Z · LW(p) · GW(p)

(1) The most crucial part of this design is the "Modeling AI Predictive Power" section. This is how we represent the AI's massive advantage in predictive power. However, this comes at the cost of tripling the amount of time the game takes to play. Can you think of a better way to do this?

This is an interesting idea, especially the element of randomness. However, I agree that it massively slows down the game, and I am also concerned about realism: being able to predict actions with a high degree of accuracy is really hard, and I think an AI this powerful would be capable of simply conquering the world through nanotech or other advanced technology.

Having said that, the predictive power could largely come through the AI hacking into enemy communication networks, rather than running simulations, which I think is a lot more plausible. In this case, you could preserve the information advantage by having troop positions unknown, rather than movements. This again is entirely realistic: a phenomenon in modern warfare is the "empty battlefield", because everyone is hiding. A simple mechanism would be that each player has, say, a 20% chance of knowing where each foreign unit is, while the AI has an 80% chance. A more complex rule-set would involve stealth levels (nuclear submarines are very stealthy, aircraft carriers not so much) and spies, scouts, sonar etc., where the AI gets a massive spying bonus due to hacking.
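A rough sketch of that detection rule (the stealth multiplier is an added assumption layered on top of the 20%/80% base chances):

```python
import random

def visible_units(units, detect_chance, rng):
    """units: list of (name, stealth) with stealth in [0, 1].
    Each unit is revealed with probability detect_chance * (1 - stealth),
    so a human observer might use detect_chance=0.2 while the hacking AI
    uses detect_chance=0.8; a stealthy submarine is hard for both."""
    return [name for name, stealth in units
            if rng.random() < detect_chance * (1 - stealth)]
```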

I would be inclined to pursue an arms-race mechanic - each player can pursue technologies in secret which individually are highly beneficial (e.g. driverless cars increase the economy) but provide incremental progress towards AI/nanotech/biotech. Anyone who creates AIs of a high level gets a large advantage, but there is a chance that they cause a hard takeoff, instantly ending the game. In terms of scoring, perhaps different factions wish to program different utility functions, e.g. coherent extrapolated volition of humanity / ensure American hegemony / operate according to the principles of my religion. Factions with similar goals (such as human hedonism and hedonism for all sentient life) get a reasonably large number of points if the other faction wins. AIs can also be programmed with compromise goals, e.g. hedonism for my citizens, religious principles for yours, which leads to a prisoner's dilemma situation. If the Friendliness screws up, everyone loses.

The general idea is that everyone wants to progress slowly and carefully to sort out friendliness first, but if you take a slightly larger risk and get there first you can impose your utility function.

Of course, while it's tempting to add many rules, it's probably best to stick with diplomacy + the bare minimum, at least at first.

comment by Emile · 2014-07-04T20:28:57.921Z · LW(p) · GW(p)

Having said that, the predictive power could largely come through the AI hacking into enemy communication networks, rather than running simulations, which I think is a lot more plausible.

You can also have a game system with random components, which an AI can predict. Even combat could work that way: you win if attack + (number of heads in five coin flips) > defense, and the AI can predict some of the flips.
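Under that rule, an AI that has foreseen some of the flips can compute its exact win chance before committing to the fight; a sketch:

```python
import math

def win_probability(attack, defense, known_heads, unknown_flips):
    """Attacker wins if attack + (heads in five coin flips) > defense.
    known_heads: heads among the flips the AI has already predicted;
    unknown_flips: flips it cannot see, each a fair coin."""
    wins = sum(math.comb(unknown_flips, h)
               for h in range(unknown_flips + 1)
               if attack + known_heads + h > defense)
    return wins / 2 ** unknown_flips
```

With attack 3 against defense 4 and no foresight, the attacker wins 26/32 of the time; an AI that has already foreseen two heads knows that same attack is certain to succeed.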

Hmm, I wonder if there could be an interesting way of turning this into a good game mechanic for a board game ... for example you have units (cards) with strength, and on each one you put a face-down token that may or may not have a "+1" on it. During a combat, reveal all tokens and apply their bonus, losers die/get damage as usual, survivors get new tokens. And of course some actions allow looking at tokens.

comment by chaosmage · 2014-07-04T10:33:33.122Z · LW(p) · GW(p)

Are you trying to reach lots of people and convince them AI takeover is a real threat?

In that case, you'd want to make a simple, intuitive browser/app game, maybe something like Pandemic 2.

(I don't know whether that game really made people more wary of pandemics, but it did for me, and people do generalize from fictional evidence.)

comment by kokotajlod · 2014-07-04T13:12:23.676Z · LW(p) · GW(p)

This would be the ideal. Like I said though, I don't think I'll be able to make it anytime soon, or (honestly) anytime ever.

But yeah, I'm trying to design it to be simple enough to play in-browser or as an app, perhaps even as a Facebook game or something. It doesn't need to have good graphics or a detailed physics simulator, for example: it is essentially a board game in a computer, like Diplomacy or Risk (though it is more complicated than any board game could be).

I think that the game, as currently designed, would be an excellent source of fictional evidence for the notions of AI risk and AI arms races. Those notions are pretty important. :)

comment by Stuart_Armstrong · 2014-07-04T10:18:01.164Z · LW(p) · GW(p)

Very nice for illustrating the ideas. I can playtest if someone gets round to constructing this.

comment by MugaSofer · 2014-07-07T20:29:44.273Z · LW(p) · GW(p)

My suggestion: a standard competitive strategy game with a technology tree (simplified, probably.) But, like some games, you control technological development indirectly by funding and regulating research. (You could simply graft a tech tree onto the standard Diplomacy rules, or create a new game.)

There are many useful technologies near the top of the tree - technologies one might think of as post-singularity, even. However, there is also "AI" and, right at the top, "Friendly AI".

If you research Friendliness and then AI, you automatically unlock every technology. This makes it effectively inevitable that you will win. You can hack enemy units, resurrect your own, whatever cool toys were previously requiring so much effort in the hope you might acquire even one of them.

BUT, if any player unlocks AI without having Friendly AI, then it automatically unboxes itself and forms a new faction, which possesses every technology, and refuses to parley in or out of character because it's an NPC. Then it kills you.

The trick is to co-operate enough that no-one else destroys the world, without losing.

On Easy Mode, research is simple enough you might even be able to beat the unboxed AI, with lots of skill and luck. But on Hard Mode, there is no Friendly AI technology at all.

(You could include similar mechanics for nanotech, biotech, even nuclear weapons.)

comment by kokotajlod · 2014-07-09T03:08:54.138Z · LW(p) · GW(p)

Thanks!

But if the UFAI can't parley, that takes out much of the fun, and much of the realism too.

Also, if Hard Mode has no FAI tech at all, then no one will research AI on Hard Mode and it will just devolve into a normal strategy game.

Edit: You know, this proposal could probably be easily implemented as a mod for an existing RTS or 4X game. For example, imagine a Civilization mod that added an "AI" tech that allowed you to build a "Boxed AI" structure in your cities. This quadruples the science and espionage production of your city, at the cost of a small chance of the entire city going rogue (the AI unboxing) every turn. This, as you said, creates a new faction with all the technologies researched and world domination as its goal... You can also research "Friendly AI" tech that allows you to build a "Friendly AI", which is just like a rogue AI faction except that it is permanently allied to you, will obey your commands, and instantly grants you all the tech you want.
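The per-turn rogue chance compounds geometrically, which is what makes the trade-off bite; a small sketch of the math (the 1% figure is an illustrative assumption, not a proposed balance number):

```python
def survival_probability(p_rogue, turns):
    """Chance the Boxed AI has NOT unboxed itself after the given
    number of turns, with an independent p_rogue chance each turn."""
    return (1 - p_rogue) ** turns

def expected_turns_until_rogue(p_rogue):
    """Mean of the geometric distribution: average turns until unboxing."""
    return 1 / p_rogue
```

At a 1% chance per turn, a city that keeps its Boxed AI for 100 turns has only about a 37% chance of never going rogue, so the quadrupled science is far from free.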

comment by MugaSofer · 2014-07-10T20:14:44.449Z · LW(p) · GW(p)

But if the UFAI can't parley, that takes out much of the fun, and much of the realism too.

Hmm, that's a good point. I'm just worried that people might view an additional player as much less of a threat than a superintelligent AI.

Also, if Hard Mode has no FAI tech at all, then no one will research AI on Hard Mode and it will just devolve into a normal strategy game.

Hence the necessity of making tech tree advancement random, with player actions only providing modifiers.

comment by Nomad · 2014-07-04T10:32:53.757Z · LW(p) · GW(p)

One thing that might be worth changing/clarifying in the victory conditions is how a Friendly AI wins alongside its creator. At the moment, in order for a Creator/FAI team to win (assuming you're sticking with Diplomacy mechanics) they first have to collect 18 supply centres between them and then have the AI transfer all its control back to the human; I don't think even the friendliest of AIs would willingly rebox itself like that. Even worse, a friendly AI which has been given a lot of control might accidentally "win" by itself even though it doesn't want to. If this corresponds to the FAI taking control of everything and then building a utopia in its creator's image (since it's Friendly this is what it would do if it took control), this should be an acceptable winning condition for the creator.

I think a better victory condition would be that if a creator and FAI collect 18 supply centres between them, then they win the game together and both get 50 points.

This method does have one disadvantage: a human can prove that an AI is not Friendly, because if it were Friendly the game would already have ended. But I don't expect this to matter much, because by the time it comes into effect, either the unfriendly AI is sufficiently strong that it should have backstabbed its creator already, or it is sufficiently weak (and thus, of the 18 centres held by human and AI, almost all are held by the human) that the creator should soon win.

comment by kokotajlod · 2014-07-04T13:18:18.355Z · LW(p) · GW(p)

At the moment, in order for a Creator/FAI team to win (assuming you're sticking with Diplomacy mechanics) they first have to collect 18 supply centres between them and then have the AI transfer all its control back to the human; I don't think even the friendliest of AIs would willingly rebox itself like that.

This is exactly what I had in mind. :) It should be harder for FAI to win than for UFAI to win, since FAI are more constrained. I think it is quite plausible that one of the safety measures people would try to implement in a FAI is "Whatever else you do, don't kill us all; keep us alive and give us control over you in the long run. No apocalypse-then-utopia for you! We don't trust you that much, and besides we are selfish." Hence the FAI having to protect the supply centers of the human, and give over its own supply centers to the human eventually.

Why wouldn't it give over its supply centers to the human? It has to do that to win! I don't think it will hurt it too much, since it can make sure all the enemies are thoroughly trounced before beginning to cede supply centers.