The Darwin Results
post by Zvi · 2017-11-25
Epistemic Status: True story (numbers are best recollections)
This is post three in the sequence Zbybpu’f Nezl.
It was Friday night and time to play The Darwin Game. Excited players gathered around their computers to view the scoreboard and message board.
In the first round, my score went up slightly, to something like 109 from the starting 100. One other player had a similar score. A large group scored around 98. Others did poorly to varying degrees, with one doing especially poorly. That one played all 3s.
Three, including David, shot up to around 130.
If it isn't obvious what happened, take a minute to think about it before proceeding.
The CliqueBots had scores of 98 or so. They quickly figured out what happened.
David lied. He sent the 2-0-2 signal, and cooperated with CliqueBots, but instead of playing all 3s against others, he and two others cooperated with others too.
CliqueBots had been betrayed by MimicBots. The three defectors prospered, and the CliqueBots would lose.
Without those three members, the CliqueBots lacked critical mass. Members would die slowly, then increasingly quickly. If the three defectors had submitted CliqueBots, the CliqueBots would have grown in the first round, reaching critical mass. The rest of us would have been wiped out.
Instead, the three defectors would take a huge early lead, and the remaining members would constitute, as our professor put it, their 'packed lunch.'
The opening consisted of the CliqueBots being wiped out, along with the G-type weirdos, A-type attackers, and D-type cooperators that got zero points from the CliqueBots.
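Mechanically, David's betrayal amounts to a MimicBot: open with the clique's 2-0-2 handshake, then cooperate with everyone whether or not they echoed it. Here is a minimal sketch, assuming the rules from earlier in the sequence (each turn both programs name an integer from 0 to 5; if the numbers sum to 5 or less each scores its own number, otherwise both score 0) and the randomized symmetry-breaking cooperation described earlier. The function shape and names are my own invention, not the actual submission:

```python
import random

SIGNAL = [2, 0, 2]  # the CliqueBot recognition handshake


def mimicbot_move(my_history, their_history):
    """One turn of a hypothetical MimicBot.

    Assumed rules: both programs pick 0-5 each turn; if the sum is
    at most 5 each scores its own number, otherwise both score 0.
    """
    turn = len(my_history)
    if turn < len(SIGNAL):
        return SIGNAL[turn]  # turns 1-3: send 2-0-2 like a loyal CliqueBot
    # Unlike a CliqueBot, cooperate with everyone afterward, signal or
    # no signal: once the last turn split 2/3, alternate forever so
    # every turn sums to exactly 5; until then, randomize to break
    # the symmetry against copies of yourself.
    if {my_history[-1], their_history[-1]} == {2, 3}:
        return 5 - my_history[-1]
    return random.choice([2, 3])
```

Against a loyal CliqueBot this looks identical through turn three; the difference only shows in how it treats programs that never sent the signal.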
Meanwhile, on the message board, the coalition members were pissed.
Everyone who survived into the middle game cooperated with everyone else. Victory would come down to efficiency, and size boosted efficiency. Four players soon owned the entire pool: Me and the three defectors.
I thought I had won. The coalition members had wasted three turns on 2-0-2; nothing could make up for that. My self-cooperation was far stronger, and when we met I would outscore them over the first two turns thanks to their 0. It wouldn't go fast, but I would grind them out.
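The size of that handicap is easy to put a number on, under the rules assumed above: two signalers meeting each other score 2 + 0 + 2 = 4 points each over the first three turns, while two cooperators already alternating 2 and 3 score 2 + 3 + 2 = 7 (or 8, for whoever opens with the 3). A rough sketch of the comparison, my own arithmetic rather than anything from the game logs:

```python
def turn_score(mine, theirs):
    # Assumed rules: each side picks 0-5; you score your own number
    # if the two picks sum to at most 5, otherwise you score 0.
    return mine if mine + theirs <= 5 else 0


# Two clique members exchanging the 2-0-2 handshake:
clique = sum(turn_score(m, t) for m, t in zip([2, 0, 2], [2, 0, 2]))

# Two cooperators already locked into the 2/3 alternation:
coop = sum(turn_score(m, t) for m, t in zip([2, 3, 2], [3, 2, 3]))

print(clique, coop)  # 4 vs 7: a roughly 3-point deficit every time
                     # two signalers meet, before cooperation starts
```

Compounded over every pairing, every generation, that deficit is what I expected to grind them out with.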
It did not work out that way. David had the least efficient algorithm and finished fourth, but I was slowly dying off as the game ended after round 200. Maybe there was a bug or mistake somewhere. Maybe I was being exploited a tiny bit in the early turns, in ways that seem hard to reconcile with the other program being efficient. I never saw their exact programs, so I'm not sure. I'd taken this risk, being willing to be slightly outscored in early turns to better signal and get cooperation, so that's probably what cost me in the end. Either way, I didn't win The Darwin Game, but did survive long enough to win the Golden Shark. If I hadn't done as well as I did in the opening I might not have, so I was pretty happy.
Many of us went to a class party at the professor's apartment. I was presented with my prize, a wooden block with a stick glued on, at the top of which was a little plastic shark, with plaque on the front saying Golden Shark 2001.
All anyone wanted to talk about was how awful David was, and how glad they were that I had won while not being him. They loved that my core strategy was so simple and elegant.
I tried gently pointing out David's actions were utterly predictable. I didn't know about the CliqueBot agreement, but I was deeply confused how they didn't see this 'betrayal' coming a mile away. Yes, the fact that they were only one or two CliqueBots short of critical mass had to sting, but was David really going to leave all that value on the table? Even if betraying them hadn't been the plan all along?
They were having none of it. I didn't press. Why spoil the party?
Several tipping points could have led to very different outcomes.
If there had been roughly two more loyal CliqueBots, the CliqueBots would have snowballed. Everyone not sending 2-0-2 would have been wiped out, in order of how much they gave in to the coalition (which in turn would have accelerated the coalition's victory). Betrayers would have had bigger pools, but from there all would cooperate with all, and victory would come down to whether anyone tweaked their cooperation algorithms to be slightly more efficient. David's betrayal may have cost him the Golden Shark.
If someone had said out loud "I notice that anyone who cares about winning is unlikely to submit the CliqueBot program, but instead will start 2-0-2 and then cooperate with others anyway" perhaps the CliqueBots reconsider.
If enough other players had played more 2s against the CliqueBots, as each of us was individually rewarded for doing, the CliqueBots would have won. If the signal had been 2-5-2 instead of 2-0-2, preventing rivals from scoring points on turn two, that might have been enough.
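The difference that middle number makes is concrete, under the same assumed rules: when a signaler plays 0 on turn two, an outsider playing 5 collects the full 5 points (5 + 0 ≤ 5). Against a 5, any outsider response of 1 or more pushes the sum past 5 and scores nothing, and responding 0 scores nothing by definition. A quick check, my own illustration:

```python
def turn_score(mine, theirs):
    # Assumed rules: pick 0-5; score your own number if the picks
    # sum to at most 5, otherwise score 0.
    return mine if mine + theirs <= 5 else 0


# Turn two of the handshake, from the outsider's point of view:
vs_202 = max(turn_score(m, 0) for m in range(6))  # signaler plays 0
vs_252 = max(turn_score(m, 5) for m in range(6))  # signaler plays 5

print(vs_202, vs_252)  # 5 vs 0: the 0 hands rivals five free points
```

So 2-5-2 would have cost loyal members nothing against each other while denying every rival that turn-two windfall.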
If I had arrived in the late game with a slightly larger pool, I would have snowballed and won. If another player had submitted my program, we would each have ended up with half the pool.
Playing more 2s against attackers might have won me the entire game. It also might have handed victory to the CliqueBots.
If I had played a better split of 2s and 3s at the start, the result would have still depended on the exact response of other programs to starting 2s and 3s, but that too might have been enough.
Thus these paths were all possible:
The game ended mostly in MimicBots winning from the momentum they got from the CliqueBots.
It could have ended in an EquityBot (or even a DefenseBot) riding its efficiency edge in the first few turns to victory after the CliqueBots died out. Scenarios with far fewer CliqueBots end this way; without the large initial size boost, those first three signaling turns are a killer handicap.
It could have ended in MimicBots and CliqueBots winning together and dividing the pool. This could happen even if their numbers declined slightly early on, if they survived long enough while creating sufficient growth of FoldBot.
CliqueBots could have died early but sufficiently rewarded FoldBots to create a world where a BullyBot could succeed, and any BullyBot that survived could turn around and win.
It could have had MimicBots and CliqueBots wipe out everyone else, then ended in victory for very subtle MimicBots, perhaps that in fact played 3s against outsiders, that exploited the early setup turns to get a tiny edge. Choosing an algorithm that can't be gamed this way would mean choosing a less efficient one.
In various worlds with variously sized initial groups of CliqueBots and associated MimicBots, and various other programs, the correct program to submit might be a CliqueBot, a MimicBot that attacks everyone else but cheats on the coordination algorithm, a MimicBot that also cooperates with others, a BullyBot with various tactics, an EquityBot with various levels of folding, or a FoldBot. There are even scenarios where all marginal submissions lose, because the program that would win without you is poisoning the pool for its early advantage, so adding another similar program kills you both.
This is in addition to various tactical settings and methods of coordination that depend on exactly what else is out there.
Everyone's short term interest in points directly conflicts with their long term goal of having a favorable pool. The more you poison the pool, the better you do now, but if bots like you poison the pool too much, you'll all lose.
There is no 'right' answer, and no equilibrium.
What would have happened if the group had played again?
If we consider it only as a game, my guess is that this group would have been unable to trust each other enough to form a coalition, so cooperative bots in the second game would send no signal. Since cooperative bots won the first game, most entries would be cooperative bots. Victory would likely come down to who could get a slight edge during the coordination phase, and players would be tempted to enter true FoldBots and otherwise work with attackers, since they would expect attackers to die quickly. So there's some chance a well-built BullyBot could survive long enough to win, and I'd have been tempted to try it.
If we include the broader picture, I would expect an attempt to use out-of-game incentives to enforce the rules of a coalition. The rise of a true CliqueBot.
I spent so long on the Darwin Game story and my thinking process about it for several reasons.
One, it's a fun true story.
Two, it's an interesting game for its own sake.
Three, it's a framework we can extend and work with, one with a lot of nice properties. There's lots to maximize and balance at different levels, no 'right' answer and no equilibrium. It isn't obvious what to reward and what to punish.
Four, it naturally ties your current decisions to your future and past decisions, and to what the world looks like and what situations you will find yourself in.
Five, it was encountered 'in the wild' and doesn't involve superhuman-level predictors. A natural objection to models is 'you engineered that to give the answer you want'. Another is 'let's figure out how to fool the predictor.' Hopefully minimizing such issues will help people take these ideas seriously.
There are many worthwhile paths forward. I have begun work on several. I am curious which ones seem most valuable and interesting, or where people think I will go next, and encourage such discussion and speculation.