I went and ran this another 100 times, so I could see what the results look like with the variance averaged out. The mean scores are:
A: 32.03
B: 28.53
C: 32.48
D: 24.94
E: 28.75
F: 29.62
G: 28.42
H: 26.12
I: 26.06
J: 26.10
K: 36.15
L: 27.21
M: 25.14
N: 34.37
O: 31.06
P: 26.55
Q: 34.95
R: 32.93
S: 37.08
T: 26.43
U: 24.24
If you're interested, here's the code for the test (it takes about a day to run) and the raw output from my run (it's in an inconvenient format, but it shows the stats for each matchup).
One good way to interpret code is to run it with "eval", which many of the submitted bots did. This method has no problem with the examples you gave. One important place it breaks down is with bots that behave randomly: a random bot may, by chance, be simulated to cooperate and defect in whatever sequence makes it seem worth cooperating with, even though it actually ends up defecting in the real round. This, combined with a little luck, is what let the random bots come out ahead. There are ways to get around this problem, and a few bots did so, but they still didn't do better than the random bots, because they had less potential for exploitation in this particular pool of entrants.
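To make the failure mode concrete, here's a minimal sketch in Python (the tournament's actual language and bot interface may differ; all names here are hypothetical). A simulator bot execs its opponent's source and mirrors the predicted move, but against a random bot each simulation is an independent coin flip, uncorrelated with the opponent's actual play:

```python
import random

COOPERATE, DEFECT = "C", "D"

# Hypothetical setup: each bot is a source string defining
# move(opponent_source) -> "C" or "D".
RANDOM_BOT = """
import random
def move(opp_src):
    return random.choice(["C", "D"])
"""

MY_SOURCE = "..."  # the simulator's own source would be quined in here

def simulator_move(opponent_source):
    """Exec the opponent's code, ask what it plays against us, and mirror it."""
    env = {}
    exec(opponent_source, env)           # "eval" the opponent's source
    prediction = env["move"](MY_SOURCE)  # its simulated move against me
    return COOPERATE if prediction == COOPERATE else DEFECT

# The failure mode: each simulation of RANDOM_BOT is a fresh coin flip,
# so the prediction says nothing about its move in the real round.
print(simulator_move(RANDOM_BOT))  # may predict cooperation...
print(simulator_move(RANDOM_BOT))  # ...while the real game move is another flip
```

One workaround is to simulate a random opponent many times and cooperate only if it cooperates in (nearly) every run, which is roughly what the few robust bots mentioned above did.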
You're right. K is a MimicBot with an additional check for proper quining. I primarily intended that check to make it defect against CooperateBots, RandomBots, and other bots that don't meaningfully simulate their opponent. I expected a lot more MimicBot variants and mutual cooperation...
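For concreteness, here's one plausible reading of that design as a Python sketch. The comment doesn't specify the actual check, so the test below (comparing the opponent's play against us with its play against a pure defector) is an assumption, and all names are hypothetical. A bot that ignores our source, like a CooperateBot or a RandomBot, tends to play the same move either way and gets defected against:

```python
COOPERATE, DEFECT = "C", "D"

DEFECT_BOT = """
def move(opp_src):
    return "D"
"""

def run(bot_source, opponent_source):
    """Exec a bot's source string and ask for its move against opponent_source."""
    env = {}
    exec(bot_source, env)
    return env["move"](opponent_source)

def mimic_move(my_source, opponent_source):
    # Hypothetical quining check: a bot that doesn't actually simulate us
    # plays the same move against us as against a pure defector. A real bot
    # would repeat this check to catch RandomBots reliably; mutual-simulation
    # infinite regress is also elided in this sketch.
    if run(opponent_source, my_source) == run(opponent_source, DEFECT_BOT):
        return DEFECT
    # Otherwise mimic: play whatever the opponent would play against us.
    return run(opponent_source, my_source)
```

Under this reading, two bots that both pass the check fall through to mutual mimicry and cooperate, which is the mutual cooperation among MimicBot variants the comment says it expected.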