NES-game playing AI [video link and AI-boxing-related comment]
post by Dr_Manhattan · 2013-04-12T13:11:36.365Z · LW · GW · Legacy · 22 comments
"Pretty simple" algorithm playing games quite impressively.
http://www.youtube.com/watch?v=xOCurBYI_gY
First, this is awesome - enjoy!
Paper here: http://www.cs.cmu.edu/~tom7/mario/mario.pdf
One interesting observation Tom Murphy makes is that the AI found and exploited playable bugs in the game that are not (commonly) known to human players. I think it's a good example to have on hand of the kind of exploits a really smart AI might look for in order to win.
22 comments
comment by Larks · 2013-04-12T13:33:27.723Z · LW(p) · GW(p)
Also, the behaviour at the very end is very much a case of "oh, we didn't expect that to go on forever when we wrote that utility function".
Replies from: Dr_Manhattan, CarlShulman
↑ comment by Dr_Manhattan · 2013-04-12T14:59:17.861Z · LW(p) · GW(p)
Wireheading :)
After taking part 1 of the Berkeley AI course, it feels like "getting stuck forever" is one of the standard game-AI bugs. Makes you wonder if some of the mental mechanisms we have evolved are there to handle that.
Replies from: JoshuaZ
↑ comment by CarlShulman · 2013-04-12T15:57:48.791Z · LW(p) · GW(p)
It was published on April 1.
But he certainly expected the "increment any counters" objective function would not map to perfect play, at least by the time he got to Tetris. It was more of a case of "well, this algorithm is probably not going to beat any of these games, but I'm not sure of all the ways that it will fail."
Replies from: Larks
↑ comment by Larks · 2013-04-12T16:45:48.787Z · LW(p) · GW(p)
I don't understand why the algorithm didn't pick up on his Tetris score monotonically increasing. Unless he was just such a bad player that the number of rows also monotonically increased?
Replies from: None, CarlShulman
↑ comment by [deleted] · 2013-04-12T16:50:57.500Z · LW(p) · GW(p)
It did pick up on his score increasing. But you get a few points just for putting a block on top of another block, and the search didn't look far enough ahead (or wasn't comprehensive enough) to spot that making lines would give even more points.
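A toy sketch of that horizon effect (the reward numbers here are invented for the illustration, not taken from the game or the paper):

```python
# Toy numbers, made up for this example -- not from the game or the paper.
plans = {
    "stack anywhere": [5, 5, 5, 5, 5, 5],    # a few points per placement
    "build a line":   [0, 0, 0, 100, 5, 5],  # nothing until the line clears
}

def visible_value(rewards, horizon):
    # Only reward that falls inside the search horizon counts.
    return sum(rewards[:horizon])

for horizon in (2, 6):
    best = max(plans, key=lambda name: visible_value(plans[name], horizon))
    print(f"horizon={horizon}: prefers '{best}'")
# horizon=2 prefers 'stack anywhere'; horizon=6 prefers 'build a line'.
```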
↑ comment by CarlShulman · 2013-04-12T16:59:11.784Z · LW(p) · GW(p)
I was referring to the "pause just before dying" behavior, which would have persisted even with enough search depth to make lines.
Replies from: Raemon
↑ comment by Raemon · 2013-04-12T17:20:45.229Z · LW(p) · GW(p)
Replies from: Benito
↑ comment by Ben Pace (Benito) · 2013-04-13T00:52:30.588Z · LW(p) · GW(p)
I prefer the line where he implies eating a magic mushroom is Mario's drugs.
comment by lukeprog · 2013-04-12T16:28:39.094Z · LW(p) · GW(p)
Tom's Mario page.
Tom's blog.
He promises to post more videos when he gets back from Japan.
I LOLed, for real, at the end of the video.
Replies from: latanius
↑ comment by latanius · 2013-04-13T01:02:22.357Z · LW(p) · GW(p)
Also, Tom's academic website. It's the coolest academic website I've ever seen.
comment by latanius · 2013-04-13T01:32:27.318Z · LW(p) · GW(p)
Wow. First thought: who is this guy who submits a really cool scientific result to a venue like SIGBOVIK? He could have sent it to a real conference! No one has ever tried this before!
Then I checked out his website. The academic one. And the others.
Well, short description: "Superhero of Productivity". The list of stuff he created doesn't fit on his site. Sites. Also, see this remark of his:
One of the best things about grad school was that if you get your work done then you get to do other stuff too.
(I'm also at CS grad school, am happy if I have time to sleep, and my only productive output is... LW comments... does that count?)
comment by latanius · 2013-04-13T06:32:57.457Z · LW(p) · GW(p)
This thing looks more and more relevant the more I think about it. What it does is not just optimize an objective function in a weird and unexpected way, but actually learn it, in all its complexity, from observed human behavior.
Would it be an overstatement to call this an FAI research paper?
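A minimal sketch of that learning step as I read the paper: record RAM snapshots while a human plays, then keep the byte locations whose values only ever go up, and treat "make those bytes go up" as the objective. (The real Learnfun also builds lexicographic orderings of multi-byte counters; none of this is Tom's actual code.)

```python
def find_increasing_bytes(snapshots):
    """snapshots: list of bytes objects, one RAM dump per recorded frame."""
    candidates = set(range(len(snapshots[0])))
    for prev, curr in zip(snapshots, snapshots[1:]):
        candidates = {i for i in candidates if curr[i] >= prev[i]}
    # Locations that never change at all carry no signal; drop them.
    first, last = snapshots[0], snapshots[-1]
    return sorted(i for i in candidates if last[i] > first[i])

def learned_objective(ram, locations):
    """Score a RAM state by the counters learned from the human trace."""
    return sum(ram[i] for i in locations)
```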
Replies from: Baughn
↑ comment by Baughn · 2013-04-14T13:24:51.740Z · LW(p) · GW(p)
AI research paper? Maybe not.
What's friendly about this AI?
Replies from: latanius, pjeby
↑ comment by latanius · 2013-04-14T19:42:54.885Z · LW(p) · GW(p)
The point is that it's not, but making it so is a design goal of the paper.
Example: Mario immediately jumping into a pit at level 2. According to the learned utility function of the system, it's a good idea. According to ours, it's not.
Just as with optimizing smiling faces. But while that one was purely a thought experiment, this paper presents a practical, experimentally testable benchmark for utility function learning, and, by the way, shows a not-yet-perfect but working solution for it. (After all, Mario's Flying Goomba Kick of High Munchkinry definitely satisfies our utility functions.)
↑ comment by pjeby · 2013-04-15T04:57:03.742Z · LW(p) · GW(p)
What's friendly about this AI?
Nothing. It's mostly useful to illustrate cognitive biases around AI, demonstrating how alien a simple "utility"-maximizing process is, compared to how humans think about things. It's an example answer to the standard, "But my AI wouldn't do a stupid thing like that" objection. Well, yes, actually, it would. And the simpler and more elegant your design is, the higher the probability that it will do things like that: things you don't even think about because to a human, they're obviously stupid. (At the same time, of course, it will also do things that seem utterly brilliant to a human, for the exact same reason: finding that brilliant move first required doing something stupid, like jumping at an enemy.)
It also illustrates some decision theory concepts, like looking into the future to see how your actions fare, and the importance of matching the machine's "utility" with a human's utility. (In each game, the actual game utility differs in certain ways from the simple utility function derived from scoring, and it's these differences that create the bad-weird moves.)
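A hedged sketch of that look-into-the-future loop. The emulator object (with save/load/step/ram methods) is a hypothetical stand-in, not the paper's actual interface:

```python
from itertools import product

# "start" is the pause button; the other entries are ordinary moves.
BUTTONS = ["left", "right", "A", "right+A", "start"]

def best_plan(emulator, objective, depth=3):
    """Try every short input sequence and keep the highest-scoring one."""
    best_score, best_inputs = float("-inf"), None
    checkpoint = emulator.save()
    for inputs in product(BUTTONS, repeat=depth):
        for button in inputs:
            emulator.step(button)
        score = objective(emulator.ram())
        emulator.load(checkpoint)
        if score > best_score:
            best_score, best_inputs = score, inputs
    # When every unpaused future loses points (Mario about to die), the
    # paused futures -- in which the counters simply freeze -- score highest.
    # That is the machine/human utility mismatch in one line.
    return best_inputs
```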
comment by roystgnr · 2013-04-12T18:54:12.112Z · LW(p) · GW(p)
I would like to see more discussion of this sort of "AI boxing".
Typically the implicit assumption I've seen is that "The AI is in a box where a limited communication channel enables it to learn about the real universe and have conversations with its designers", which I'm convinced is unacceptably fragile.
I think a version like "The AI is in a box where the laws of physics are little more than a video game, it doesn't get any direct information about the real world, and we just watch its behavior in the game world to make inferences about it" might be more interesting. Occam's Razor isn't designed to be proof against overwhelming deception like this, and so it might not be too hard to push any credence the AI gives to "this world is an illusion generated within a much more complex outer world" down to negligible values.
Replies from: Luke_A_Somers
↑ comment by Luke_A_Somers · 2013-04-13T03:03:08.580Z · LW(p) · GW(p)
I think this is more AI kickboxing.
comment by Douglas_Knight · 2013-04-22T22:50:18.151Z · LW(p) · GW(p)
I thought the most interesting observation was in the paper and not in the video, namely the use of pause in Bubble Bobble. As in Tetris, it often found that it was happier with the results if it paused than if it didn't, but each paused step let it search again for good moves, and since it wasn't trapped as it was in Tetris, it eventually found them.
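A speculative sketch of that pause-and-retry dynamic, reusing the hypothetical best_plan and emulator from the sketch above (my gloss, not the paper's algorithm):

```python
def play(emulator, objective, frames):
    for _ in range(frames):
        plan = best_plan(emulator, objective)
        checkpoint = emulator.save()
        for button in plan:
            emulator.step(button)
        planned_score = objective(emulator.ram())
        emulator.load(checkpoint)
        if planned_score >= objective(emulator.ram()):
            emulator.step(plan[0])   # the plan at least holds the score: act
        else:
            # Stall: pausing freezes the counters, and the real (randomized)
            # search gets another chance next frame. In Bubble Bobble a later
            # attempt finds a good move; in Tetris the board stays hopeless.
            emulator.step("start")
```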
comment by Metus · 2013-04-13T09:35:37.381Z · LW(p) · GW(p)
I would be interested in seeing more of the learning process for Pac-Man; I guess the algorithm only ran for a couple of iterations. Also, could we run an experiment with more complicated games, like Doom? There is an obvious thing to count there, namely the number of enemies killed. Chess? Maybe even some poker?
I will have to read that paper.
Replies from: gwern
↑ comment by gwern · 2013-04-13T17:30:35.797Z · LW(p) · GW(p)
Also, could we run an experiment with more complicated games, like Doom? There is an obvious thing to count there, namely the number of enemies killed.
There is an obvious way to count, yes, but can it be easily located in the raw RAM of a running Doom instance? That's the point here: automatically inferring a measure of progress from somewhere in the raw binary blob.
If you have to explicitly define a reward counter, then you're just doing normal reinforcement-learning or AI kinda stuff, and you might as well go use a non-joke agent like AIXI-MC (already plays Pacman) or something with a more straightforward and less hilariously awesomely bizarre design.
Plus from the paper, it sounds like there are serious performance and scalability issues on just the small NES games he was using, so it may not be feasible to run on as big a program as Doom without real work like using a cluster.
comment by lukeprog · 2013-11-07T23:03:03.547Z · LW(p) · GW(p)
New video by Tom7! NES AI Learnfun & Playfun, ep. 2: Zelda, Punch-Out, stocks, etc.