Replicating Douglas Lenat's Traveller TCS win with publicly-known techniques
post by ChrisHallquist · 2013-10-26T15:47:03.222Z · LW · GW · Legacy · 16 comments
Douglas Lenat's program EURISKO is legendary in the AI community for a distinct real-world achievement: allowing Lenat to win the Traveller Trillion Credit Squadron (TCS) roleplaying game tournament two years in a row (and then semi-voluntarily bowing out of subsequent years). Lenat never released EURISKO's source code, leaving how he managed to pull off this feat something of a mystery. Yet Lenat's later work based on EURISKO does not seem to have yielded anything else in the way of practical benefits.
Some time ago on LessWrong, someone proposed trying to figure out what Lenat did and reimplementing EURISKO. But Eliezer is worried this could be dangerous. So I have another proposal: see if Lenat's accomplishment can be replicated using machine learning and genetic programming techniques that are already publicly known.
My suspicion is that Lenat's TCS win tells us more about TCS than about EURISKO: that TCS is likely a game inherently vulnerable to the "find winning strategies by simulating lots of games on a computer" meta-strategy. I've heard, for example, that battles are often tactically trivial, with their outcomes effectively determined by the composition of the two fleets (and fleet composition is what Lenat used EURISKO for). If that hypothesis is correct, it shouldn't be necessary to reimplement EURISKO specifically to get a program that's good at designing TCS fleets. If that turns out not to be the case, it would be evidence that there really is something special about EURISKO after all.
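To make that concrete, here is a minimal sketch of the kind of program I have in mind: a genetic algorithm that evolves fleet compositions under a budget cap, scored by simulated battles. Everything in it, the ship types, the costs, the budget, and especially `simulate_battle()`, is a hypothetical stand-in; a real attempt would have to implement the actual TCS construction and combat rules.

```python
# A sketch only: the ship types, costs, budget, and battle resolution below
# are invented stand-ins, not the real TCS rules.
import random

SHIP_TYPES = ["battleship", "cruiser", "fighter", "tanker"]  # hypothetical
COSTS = {"battleship": 20_000, "cruiser": 5_000, "fighter": 100, "tanker": 500}
BUDGET = 1_000_000  # stand-in for the trillion-credit limit

def random_fleet():
    """Greedily buy random ships until the budget runs out."""
    fleet, remaining = {}, BUDGET
    while remaining >= min(COSTS.values()):
        ship = random.choice([s for s in SHIP_TYPES if COSTS[s] <= remaining])
        fleet[ship] = fleet.get(ship, 0) + 1
        remaining -= COSTS[ship]
    return fleet

def simulate_battle(a, b):
    """Placeholder: True if fleet a beats fleet b. The real version would
    implement the (reportedly near-deterministic) TCS combat resolution."""
    return sum(a.values()) > sum(b.values())  # obviously wrong; stand-in only

def fitness(fleet, opponents):
    return sum(simulate_battle(fleet, o) for o in opponents)

def mutate(fleet):
    """Swap one ship purchase for another, staying under budget."""
    child = dict(fleet)
    victim = random.choice(list(child))
    child[victim] -= 1
    if child[victim] == 0:
        del child[victim]
    spent = sum(COSTS[s] * n for s, n in child.items())
    affordable = [s for s in SHIP_TYPES if COSTS[s] <= BUDGET - spent]
    if affordable:
        s = random.choice(affordable)
        child[s] = child.get(s, 0) + 1
    return child

population = [random_fleet() for _ in range(50)]
for generation in range(200):
    rivals = random.sample(population, 10)
    population.sort(key=lambda f: fitness(f, rivals), reverse=True)
    survivors = population[:25]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(25)]
print(population[0])
```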
Does anyone know if anyone has tried this? As a novice computer programmer, I think it might be a good project to hone my programming skills. Input on how to approach such a project would be appreciated.
16 comments
comment by passive_fist · 2013-10-26T22:00:47.720Z · LW(p) · GW(p)
A good step would be looking at Lenat's published work from around that time to get a feel for how he thought about AI and what strategies he was likely to implement. Lenat was a firm believer in symbolic AI, which means he worked almost entirely in Lisp. Another thing is that his systems were interactive: they relied to a large degree on user input, possibly in 'selecting' the right strategies, which the machine would then investigate in further depth and possibly try to optimize. For an example of this, rule 2 of the AM system is, "If the user has recently referred to X, then boost the priority of any tasks involving X."
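To illustrate the flavor of that rule (as a toy sketch only; AM was written in Lisp and its actual machinery was far richer), here's an agenda system where tasks involving a recently-mentioned concept get bumped up:

```python
# Toy illustration of the agenda-plus-heuristics style AM reportedly used.
# Not Lenat's code; the priorities and boost size are invented.
import heapq

class Agenda:
    def __init__(self):
        self.tasks = []            # max-heap via negated priorities
        self.recent_mentions = set()

    def note_user_mention(self, concept):
        self.recent_mentions.add(concept)

    def add_task(self, priority, concept, description):
        # Heuristic: "If the user has recently referred to X, then boost
        # the priority of any tasks involving X."
        if concept in self.recent_mentions:
            priority += 100
        heapq.heappush(self.tasks, (-priority, concept, description))

    def next_task(self):
        _, concept, description = heapq.heappop(self.tasks)
        return concept, description

agenda = Agenda()
agenda.note_user_mention("primes")
agenda.add_task(50, "sets", "look for examples of set union")
agenda.add_task(40, "primes", "look for examples of prime pairs")
print(agenda.next_task())  # the primes task wins despite its lower base priority
```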
Lenat himself said that the Traveller TCS solution was 60% himself and 40% Eurisko, and given that he was trying to promote his software, the 40% credited to Eurisko is probably an overestimate. It's likely that Lenat won Traveller TCS, not Eurisko, and that Eurisko was simply a tool he used to carry out various simulations and computations that would have been tedious to do by hand. That doesn't mean the system is useless, though. I'm personally extremely interested in interactive AI systems that get the 'best of both worlds', relying on a combination of human intuition and raw machine computational power to perform tasks that neither could have done separately.
comment by brazil84 · 2013-10-27T11:04:59.257Z · LW(p) · GW(p)
I don't know anything about Trillion Credit Squadron, but I would imagine that a game took a few hours, with much of that time spent rolling dice, looking up what happened, and recording the results. A computer program that simply automated all of that bookkeeping would likely make it a lot easier to play many games and discover the best strategies.
I recall that back in the 80s it became possible to play Risk on a computer. As a result you could play games much quicker and with less mental exertion. And it became a lot easier to observe that controlling Australia at the beginning of the game was a massive advantage.
I wouldn't be surprised if simple automation of tedium was a big part of Eurisko's success.
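As a sketch of what I mean: once the dice-rolling is automated, estimating how much a positional advantage is worth is just a matter of running many games. The "game" below is a trivial invented stand-in, not actual Risk or TCS:

```python
# Stand-in game: each side rolls dice; one side gets a continent-style bonus
# (e.g. holding Australia). Monte Carlo over many games estimates its value.
import random

def play_game(bonus_armies):
    """Return True if the side with the bonus wins one abstract game."""
    a = sum(random.randint(1, 6) for _ in range(10)) + bonus_armies
    b = sum(random.randint(1, 6) for _ in range(10))
    return a > b

N = 100_000
wins = sum(play_game(bonus_armies=2) for _ in range(N))
print(f"Win rate with bonus: {wins / N:.1%}")  # vs just under 50% without it
```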
↑ comment by V_V · 2013-10-27T17:30:00.632Z · LW(p) · GW(p)
Moreover, IIUC, most players approached the tournament with a role-playing mindset, while Lenat had a "Munchkin" competitive approach. In fact, he was eventually banned on the grounds that he violated the "spirit" of the game.
↑ comment by brazil84 · 2013-10-27T23:01:11.334Z · LW(p) · GW(p)
Good point.
Actually, you reminded me of the IBM Watson versus human Jeopardy game, which was arguably munchkinized by IBM. Although I am pretty confident that computers will eventually beat humans at games like Jeopardy, it was disappointing to see it happen in a way that was not true to the spirit of the game, but instead had the playing field tilted as much as possible in favor of the computer.
↑ comment by solipsist · 2013-10-29T16:27:41.073Z · LW(p) · GW(p)
it was disappointing to see it happen in a way which was not true to the spirit of the game.
How so?
↑ comment by brazil84 · 2013-10-30T01:14:09.273Z · LW(p) · GW(p)
How so?
Well, the biggest problem was that the computer rang in with an actuator, so in situations where more than one player had the correct answer before the question was done, the computer always had first crack. That's a huge advantage, and it's arguably munchkinism where the spirit of the game is to think quickly and accurately.
Of course, one might argue that ringing in quickly is part of the game and therefore part of the spirit of the game. My response is that other "parts of the game" were omitted for the benefit of the computer. For example, interpreting the questions by reading from the screen and listening to Alex is part of the game, but the computer had the questions fed to it in the form of a text file. The computer probably would not have been as effective if it had to get the questions by having a camera pan to the correct monitor, zoom in, and then do OCR to interpret them, or by doing voice recognition on Alex.
Another "part of the game" is that you normally have to play at the studio in Los Angeles. The computer (which was based in New York) would have had to deal with latency problems if the game had been played in the normal location.
Another "part of the game" is audio and visual clues -- as I recall there were no such clues.
On a slightly different topic, it was also a problem that there were two human players and one computer, so that on questions which favor humans over computers, the two humans would have tended to split the points. Quite possibly the computer would have lost if the two humans had agreed in advance that one of them would always wait an extra 2 or 3 seconds before ringing in.
The bottom line is that the computer win was not satisfying. It reminds me of the annoying girl in your advanced math class who was always asking "Will this be on the test?"
↑ comment by fubarobfusco · 2013-10-30T21:21:32.509Z · LW(p) · GW(p)
The computer probably would not have been as effective if it had to get the questions by having a camera pan to the correct monitor, zoom in, and then do an OCR to interpret the questions.
The questions are always printed in the same font. Compared to the amount of processing power needed to answer them, the amount needed to do the OCR would be minimal.
Another "part of the game" is that you normally have to play at the studio in Los Angeles. The computer (which was based in New York) would have had to deal with latency problems if the game had been played in the normal location.
That's about 60 msec latency each way — not much.
↑ comment by brazil84 · 2013-10-31T00:15:29.257Z · LW(p) · GW(p)
The questions are always printed in the same font. Compared to the amount of processing power needed to answer them, the amount needed to do the OCR would be minimal.
I don't think it's just a matter of OCR -- it seems that the computer would have to first focus in on the correct area of the game board where the clue is being displayed. I'm not sure how quickly and efficiently that could have been done.
That's about 60 msec latency each way — not much.
As far as I know, a fast human can react in about 100 milliseconds. So 60 milliseconds each way would arguably have put the computer in the same ballpark as humans.
comment by fubarobfusco · 2013-10-26T17:40:46.777Z · LW(p) · GW(p)
His name is Lenat, not Lenant.
(Also, the game is called Traveller, as you have in the title, not Traveler as you have in the text.)
↑ comment by ChrisHallquist · 2013-10-26T22:02:04.795Z · LW(p) · GW(p)
God dammit. Fixed.
comment by solipsist · 2013-10-26T17:04:55.019Z · LW(p) · GW(p)
As a novice computer programmer, I think it might be a good project to hone my programming skills.
How novice? I've seen many new programmers begin writing simulators and AIs for real-world games. They have fun and learn a lot, but generally realize they've bitten off more than they can chew and move on to other projects.
If you've never done anything AIish, small fun projects you can finish quickly include solving sliding puzzles with A* search and (more simply) solving sudoku puzzles.
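For instance, here's roughly what the sliding-puzzle project looks like: A* on the 8-puzzle with a Manhattan-distance heuristic (a sketch in Python; states are 9-tuples with 0 as the blank):

```python
# A* search on the 8-puzzle with an admissible Manhattan-distance heuristic.
import heapq

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def manhattan(state):
    """Sum of each tile's distance from its goal position."""
    dist = 0
    for i, tile in enumerate(state):
        if tile:
            goal = tile - 1
            dist += abs(i // 3 - goal // 3) + abs(i % 3 - goal % 3)
    return dist

def neighbors(state):
    """Yield states reachable by sliding one tile into the blank."""
    blank = state.index(0)
    r, c = divmod(blank, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            new = list(state)
            new[blank], new[j] = new[j], new[blank]
            yield tuple(new)

def solve(start):
    """A*: priority = moves so far + heuristic. Returns the number of moves."""
    frontier = [(manhattan(start), 0, start)]
    best_cost = {start: 0}
    while frontier:
        _, cost, state = heapq.heappop(frontier)
        if state == GOAL:
            return cost
        for nxt in neighbors(state):
            if cost + 1 < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = cost + 1
                heapq.heappush(frontier, (cost + 1 + manhattan(nxt), cost + 1, nxt))
    return None  # unsolvable (half of random 8-puzzle states are)

print(solve((1, 2, 3, 4, 5, 6, 0, 7, 8)))  # 2
```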
If you do embark on this game project, reinforcement learning will be your friend.
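If you do go that route, tabular Q-learning is the simplest starting point. A sketch of the core update (how you'd encode states and actions for the actual game is left open):

```python
# Tabular Q-learning skeleton; the state/action encodings for a real game
# are deliberately left abstract.
from collections import defaultdict
import random

Q = defaultdict(float)          # Q[(state, action)] -> estimated value
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def choose_action(state, actions):
    """Epsilon-greedy: usually exploit, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, next_actions):
    """Standard Q-learning backup toward reward plus discounted best next value."""
    best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```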
comment by [deleted] · 2013-10-26T17:08:19.811Z · LW(p) · GW(p)
This sounds like a great idea.
But, how will you know if you managed to replicate EURISKO's success? Are people still playing this? Or is there some archive of tournament entries available so you could see how your program's fleet compares against them?
↑ comment by ChrisHallquist · 2013-11-11T04:53:18.661Z · LW(p) · GW(p)
Ah, found Lenat's fleet:
LINK - Professor Lenat and EURISKO's Winning Fleet.htm
This is partly for my own reference. May as well cut and paste the whole thing in case the link goes bad:
Winning TCS fleet - TL 12
Four Garter class: TB-Garter TB-K1567F3-B41106-34009-1 MCr 17,584.104 Bearing C 1 EE 7 12,000 tons Batteries C 1 EE 7 crew=170 Agility=4; Fuel=840; Cargo=4.3 low=170 Note: L-Hyd drop tanks add 6,000 tons of fuel and mass, change the agility to 4, and cost MCr6.01. (TB-K1344F3) The ship is designed to maneuver when carrying up to 72,000 tons of drop tanks and one Wasp fighter.
Four Cisor class: BD-Cisor BD-K9525F3-E41100-340C5-0 MCr22,291.175 Bearing 1 11 1U 19,980 tons Batteries 1 11 1U crew=? Agility=0; Fuel=999; Cargo=19.1 low=95 Note: L-Hyd drop tanks add 9,990 tons of fuel and mass, and cost MCr10. (BD-L9313F3) The ship is designed to maneuver when carrying up to 29,970 tons of drop tanks.
Three Queller class: BH-Queller BH-K1526F3-B41106-34Q02-1 MCr27,802.392 Bearing Z 1 NN1 N 19,600 tons Batteries Z 1 NN1 N crew=263 Agility=0; Fuel=1,176; Cargo=10.72 low=232; marines=200 Note: L-Hyd drop tanks add 9,800 tons of fuel and mass, and cost MCr9.81. (BH-L1314F3) The ship is designed to maneuver when carrying up to 29,400 tons of drop tanks and two fighters (one Wasp and one Bee).
Seventy-five Eurisko class: BA-Eurisko BA-K952563-J41100-34003-0 MCr13,030.385 Bearing 1 11 V 11,100 tons Batteries 1 11 V crew=131 Agility=2; Fuel=555; Cargo=8 low=0; marines=35 Note: L-Hyd drop tanks add 5,550 tons of fuel and mass, change the agility to 1, and cost MCr5.56. (BA-K931363) The ship is designed to maneuver when carrying up to 16,650 tons of drop tanks.
Seven Wasp class: IL-Wasp IL-A90ZZF2-J00000-00009-0 MCr896.75 Bearing 1 1,000 tons Batteries 1 crew=19 Agility=6; Fuel=60; Cargo=0 low=0
Three Bee class: FF-Bee FF-0906661-A30000-00001-0 MCr127.945 Bearing 1 2 99 tons Batteries 1 2 crew=2 Agility=0; Fuel=5.94; Cargo=0
↑ comment by ChrisHallquist · 2013-10-26T22:14:56.051Z · LW(p) · GW(p)
A Google search turns up no references to recent iterations of the tournament, which makes me think it's not still going on. The first test, I think, is to see what kind of fleet the computer program comes up with, and whether it matches the descriptions I've found of the fleet EURISKO came up with. Finding an archive of the tournament entries would be helpful too. Really, the more information I can find on the history of the tournament, the better, but that may be moderately difficult, since this happened in '81-'82, before every damn thing that happened got archived on the internet.
comment by Shmi (shminux) · 2013-10-27T04:09:37.071Z · LW(p) · GW(p)
As a novice computer programmer
How novice? Have you written, say, Tetris successfully?
comment by summerstay · 2013-10-27T13:12:11.241Z · LW(p) · GW(p)
You can read a paper on EURISKO here. My impression is that the program quickly exhausted the insights Lenat put in as heuristics, and began journeying down eccentric paths that were not of interest to a human mathematician.