Posts

Comments

Comment by guest4 on Expected Creative Surprises · 2008-10-26T15:11:29.000Z · LW · GW

pdf23ds, most chess engines store analyzed subtrees in a transposition table. This is not the same as having a persistent strategy, but it does mean that much evaluation work can be saved by keeping persistent state between moves.
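
To make this concrete, here is a minimal sketch of what a transposition-table entry might look like; the names (`TTEntry`, `probe`, `store`) and the replace-if-deeper rule are my own illustrative choices, not any particular engine's scheme.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class TTEntry:
    depth: int      # depth to which this position was searched
    score: float    # evaluation obtained at that depth
    best_move: str  # move that produced the score (useful for move ordering)

class TranspositionTable:
    def __init__(self) -> None:
        self._table: Dict[int, TTEntry] = {}

    def probe(self, position_hash: int, depth: int) -> Optional[TTEntry]:
        """Return a stored entry if it was searched at least as deeply as requested."""
        entry = self._table.get(position_hash)
        if entry is not None and entry.depth >= depth:
            return entry
        return None

    def store(self, position_hash: int, depth: int, score: float, best_move: str) -> None:
        """Keep the deepest search result seen so far for this position."""
        entry = self._table.get(position_hash)
        if entry is None or depth > entry.depth:
            self._table[position_hash] = TTEntry(depth, score, best_move)

tt = TranspositionTable()
tt.store(position_hash=0x9D3A, depth=6, score=0.31, best_move="Nf3")
print(tt.probe(0x9D3A, depth=4))   # hit: the stored search was deeper than requested
```

Because the table is keyed by position rather than by move history, entries filled in while thinking about one move remain useful on later moves, which is the persistent state mentioned above.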

This is similar to how iterative deepening works: in theory you could obtain the same result by performing a single deep depth-first search directly, but the results of the shallower iterations provide a hint at the "best" moves, which makes the deeper search more efficient.
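
As a rough, self-contained illustration (my own sketch, not taken from any engine), the loop below runs alpha-beta to increasing depths on a toy game tree and feeds the best root move from each depth back in as the first move tried at the next depth. In a real engine the hints from shallower searches improve move ordering throughout the tree and so produce earlier alpha-beta cutoffs.

```python
import random

BRANCHING = 3

def evaluate(position):
    # Toy static evaluation: a deterministic pseudo-random value per position.
    return random.Random(hash(position)).uniform(-1.0, 1.0)

def alphabeta(position, depth, alpha, beta, maximizing):
    if depth == 0:
        return evaluate(position)
    best = float("-inf") if maximizing else float("inf")
    for move in range(BRANCHING):
        score = alphabeta(position + (move,), depth - 1, alpha, beta, not maximizing)
        if maximizing:
            best, alpha = max(best, score), max(alpha, score)
        else:
            best, beta = min(best, score), min(beta, score)
        if alpha >= beta:          # cutoff: the opponent won't allow this line
            break
    return best

def iterative_deepening(max_depth):
    best_move = None
    for depth in range(1, max_depth + 1):
        root_moves = list(range(BRANCHING))
        if best_move is not None:
            # The "hint": try the previous iteration's best move first.
            root_moves.remove(best_move)
            root_moves.insert(0, best_move)
        best_score = float("-inf")
        for move in root_moves:
            score = alphabeta((move,), depth - 1, best_score, float("inf"), False)
            if score > best_score:
                best_score, best_move = score, move
        print(f"depth {depth}: best move {best_move}, score {best_score:+.3f}")
    return best_move

iterative_deepening(max_depth=5)
```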

Comment by guest4 on Belief in Intelligence · 2008-10-25T17:14:57.000Z · LW · GW

"Is gravity really such a genius?"

Gravity may not be a genius, but the situation is still an optimization problem, since the ball "wants" to minimize its potential energy. Of course, there are limits to such reasoning: the ball may get stuck in a local minimum and never reach the lowest-energy state.
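
As a toy illustration of that caveat (my own example, assuming a made-up double-well potential), a "ball" that simply follows the gradient downhill can settle into the shallower of two wells:

```python
def U(x):
    return 0.1 * x**4 - x**2 + 0.5 * x       # double-well potential (illustrative)

def dU(x):
    return 0.4 * x**3 - 2.0 * x + 0.5        # its derivative

x = 1.0                                       # start in the shallow well's basin
for _ in range(10_000):
    x -= 0.01 * dU(x)                         # overdamped "rolling downhill"

print(x)   # ends up near x ≈ 2.1, a local minimum; the global minimum is near x ≈ -2.35
```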

Comment by guest4 on Expected Creative Surprises · 2008-10-25T08:12:44.000Z · LW · GW

Nick Hay - IIRC the minus-log probability of an outcome is usually called "surprisal" or "self-information". The Shannon entropy of a random variable is just the expected value of its surprisal.
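
For a quick numerical check of these definitions (my own example, using a made-up biased coin):

```python
import math

def surprisal(p: float) -> float:
    return -math.log2(p)                 # surprisal (self-information) in bits

dist = {"heads": 0.9, "tails": 0.1}      # a biased coin, just as an example

# Shannon entropy = probability-weighted average of the surprisals.
entropy = sum(p * surprisal(p) for p in dist.values())
print(entropy)                            # ≈ 0.469 bits
```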

Comment by guest4 on Expected Creative Surprises · 2008-10-25T08:02:49.000Z · LW · GW

If the possibility of "creative surprises" depends on ignorance about the logical consequences of a game move, it seems that this would be best modeled as an asymmetric information problem.

It's interesting to note that the usual "Dutch-book" arguments for subjective probability break down under asymmetric information: the only way to avoid losing money to a possibly better-informed opponent is to refuse to enter some bets, i.e. to adopt an upper-lower probability interval.
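
As a toy sketch of what that looks like (my own illustration, not a standard construction): an agent with a probability interval rather than a point probability buys a $1-payoff bet only when the price is at or below its lower probability, sells only at or above its upper probability, and otherwise stays out of the market.

```python
def decide(price: float, p_lower: float, p_upper: float) -> str:
    """Bet on an event using an interval probability [p_lower, p_upper]."""
    if price <= p_lower:
        return "buy"      # favorable even under the most pessimistic probability
    if price >= p_upper:
        return "sell"     # favorable to the other side under the most optimistic one
    return "abstain"      # inside the spread: decline to trade

# With a point probability (p_lower == p_upper) the agent quotes both sides at one
# price, which is exactly what a Dutch book against a better-informed opponent exploits.
print(decide(0.40, p_lower=0.45, p_upper=0.60))  # buy
print(decide(0.50, p_lower=0.45, p_upper=0.60))  # abstain
print(decide(0.65, p_lower=0.45, p_upper=0.60))  # sell
```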

Of course, such upper-lower spreads show up routinely as bid-ask spreads in illiquid financial markets; I wonder whether any connections have been made between rational pricing conditions in such markets and imprecise probability theories such as Dempster-Shafer.