Modifying Universal Intelligence Measure

post by Alex_Altair · 2012-09-18T23:44:08.864Z · LW · GW · Legacy · 6 comments

In 2007, Legg and Hutter wrote a paper using the AIXI model to define a measure of intelligence. It's pretty great, but I can think of some directions for improvement.
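For reference, the measure Legg and Hutter define is (roughly, suppressing some technical conditions):

$$\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}$$

where $E$ is the class of computable reward-bounded environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V^{\pi}_{\mu}$ is the expected total reward agent $\pi$ accumulates in $\mu$.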

Any thoughts or research on this by others? I imagine lots of discussion has occurred on these topics; any references would be appreciated.

6 comments

Comments sorted by top scores.

comment by gwern · 2012-09-19T01:17:22.561Z · LW(p) · GW(p)

but if you are trying to determine how powerful an agent will be in this universe, you probably want to replace the Solomonoff prior with the posterior resulting from updating the Solomonoff prior with data from our universe.

Hence the Schmidhuber (IIRC) paper on speeding up optimal problem solvers with an initial package of encoded knowledge about the domain?

Replies from: Alex_Altair
comment by Alex_Altair · 2012-09-19T01:49:04.632Z · LW(p) · GW(p)

Awesome, do you know the title?

Replies from: Kawoomba, gwern
comment by Kawoomba · 2012-09-19T06:48:39.994Z · LW(p) · GW(p)

The Optimal Ordered Problem Solver (OOPS) saves earlier solutions as a reference for later problems (see here); for the efficacy (or lack thereof) and caveats of that approach, see here.

In general we would like our learner to continually profit from useful information conveyed by solutions to earlier tasks. (...) Storage for the first found program computing a solution to the current task becomes nonwriteable.

For a related paper on how to decide what information to save from previous tasks ("inductive transfer of knowledge from one task solution to the next"), see here.
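To make the reuse idea concrete, here is a minimal toy sketch (my own illustration, not Schmidhuber's actual bias-optimal search; the task predicates and candidate "programs" are invented for the example): the first program found for a task is frozen and offered to later searches.

```python
def solve_with_reuse(task, frozen_solutions, fresh_candidates):
    """Try frozen solutions from earlier tasks first, then fresh candidates.

    task: predicate task(program) -> bool saying whether `program` solves it.
    frozen_solutions: programs found for earlier tasks (existing entries are
        never modified).
    fresh_candidates: new candidate programs to try, in order.
    """
    # Profit from earlier tasks: reuse a frozen solution if it already works.
    for program in frozen_solutions:
        if task(program):
            return program
    # Otherwise search the fresh candidates; the first success is "frozen"
    # (its storage becomes non-writeable) and is available for later tasks.
    for program in fresh_candidates:
        if task(program):
            frozen_solutions.append(program)
            return program
    return None


# Toy usage: each task asks for a function with a given value at x = 3.
frozen = []
tasks = [lambda f: f(3) == 9, lambda f: f(3) == 9, lambda f: f(3) == 6]
candidates = [lambda x: x + x, lambda x: x * x, lambda x: x ** 3]
for t in tasks:
    solution = solve_with_reuse(t, frozen, candidates)
    print(solution(3) if solution else None)  # 9, 9 (reused), 6
```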

(Side note: awesome quote I stumbled upon in the first paper:

And since constants beyond 2^500 do not even make sense within this universe, (Levin Search) may be viewed as academic exercises demonstrating that the O() notation can sometimes be practically irrelevant despite its wide use in theoretical computer science.)

comment by gwern · 2012-09-19T03:11:57.377Z · LW(p) · GW(p)

Not offhand, sorry.

comment by JoshuaFox · 2012-09-19T06:56:17.231Z · LW(p) · GW(p)

Concerning the last item, see this article by Ben Goertzel:

"Pragmatic general intelligence measures the capability of an agent to achieve goals in environments, relative to prior distributions over goal and environment space. Efficient pragmatic general intelligence measures this same capability, but normalized by the amount of computational resources utilized in the course of the goal-achievement."

comment by V_V · 2012-09-19T10:30:15.805Z · LW(p) · GW(p)

Can we formalize AIXI and the intelligence measure in terms of utility functions, instead?

No. Preference utility functions are defined on world states, which are, in the general case, not completely accessible to the agent.

AIXI discounts the future by requiring that total future reward is bounded, and therefore so does the intelligence measure. This seems to me like a constraint that does not reflect reality

If cumulative reward is not bounded, you might end up comparing infinities. Anyway, I suppose they chose to have a fixed horizon rather than, say, exponential discounting, so that at any prediction step any individual predictor program needs to run only for finite time. Otherwise AIXI would be, in some sense, doubly uncomputable.
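For concreteness, the two options being contrasted look roughly like this (notation mine): a fixed horizon $m$ versus geometric discounting over an infinite future,

$$V^{\pi}_{\mu} = \mathbb{E}\!\left[\sum_{k=1}^{m} r_k\right] \qquad \text{vs.} \qquad V^{\pi}_{\mu,\gamma} = \mathbb{E}\!\left[\sum_{k=1}^{\infty} \gamma^{k} r_k\right], \quad 0 < \gamma < 1,$$

while the intelligence measure itself requires $\sum_k r_k \le 1$, so that $V^{\pi}_{\mu} \le 1$ in every environment.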

Perhaps by using an average of utility functions weighted by their K-complexity

That yields worst-case complexity.
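For concreteness, one way to read the quoted suggestion (a sketch; the mixture form is my reading, not spelled out in the post) is a complexity-weighted mixture over computable utility functions $U$ on histories $h$:

$$U_{\text{mix}}(h) \;=\; \sum_{U} 2^{-K(U)}\, U(h)$$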

This might be okay if your agent is playing arbitrary games on a computer, but if you are trying to determine how powerful an agent will be in this universe, you probably want to replace the Solomonoff prior with the posterior resulting from updating the Solomonoff prior with data from our universe.

That's what the agent is supposed to do by itself.
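One way to formalize the quoted replacement (a sketch, notation mine): condition the environment weights on data $d$ observed from our universe, so that each environment is weighted by its posterior rather than its prior probability,

$$w(\mu \mid d) \;=\; \frac{2^{-K(\mu)}\, \mu(d)}{\sum_{\nu} 2^{-K(\nu)}\, \nu(d)}, \qquad \Upsilon_{d}(\pi) \;=\; \sum_{\mu} w(\mu \mid d)\, V^{\pi}_{\mu},$$

where $\mu(d)$ is the probability environment $\mu$ assigns to the observed data.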