New intro textbook on AIXI

post by Alex_Altair · 2024-05-11T18:18:50.945Z

Marcus Hutter and his PhD students David Quarel and Elliot Catt have just published a new textbook called An Introduction to Universal Artificial Intelligence.

"Universal AI" refers to the body of theory surrounding Hutter's AIXI [? · GW], which is a model of ideal agency combining Solomonoff induction and reinforcement learning. Hutter has previously published a book-length exposition of AIXI in 2005, called just Universal Artificial Intelligence, and first introduced AIXI in a 2000 paper. I think UAI is well-written and organized, but it's certainly very dense. An introductory textbook is a welcome addition to the canon.

I doubt IUAI will contain any novel results, though from the table of contents it looks like it incorporates some of the further research done since the 2005 book. As is common, the textbook is partly based on Hutter's experience teaching the material over many years, and it is aimed at advanced undergraduates.

I'm excited for this! Like any rationalist, I have plenty of opinions about problems with AIXI (it's not embedded, RL is the wrong frame for agents, etc.), but as an agent foundations researcher, I think progress on foundational theory is critical for AI safety.

Basic info

Table of contents:

Part I: Introduction

1. Introduction

2. Background

Part II: Algorithmic Prediction

3. Bayesian Sequence Prediction

4. The Context Tree Weighting Algorithm

5. Variations on CTW

Part III: A Family of Universal Agents

6. Agency

7. Universal Artificial Intelligence

8. Optimality of Universal Agents

9. Other Universal Agents

10. Multi-agent Setting

Part IV: Approximating Universal Agents

11. AIXI-MDP

12. Monte-Carlo AIXI with Context Tree Weighting

13. Computational Aspects

Part V: Alternative Approaches

14. Feature Reinforcement Learning

Part VI: Safety and Discussion

15. AGI Safety

16. Philosophy of AI

5 comments

comment by Alexander Gietelink Oldenziel (alexander-gietelink-oldenziel) · 2024-05-12T11:48:16.016Z

I have heard of AIXI but haven't looked deeply into it. I'm curious about it. What are some results you think are cool in this field?

Replies from: David Quarel, Alex_Altair
comment by David Quarel (david-quarel) · 2024-05-12T17:14:37.832Z

AIXI isn't a practically realisable model due to its incomputability, but there are nice optimality results, and it gives you an ideal model of intelligence that you can approximate (https://arxiv.org/abs/0909.0801). It uses a universal Bayesian mixture over environments, built from the Solomonoff prior (in some sense the best choice of prior), to learn (in a way you can make formal) as fast as any agent possibly could. There's some recent work on building practical approximations using deep learning instead of the CTW mixture (https://arxiv.org/html/2401.14953v1).
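To make the mixture idea concrete, here's a minimal sketch in Python (everything here is illustrative: a tiny finite class of Bernoulli environments stands in for the class of all computable environments, and uniform weights stand in for the Solomonoff prior 2^{-K(ν)}):

```python
import random

class BayesMixture:
    """Bayes mixture over a finite class of Bernoulli environments."""

    def __init__(self, thetas):
        self.thetas = thetas  # each hypothesis: P(next bit = 1) = theta
        self.weights = [1.0 / len(thetas)] * len(thetas)  # prior weights w_v

    def predict(self, x):
        # Mixture probability of the next bit: xi(x) = sum_v w_v * v(x)
        return sum(w * (t if x == 1 else 1.0 - t)
                   for w, t in zip(self.weights, self.thetas))

    def update(self, x):
        # Bayes' rule after observing x: w_v <- w_v * v(x) / xi(x)
        z = self.predict(x)
        self.weights = [w * (t if x == 1 else 1.0 - t) / z
                        for w, t in zip(self.weights, self.thetas)]

random.seed(0)
mixture = BayesMixture([0.1, 0.5, 0.9])
for _ in range(200):
    bit = 1 if random.random() < 0.9 else 0  # true environment: theta = 0.9
    mixture.update(bit)
print(mixture.predict(1))  # ~0.9: posterior has concentrated on the truth
```

The key property the sketch inherits from the real thing is dominance: the mixture assigns each environment at least its prior weight, so its cumulative prediction loss exceeds the true environment's by at most a constant depending on that weight.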

(Sorry for the lazy formatting, I'm on a phone right now. Maybe now is the time to get around to making a website for people to link.)

comment by Alex_Altair · 2024-05-14T15:40:18.509Z

I think the biggest thing I like about it is that it exists! Someone tried to make a fully formalized agent model, and it worked. As mentioned above, it's got some big problems, but it helps enormously to have some ground to stand on when trying to build further.

comment by Cole Wyeth (Amyr) · 2024-06-10T15:38:43.441Z

I recommend the new book as a first introduction to AIXI. It is much more readable than the previous one, with fewer dry convergence results and more recent content, though some definitions are slightly less detailed. One important difference is that the new textbook was written after Leike's "Bad Universal Priors and Notions of Optimality," which means it is less optimistic about convergence guarantees. Work from Leike's thesis and papers has been integrated in many places, in particular in the discussion of AIXI's computability level and the grain of truth problem. I recently submitted a game theory paper with Professor Hutter to SAGT 2024 that improves the exposition of the grain of truth problem, so watch for that if you find the section interesting. There is some work on embeddedness from Laurent Orseau; I have a different take on this than he does, but it is definitely worth a read if you are interested in AI safety and agent foundations. There is a little original mathematics, but mostly to tie things together.

comment by harfe · 2024-05-12T10:57:07.340Z

I haven't watched it yet, but there is also a recent technical discussion/podcast episode about AIXI and related topics with Marcus Hutter: https://www.youtube.com/watch?v=7TgOwMW_rnk