Human-Aligned AI Summer School: A Summary

post by Michaël Trazzi (mtrazzi) · 2018-08-11T08:11:00.789Z · LW · GW · 5 comments

Contents

  Value Learning (Daniel Filan)
    Inverse Reinforcement Learning
    Beyond Inverse Reinforcement Learning
  Agent Foundations (Abram Demski)
  Bounded Rationality (Daniel Filan / Daniel Braun)
    Information-Theoretic Bounded Rationality (Daniel Braun)
    Human irrationality in planning (Daniel Filan)
  Side effects (Victoria Krakovna)

(Disclaimer: this summary is incomplete and does not accurately represent all the content presented at the summer school, only what I remember and seem to have understood from the lectures. Don't hesitate to mention important ideas I missed or to point out any apparent confusion.)

Last week, I attended the first edition of the Human-aligned AI Summer School in Prague. After three days, my memories are already starting to fade, and I am unsure about what I will retain in the long term.

Here, I try to recall the content of about 15 hours of talks. This summary serves the following purposes:

Value Learning (Daniel Filan)

Value Learning aims at inferring human values from their behaviour. Paul Christiano distinguishes ambitious value learning from narrow value learning:

Inverse Reinforcement Learning

Inverse Reinforcement Learning (IRL) studies which reward function best explains an observed behaviour. Two methods of IRL were discussed (the state of the art builds on top of these two, for instance using neural networks):

Why not to do value learning:

Beyond Inverse Reinforcement Learning

The main problem with traditional IRL is that it does not take into account deliberate interactions between the human and the AI (e.g. the human could slow down their behaviour to help the AI learn).

Cooperative IRL addresses this issue by introducing a two-player game between the human and the AI, where both are rewarded according to the human's reward function. This incentivizes the human to teach the AI their preferences (if the human simply chose their best action every time, the AI would learn the wrong distribution). Using a similar dynamic, the off-switch game encourages the AI to allow itself to be switched off.
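
To illustrate the off-switch intuition with a toy calculation (a hedged sketch of my own, not the paper's exact model; the Gaussian belief below is made up): if the AI is uncertain about the human's utility u for its proposed action and defers to the human, who vetoes exactly when u < 0, its expected payoff E[max(u, 0)] is never worse than acting or switching off unilaterally, which is worth max(E[u], 0).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Gaussian belief of the AI over the human's utility u for its proposed action.
u_samples = rng.normal(loc=0.1, scale=1.0, size=100_000)

# Acting (or switching off) without consulting the human: best of E[u] and 0.
value_unilateral = max(u_samples.mean(), 0.0)

# Deferring to the human, who switches the AI off exactly when u < 0: E[max(u, 0)].
value_defer = np.maximum(u_samples, 0.0).mean()

print(f"acting unilaterally: {value_unilateral:.3f}")
print(f"deferring to human:  {value_defer:.3f}")  # never smaller than the value above
```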

Another difficulty when implementing IRL is that the reward function is hard to specify completely, and will often not capture all of what the designer wants. Inverse reward design makes the AI quantify its uncertainty about states. If the AI is risk-averse, it will avoid uncertain states, for instance situations where it believes the designers did not fully specify the reward function because they did not anticipate them.
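
A minimal sketch of the risk-averse planning idea behind inverse reward design (illustrative code of my own, not the paper's algorithm; the "lava" state and helper names are hypothetical): keep several candidate reward functions compatible with what the designer specified, and score a trajectory by its worst-case return over those candidates, so the agent avoids states the candidates disagree about.

```python
from typing import Callable, List, Sequence

State = str  # hypothetical state type, just for the sketch

def risk_averse_score(trajectory: Sequence[State],
                      candidate_rewards: List[Callable[[State], float]]) -> float:
    """Score a trajectory by its worst-case return over plausible reward functions."""
    returns = [sum(reward(s) for s in trajectory) for reward in candidate_rewards]
    return min(returns)

# Toy example: the candidates agree about "grass" but disagree about the unforeseen "lava" state.
candidates = [
    lambda s: {"grass": 1.0, "lava": 1.0}.get(s, 0.0),
    lambda s: {"grass": 1.0, "lava": -10.0}.get(s, 0.0),
]
print(risk_averse_score(["grass", "grass"], candidates))  # 2.0
print(risk_averse_score(["grass", "lava"], candidates))   # -9.0, so this path is avoided
```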

Agent Foundations (Abram Demski)

Abram's first talk was about his post "Probability is Real, and Value is Complex" [LW · GW]. At the end of the talk, several people (including me) were confused about the "magic correlation" between probabilities and expected utility, and asked Abram about the meaning of his talk.

From what I understood, the point was to show a counter-intuitive consequence of choosing the Jeffrey-Bolker axioms over the Savage axioms in decision theory. Because Bayesian updating can be formalized within the Jeffrey-Bolker framework, this counter-intuitive result challenges potential agent designs that rely on Bayesian updates.

The second talk was more general, and addressed several problems faced by embedded agents (e.g. naturalized induction).

Bounded Rationality (Daniel Filan / Daniel Braun)

To make sure an AI is able to understand humans, we need to make sure it understands their bounded rationality, i.e. how scarce information and bounded computational power limit rationality.

Information-Theoretic Bounded Rationality (Daniel Braun)

The first talk on the topic introduced a decision complexity C(A|B) that expresses the "cost" of going from a reference B to a target A (proportional to the Shannon information of A given B). Intuitively, it represents the cost of the search process involved in going from a prior B to a posterior A. After some mathematical manipulation, a notion of "information cost" is introduced, and the final framework highlights a trade-off between an "information utility" and this "information cost" (for more details see here, pp. 14-18).
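
Written out, the resulting trade-off is usually presented as a free-energy objective of the following form (this is the standard formulation from the information-theoretic bounded rationality literature; the notation here is mine):

```latex
% Bounded-rational choice as a trade-off: the agent moves from a prior p_0 over actions
% to a posterior p, gaining expected utility but paying for the information it uses.
\[
  p^{*} = \arg\max_{p} \;
  \underbrace{\mathbb{E}_{p}\!\left[U(a)\right]}_{\text{information utility}}
  \; - \; \frac{1}{\beta}\,
  \underbrace{D_{\mathrm{KL}}\!\left(p \,\Vert\, p_{0}\right)}_{\text{information cost}},
  \qquad
  p^{*}(a) \propto p_{0}(a)\, e^{\beta U(a)}.
\]
```

The inverse temperature β interpolates between a perfect maximizer (β → ∞) and an agent that simply keeps its prior behaviour (β → 0).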

Human irrationality in planning (Daniel Filan)

Humans seem to exhibit a strong preference for planning hierarchically, and are "irrational" in that sense, or at least not "Boltzmann-rational" (Cundy & Filan, 2018).
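
For reference, "Boltzmann-rational" refers to the standard noisy-optimal model of behaviour, roughly (my notation):

```latex
% Boltzmann-rational action model: actions are chosen with probability exponential in their
% optimal value, with beta controlling how close the agent is to a perfect optimizer.
\[
  \pi(a \mid s) = \frac{e^{\beta\, Q^{*}(s,a)}}{\sum_{a'} e^{\beta\, Q^{*}(s,a')}}
\]
```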

Hierarchical RL is a planning framework that introduces "options" (temporally extended actions) into Markov Decision Processes, in such a way that the Bellman equations still hold.
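
As a rough illustration (field and variable names are my own, not from the talk), an option is commonly defined as a triple of initiation set, internal policy, and termination condition, and planning then treats options like actions in a semi-Markov decision process:

```python
from dataclasses import dataclass
from typing import Callable, Dict, Set

State = str
Action = str

@dataclass
class Option:
    """A temporally extended action in the options framework (sketch, assumed field names)."""
    initiation_set: Set[State]                  # states where the option may be invoked
    policy: Dict[State, Action]                 # what the option does while it is running
    termination_prob: Callable[[State], float]  # probability of stopping in each state

    def available(self, state: State) -> bool:
        return state in self.initiation_set

# Toy option: "go to the door", usable anywhere in the room, terminates at the door.
go_to_door = Option(
    initiation_set={"room_a", "room_b"},
    policy={"room_a": "move_right", "room_b": "move_up"},
    termination_prob=lambda s: 1.0 if s == "door" else 0.0,
)
print(go_to_door.available("room_a"))  # True
```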

State-of-the-art methods in Hierarchical RL include meta-learning the hierarchy or using a two-module neural network.

Side effects (Victoria Krakovna)

Techniques for minimizing negative side effects include avoiding unnecessary disruptions when achieving a goal (e.g. turning the Earth into paperclips) and designing low-impact agents (which avoid large side effects in general).

To correctly measure impact, several questions must be answered:

A "side-effect measure" should penalize unnecessary actions (necessity), understand what was caused by the agent vs. caused by the environment (causation) and penalize irreversible actions (asymmetry).

Hence, an agent may be penalized for reaching an outcome different from an "inaction baseline" (what would have happened if the agent had done nothing), or for taking any irreversible action.

However, those penalties introduce bad incentives: an agent may prevent an irreversible event to gain a reward, then let it happen anyway (for instance, stopping a vase from being broken to collect a reward, then breaking the vase itself to get back to the "inaction baseline"). Relative reachability addresses this behaviour by penalizing the agent for making states less reachable than they would be by default (for instance, breaking a vase makes the states with an unbroken vase unreachable), and leads to safe behaviour in the Sokoban-like and conveyor belt gridworlds.
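
A rough numerical sketch of the penalty (my own reading of relative reachability, with made-up function and variable names): compare how reachable each state is from the agent's current state versus from the inaction-baseline state, and penalize only decreases in reachability.

```python
import numpy as np

def relative_reachability_penalty(reach_current: np.ndarray,
                                  reach_baseline: np.ndarray) -> float:
    """Average, over all states, of how much less reachable they are than under the baseline.

    reach_current[s] and reach_baseline[s] are reachability scores in [0, 1] for state s,
    measured from the agent's actual state and from the inaction-baseline state respectively.
    """
    return float(np.mean(np.maximum(reach_baseline - reach_current, 0.0)))

# Toy example: breaking the vase makes the "intact vase" state unreachable (score 0),
# while under the inaction baseline it stays reachable (score 1); other states are unaffected.
baseline_reachability = np.array([1.0, 1.0, 1.0])
after_breaking_vase = np.array([0.0, 1.0, 1.0])
print(relative_reachability_penalty(after_breaking_vase, baseline_reachability))  # ~0.333
```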

Open questions about this approach are:


I thank Daniel Filan and Jaime Molina for their feedback, and apologize for the talks I did not summarize.

5 comments

Comments sorted by top scores.

comment by Jan_Kulveit · 2018-08-09T20:27:24.155Z · LW(p) · GW(p)

Thanks for the summary of some of the talks!

Just to avoid some unnecessary confusion, I'd like to point out that the name of the event was Human-aligned AI Summer School.

A different event, AI Safety Camp, is also happening in Prague, in October.

While there is substantial overlap between the organizers and participants of both events, they have somewhat different goals and are geared toward slightly different target audiences. The summer school is pretty much in the format of an "academic summer school", where you have talks, coffee breaks, social events, and a similarly structured program, but usually not a substantial amount of time to do your own independent research. The camp is the complement: a lot of time to do independent research, not much structured program, no talks by senior researchers, no coffee breaks, and also no university backing.

Maybe at some point we will try some mixture, but for now there are large differences. It is important to understand them and to have different expectations for each event.

Replies from: mtrazzi
comment by Michaël Trazzi (mtrazzi) · 2018-08-09T21:09:29.066Z · LW(p) · GW(p)

I agree that the "Camp" in the title was confusing, so I changed it to "Summer School". Thank you!

Replies from: Error
comment by Error · 2018-08-10T01:05:45.549Z · LW(p) · GW(p)

Written as "Human-Aligned Summer School", I first read it as an educational experiment aimed at not making kids suffer. For some reason I find the misinterpretation hilarious.

Replies from: mtrazzi
comment by Michaël Trazzi (mtrazzi) · 2018-08-10T06:49:26.484Z · LW(p) · GW(p)

Added "AI" to prevent death from laughter.

comment by Rohin Shah (rohinmshah) · 2018-08-10T16:43:12.200Z · LW(p) · GW(p)

Typo: Cundy and Filan, 2018 (not 2008)