[Linkpost] Concept Alignment as a Prerequisite for Value Alignment
post by Bogdan Ionut Cirstea (bogdan-ionut-cirstea) · 2023-11-04T17:34:36.563Z

This is a link post for https://arxiv.org/abs/2310.20059
Value alignment is essential for building AI systems that can safely and reliably interact with people. However, what a person values -- and is even capable of valuing -- depends on the concepts that they are currently using to understand and evaluate what happens in the world. The dependence of values on concepts means that concept alignment is a prerequisite for value alignment -- agents need to align their representation of a situation with that of humans in order to successfully align their values. Here, we formally analyze the concept alignment problem in the inverse reinforcement learning setting, show how neglecting concept alignment can lead to systematic value misalignment, and describe an approach that helps minimize such failure modes by jointly reasoning about a person's concepts and values. Additionally, we report experimental results with human participants showing that humans reason about the concepts used by an agent when acting intentionally, in line with our joint reasoning model.
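One plausible way to make the "joint reasoning" idea concrete (my notation; the paper's formal setup may differ): where standard Bayesian IRL infers a reward $r$ from demonstrations $D$, the joint model additionally infers the construal $c$, i.e., the simplified task representation under which the demonstrator acted:

```latex
% Standard IRL: infer the reward alone
P(r \mid D) \propto P(D \mid r)\, P(r)

% Joint model: infer the construal c together with the reward, so the
% demonstrations are explained as rational under the *construed* task
% rather than the full task
P(c, r \mid D) \propto P(D \mid c, r)\, P(c)\, P(r)
```

Marginalizing over $c$ changes the reward posterior: behavior that looks like a strange preference under the full task can instead be explained as rational under a coarser construal.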
We propose a theoretical framework that formally introduces concepts into inverse reinforcement learning and show that conceptual misalignment (i.e., failing to consider construals) can lead to severe value misalignment (i.e., reward misspecification and a large performance gap). We validate these theoretical results with a case study in a simple gridworld environment, where we find that IRL agents that jointly model construals and rewards outperform those that model rewards alone. Finally, we conduct a study with human participants and find that people do model construals, and that their inferences about rewards match the joint construal-and-reward model much more closely than the reward-only model. Our theoretical and empirical results suggest that the current paradigm of directly inferring human reward functions or preferences from demonstrations is insufficient for value-aligning real AI systems that need to interact with real people; it is crucial to also model, and align on, the concepts people use to reason about the task in order to understand their true values and intentions.
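As a toy illustration of why construal neglect matters, here is a minimal sketch (my own construction, not the paper's code or its gridworld) contrasting reward-only IRL with joint inference over construals and rewards. It assumes a one-step decision with binary features, a Boltzmann-rational demonstrator, and a prior favoring the coarser construal, standing in for the paper's idea that richer construals carry cognitive cost:

```python
# Toy contrast between reward-only IRL and joint (construal, reward) inference.
# All names, the feature set, and the priors are illustrative assumptions.
import itertools
import numpy as np

# Four actions, each described by two binary features:
# feature 0 = "reaches the goal", feature 1 = "crosses a mud patch".
PHI = np.array([
    [1, 1],  # goal, through mud
    [1, 0],  # goal, around mud
    [0, 1],  # mud only
    [0, 0],  # neither
], dtype=float)

BETA = 3.0                                                     # Boltzmann rationality
REWARDS = list(itertools.product([-1.0, 0.0, 1.0], repeat=2))  # grid of weight vectors
CONSTRUALS = {(0,): 0.8, (0, 1): 0.2}                          # construal -> prior (coarse is cheaper)

def action_probs(construal, weights):
    """Softmax policy under the *construed* reward: unattended features are dropped."""
    w = np.array(weights)
    mask = np.zeros(2)
    mask[list(construal)] = 1.0
    utilities = PHI @ (w * mask)
    p = np.exp(BETA * utilities)
    return p / p.sum()

def posterior(demos, construals):
    """Normalized posterior over (construal, reward) pairs given demonstrated actions."""
    post = {}
    for c, prior_c in construals.items():
        for w in REWARDS:
            lik = np.prod(action_probs(c, w)[demos])
            post[(c, w)] = prior_c * lik          # uniform prior over reward weights
    z = sum(post.values())
    return {k: v / z for k, v in post.items()}

demos = [0, 0, 0]  # demonstrator keeps taking the muddy goal path

# Reward-only IRL: force the full construal and infer weights alone.
reward_only = posterior(demos, {(0, 1): 1.0})
map_w = max(REWARDS, key=lambda w: reward_only[((0, 1), w)])
print("reward-only MAP weights:", map_w)          # -> (1.0, 1.0): "the person values mud"

# Joint model: also infer which features the demonstrator represented.
joint = posterior(demos, CONSTRUALS)
p_coarse = sum(p for (c, w), p in joint.items() if c == (0,))
p_likes_mud = sum(p for (c, w), p in joint.items() if w[1] == 1.0)
print(f"P(coarse construal) = {p_coarse:.2f}")    # ~0.60
print(f"P(mud weight = +1)  = {p_likes_mud:.2f}") # ~0.55, vs ~0.87 for reward-only
```

In this toy setup, reward-only inference confidently concludes the demonstrator values mud, while the joint model explains most of the same behavior as rational under a coarser construal that simply omits the mud feature, and is accordingly far less confident about the mud preference; this is the construal-neglect failure mode the summary describes.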