AIs should learn human preferences, not biases

post by Stuart_Armstrong · 2022-04-08T13:45:06.910Z

A new paper by Rebecca Gorman and me, building on her ideas: The dangers in algorithms learning humans' values and irrationalities.

In essence, at all levels of power and alignment, it is better for an AI to learn human preferences (labelled as preferences) than to learn human biases (labelled as biases).

For an artificial intelligence (AI) to be aligned with human values (or human preferences), it must first learn those values. AI systems that are trained on human behaviour risk miscategorising human irrationalities as human values -- and then optimising for those irrationalities. Simply learning human values still carries risks: an AI learning them will inevitably also gain information about human irrationalities and about human behaviour/policy. Both of these can be dangerous: knowing human policy allows an AI to become generically more powerful (whether it is partially aligned or not aligned at all), while learning human irrationalities allows it to exploit humans without needing to provide value in return. This paper analyses the danger in developing artificial intelligence that learns about human irrationalities and human policy, and constructs a model recommendation system with various levels of information about human biases, human policy, and human values. It concludes that, whatever the power and knowledge of the AI, it is more dangerous for it to know human irrationalities than human values. Thus it is better for the AI to learn human values directly, rather than learning human biases and then deducing values from behaviour.
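
A minimal toy sketch of the failure mode (this is not the recommendation-system model from the paper; the item names, scores, and weights are invented purely for illustration): a recommender that has only learned observed behaviour, which mixes values with a bias, ends up serving the bias, while a recommender that has learned the values themselves does not.

```python
# Toy illustration (hypothetical items and numbers): observed behaviour mixes
# true preferences with an irrational bias, so a recommender optimising
# predicted behaviour can exploit the bias, while one optimising the learned
# values does not.

# item -> (true value to the user, appeal to an irrational bias, e.g. clickbait)
items = {
    "in-depth article":  (0.9, 0.1),
    "useful tutorial":   (0.7, 0.2),
    "outrage clickbait": (0.1, 0.9),
}

def predicted_engagement(value: float, bias_appeal: float) -> float:
    """Observed behaviour: genuine preference plus an irrational pull.
    The 0.4/0.6 weights are arbitrary, chosen only for illustration."""
    return 0.4 * value + 0.6 * bias_appeal

# Recommender that has learned behaviour/policy only: maximising predicted
# engagement rewards exploiting the bias.
policy_pick = max(items, key=lambda i: predicted_engagement(*items[i]))

# Recommender that has learned values directly: maximises true value.
value_pick = max(items, key=lambda i: items[i][0])

print("behaviour-optimising recommender picks:", policy_pick)  # outrage clickbait
print("value-optimising recommender picks:", value_pick)       # in-depth article
```

The two recommenders see exactly the same data about the items; the difference is purely in the objective, i.e. whether the bias is something to optimise for or something to factor out.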
