Preferences and biases, the information argument

post by Stuart_Armstrong · 2021-03-23T12:44:46.965Z

I've recently thought of a possibly simpler way of expressing the argument from the Occam's razor paper. Namely: human behaviour (the human policy) is generated by human preferences combined with human biases and bounded rationality, but many different (preferences, biases) pairs are compatible with exactly the same policy.

Thus, in order to deduce human biases and preferences, we need more information than the human policy carries.
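
As a minimal sketch of that underdetermination (the states, actions, and rewards below are made up for illustration, not taken from the paper), the following Python snippet builds two different (reward, planner) pairs that produce exactly the same policy, so the policy alone cannot distinguish between them:

```python
# A toy world: in each state the agent picks one of two actions.
states = ["s1", "s2"]
actions = ["a", "b"]

# The observed human policy: what the human actually does in each state.
observed_policy = {"s1": "a", "s2": "b"}

# Decomposition 1: the human prefers "b" everywhere, but a bias makes
# them pick "a" in state s1.
reward_1 = {("s1", "a"): 0, ("s1", "b"): 1, ("s2", "a"): 0, ("s2", "b"): 1}

def planner_1(reward):
    policy = {s: max(actions, key=lambda a: reward[(s, a)]) for s in states}
    policy["s1"] = "a"  # the bias overrides the preferred action in s1
    return policy

# Decomposition 2: the human prefers "a" everywhere, but a different
# bias makes them pick "b" in state s2.
reward_2 = {("s1", "a"): 1, ("s1", "b"): 0, ("s2", "a"): 1, ("s2", "b"): 0}

def planner_2(reward):
    policy = {s: max(actions, key=lambda a: reward[(s, a)]) for s in states}
    policy["s2"] = "b"  # this bias overrides the preferred action in s2
    return policy

# Both (reward, planner) pairs reproduce the observed behaviour exactly,
# so the behaviour alone cannot tell us which preferences are the real ones.
assert planner_1(reward_1) == observed_policy
assert planner_2(reward_2) == observed_policy
```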

This extra information is contained in the "normative assumptions": the assumptions we need to add so that an AI can learn human preferences from human behaviour.

We'd ideally want to do this with as few extra assumptions as possible. If the AI is well-grounded and understands what human concepts mean, we might be able to get away with a simple reference: "look through this collection of psychology research and take it as roughly true" could be enough to point the AI at all the assumptions it needs.

5 comments

comment by Charlie Steiner · 2021-03-24T04:56:10.396Z

But is that true? Human behavior has a lot of information. We normally say that this extra information is irrelevant to the human's beliefs and preferences (i.e. the agential model of humans is a simplification), but it's still there.

comment by Stuart_Armstrong · 2021-03-24T07:30:52.389Z

See the linked paper for more details (https://arxiv.org/abs/1712.05812).

Basically, "humans are always fully rational and always take the action they want to" is a full explanation of all of human behaviour, and it is strictly simpler than any explanation that includes human biases and bounded rationality.
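
As a minimal sketch of that construction (the example policy below is made up for illustration): take the reward to be "did what the human actually did" and the planner to be a plain argmax. This reproduces any observed policy exactly, with no bias model at all, so the explanation contains essentially no information beyond the policy itself.

```python
# A "fully rational" explanation of any observed policy: reward the agent
# for doing exactly what it actually did, and let the planner maximise
# that reward.  No bias model is needed.

def fully_rational_explanation(observed_policy, actions):
    # Reward 1 for taking the observed action, 0 for anything else.
    reward = {(s, a): 1 if observed_policy[s] == a else 0
              for s in observed_policy for a in actions}

    # A perfectly rational planner: maximise the given reward in each state.
    def planner(r):
        return {s: max(actions, key=lambda a: r[(s, a)])
                for s in observed_policy}

    return reward, planner

# Works for any observed policy (the example here is made up).
policy = {"monday": "snooze", "tuesday": "gym"}
reward, planner = fully_rational_explanation(policy, ["snooze", "gym"])
assert planner(reward) == policy
```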

comment by shminux · 2021-03-23T19:13:35.940Z

"look through this collection of psychology research and take it as roughly true"

Well, you are an intelligence that "is well-grounded and understands what human concepts mean". Do you think that the above approach would lead you to distill the right assumptions?

comment by Stuart_Armstrong · 2021-03-24T07:31:41.903Z

No. But I expect that it would be much more in the right ballpark than other approaches, and I think it might be refined to be correct.

comment by sxae · 2021-03-23T15:31:02.679Z

I suppose the question is whether we can predict the "hidden inner mind" through some purely statistical model, as opposed to requiring the AI to have some deeper understanding of human psychology. I'm not sure that a typical psychologist would claim to be able to predict behaviour through their training, whereas we have seen cases where even simple statistical predictive systems can know more about you than you know about yourself [1].

There's also the idea that social intelligence is the ability to simulate other people, so perhaps that is something that an AI would need to do in order to understand other consciousnesses: running shallow simulations of those other minds.