In AI Risk what is the base model of the AI?
post by jmh · 2023-05-01T03:25:11.821Z · LW · GW
This is a question post.
That isn't the most precise way to put it, but it's close enough to keep the title simple.
When I read discussions and comments about AI risk, I find myself thinking that two (unstated?) base models may be in play. When people talk about what the "AI wants," or apply utility functions to it, I suspect they are implicitly using humans as a primitive model from which the AI's behavior is derived.
Similarly, when I hear talk about extinction potential, the implicit model seems to be drawn from biology: biological evolution and competition within environmental niches.
Is this something anyone even talks about? If so, what is the view -- are there specific papers or comments I could look at? If not, does this sound like a reasonable inference about the implicit assumptions/maps in this area?
Answers

Answer by Charlie Steiner
There's not going to be one right answer.
- The outcome pump [LW · GW]. This cashes out "wants" in a non-anthropomorphic way; John Wentworth has some good work applying it in less obvious ways. (See the toy sketch after this list for one way the contrast with model-free RL plays out in code.)
- Model-based RL. Potentially brain-inspired. This is what I try to think about most of the time.
- Model-free RL. I think a lot of inner alignment arguments, and also some "shard theory" type arguments, use a background model-free RL picture.
- Predictive models. Large language models are important, and people often interpret them as a prototype for future AI.
- Anthropomorphism. Usually not valid, but sometimes used anyway.
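To make the contrast concrete, here is a minimal toy sketch of how "wants" cash out differently under the first and third framings. This is an illustration under my own assumptions, not anything from the linked posts, and the function names (`outcome_pump_choice`, `model_free_q_values`) are made up:

```python
import random

def environment(action):
    """Toy stochastic environment: arm 1 pays off more often than arm 0."""
    payoff_prob = (0.3, 0.7)[action]
    return 1.0 if random.random() < payoff_prob else 0.0

def outcome_pump_choice(sample_outcomes, rate_outcome, actions):
    """Outcome-pump-style 'wanting' (hypothetical name): pick the action
    whose predicted outcomes rate best. No reward signal and no learned
    policy -- just optimization over an outcome model."""
    def expected_rating(a):
        samples = sample_outcomes(a)
        return sum(rate_outcome(o) for o in samples) / len(samples)
    return max(actions, key=expected_rating)

def model_free_q_values(steps=5000, lr=0.1, eps=0.1):
    """Model-free RL 'wanting' (hypothetical name): no outcome model at all.
    Action values are updated directly from experienced reward
    (one-step tabular Q-learning with epsilon-greedy exploration)."""
    q = [0.0, 0.0]
    for _ in range(steps):
        if random.random() < eps:
            a = random.randrange(2)                 # explore
        else:
            a = max(range(2), key=lambda i: q[i])   # exploit
        r = environment(a)
        q[a] += lr * (r - q[a])                     # incremental value update
    return q

if __name__ == "__main__":
    # The outcome pump is handed a crude, sampled model of the environment
    # plus a rating over outcomes; it never "learns" anything.
    sample = lambda a: [environment(a) for _ in range(200)]
    rate = lambda outcome: outcome  # "good" just means payoff here
    print("outcome-pump choice:", outcome_pump_choice(sample, rate, [0, 1]))
    print("model-free Q-values:", model_free_q_values())
```

The point of the contrast: the outcome pump only needs a predictive model and a rating over outcomes, while the model-free learner's "wanting" lives entirely in action values updated from reward. Neither requires anything human-shaped, which is one way of answering the question above.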