Which of these five AI alignment research project ideas are no good?
post by rmoehn · 2019-08-08T07:17:28.959Z · LW · GW · 13 comments
I'll post five AI alignment research project ideas as comments. It would be great if you could approval-vote on them using upvotes. I.e. if you think a project idea isn't good, leave the comment as is; otherwise, give it a single upvote.
The project ideas follow this format (cf. The Craft of Research):
I'm studying <topic>,
because I want to <question that guides the search>,
in order to help my reader understand <more significant
question that would be informed by an answer to the
previous question>.
The project ideas are fixed-width in order to preserve the indentation. If they get formatted strangely, you might be able to fix it by increasing the width of your browser window or zooming out.
13 comments
Comments sorted by top scores.
comment by rmoehn · 2019-08-08T07:10:59.539Z · LW(p) · GW(p)
I'm studying Bayesian machine learning,
because I want to understand how to make ML systems that notice when they
are confused,
in order to help my reader understand how to make ML systems that will
ask the overseer for input when doing otherwise would lead to failure.
- More a study project than a research project.
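To make the "notice when they are confused" part concrete, here is a toy sketch of one shape this could take: MC dropout as a cheap stand-in for a Bayesian posterior, with high predictive entropy triggering a request to the overseer. The model, the threshold, and the ask_overseer hook are placeholders of mine, not a proposal.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MCDropoutClassifier(nn.Module):
    def __init__(self, n_in, n_hidden, n_classes, p=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, n_hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(n_hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def predict_or_ask(model, x, n_samples=20, entropy_threshold=0.5):
    # Average the softmax over several stochastic forward passes (MC dropout)
    # and treat high predictive entropy as "the model is confused".
    model.train()  # keep dropout active so each pass is a different sample
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(x), dim=-1) for _ in range(n_samples)]
        ).mean(dim=0)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    if entropy.item() > entropy_threshold:
        return "ask_overseer"  # too uncertain: hand the input to the overseer
    return probs.argmax(dim=-1).item()
```

Whether dropout-based uncertainty is good enough to catch the failures that matter is exactly the kind of question the study would have to answer.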
comment by rmoehn · 2019-08-08T07:11:38.685Z · LW(p) · GW(p)
I'm studying ways to improve the sample efficiency of a supervised learner,
because I want to know how to reduce the number of calls to H in
‘Supervising strong learners by amplifying weak experts’
(https://www.lesswrong.com/s/EmDuGeRw749sD3GKd/p/xKvzpodBGcPMq7TqE),
in order to help my reader understand how we can adapt that
proof-of-concept for solving real world tasks that require even more
training data.
- This doesn't just mean achieving more with the samples we have. It can mean
finding new kinds of samples that convey more information, and finding new
ways of extracting them from the human and conveying them to the learner.
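As a toy illustration of one such direction (my own sketch, not something from the paper): an active-learning-style loop that calls H only on the pool items the current learner is least confident about. The learner, query rule, and budget are arbitrary stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_query_loop(H, X_pool, X_seed, y_seed, rounds=5, k=10):
    # X_seed, y_seed: a small labelled starting set (numpy arrays, >= 2 classes).
    # H: a callable standing in for the amplified human -- expensive to call.
    X_train, y_train = list(X_seed), list(y_seed)
    remaining = list(range(len(X_pool)))
    model = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        model.fit(np.array(X_train), np.array(y_train))
        probs = model.predict_proba(X_pool[remaining])
        top_two = np.sort(probs, axis=1)[:, -2:]
        margin = top_two[:, 1] - top_two[:, 0]   # small margin = low confidence
        ask = np.argsort(margin)[:k]             # query H only on these items
        for i in ask:
            idx = remaining[i]
            X_train.append(X_pool[idx])
            y_train.append(H(X_pool[idx]))       # the only place H is called
        remaining = [r for j, r in enumerate(remaining) if j not in set(ask)]
    return model
```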
comment by rmoehn · 2019-08-10T00:28:19.919Z · LW(p) · GW(p)
Thanks for the votes so far! The poll is still open.
By the way, I'd prefer it if you gave only upvotes – that's how approval voting works. If you're concerned that this would skew my total karma, feel free to balance your upvotes by downvoting this comment.
comment by Alicorn · 2019-08-10T18:03:31.681Z · LW(p) · GW(p)
Are you aware that people's votes are worth different amounts? I do not think there's a way to vote less than one's default vote amount.
comment by rmoehn · 2019-08-10T21:58:18.300Z · LW(p) · GW(p)
No, I wasn't aware of that. Then I guess I have to come up with a different mechanism for my next poll.
comment by Ben Pace (Benito) · 2019-08-10T22:52:05.347Z · LW(p) · GW(p)
Note that if people do only give upvotes, then you can hover over a comment’s score to see the total number of votes on it, which is what you’re looking for here.
comment by rmoehn · 2019-08-08T07:12:14.646Z · LW(p) · GW(p)
I'm studying the use of a discriminator in imitation learning,
because I want to find out how to help humans produce demonstrations that
the agent can imitate,
in order to help my reader understand how we might use imitation
learning to solve the reward engineering problem.
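For concreteness, here is a rough GAIL-style sketch of the kind of setup I have in mind (my illustration; the architecture and reward shaping are placeholders): a discriminator learns to tell expert state-action pairs from the agent's, and its output becomes a learned reward for the imitator.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))  # unnormalised logit

def discriminator_step(disc, opt, expert_batch, agent_batch):
    # One update: push the discriminator to label expert pairs 1, agent pairs 0.
    bce = nn.BCEWithLogitsLoss()
    expert_logits = disc(*expert_batch)
    agent_logits = disc(*agent_batch)
    loss = (bce(expert_logits, torch.ones_like(expert_logits))
            + bce(agent_logits, torch.zeros_like(agent_logits)))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def imitation_reward(disc, obs, act):
    # Reward the agent for pairs the discriminator mistakes for the expert's.
    with torch.no_grad():
        return -torch.log(1.0 - torch.sigmoid(disc(obs, act)) + 1e-8)
```

The research question then sits on the human side: what signal from the discriminator would help the demonstrator produce demonstrations the agent can actually imitate.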
comment by rmoehn · 2019-08-08T07:12:39.691Z · LW(p) · GW(p)
I'm studying the effects of importance sampling on the behaviour that an RL
agent learns,
because I want to find out whether it can lead to undesirable outcomes,
in order to help my reader understand whether importance sampling can
solve the problem of widely varying rewards in reward engineering.
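A toy numerical illustration of the effect I'd look at (illustrative numbers only): when the behaviour policy rarely takes an action that the target policy takes often, a handful of samples carry huge importance weights, and the estimated return swings around from run to run.

```python
import numpy as np

rng = np.random.default_rng(0)

def importance_sampled_return(n):
    # Behaviour policy picks the rare high-reward action with prob 0.01;
    # the target policy would pick it with prob 0.5.
    behaviour_p, target_p = 0.01, 0.5
    rare = rng.random(n) < behaviour_p
    rewards = np.where(rare, 100.0, 1.0)
    weights = np.where(rare, target_p / behaviour_p,
                       (1 - target_p) / (1 - behaviour_p))
    return np.mean(weights * rewards)

# The true expected reward under the target policy is 50.5, but each estimate
# is dominated by however many weight-50 samples happen to show up.
print([round(importance_sampled_return(1000), 1) for _ in range(5)])
```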
comment by rmoehn · 2019-08-08T07:12:57.266Z · LW(p) · GW(p)
I'm studying the effects of an inconsistent comparison function on optimizing
with comparisons,
because I want to know whether it prevents the two agents from converging on
a desirable equilibrium quickly enough,
in order to help my reader understand whether optimizing with
comparisons can solve the problem of inconsistency and unreliability in
reward engineering.
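A minimal toy of the failure mode in question (my illustration): with a consistent comparator, a keep-the-winner loop settles on one candidate; with an intransitive comparator (A > B, B > C, C > A) the winner keeps cycling and never converges.

```python
import itertools

def cyclic_prefer(a, b):
    # Intransitive preferences: A > B, B > C, C > A (rock-paper-scissors).
    beats = {("A", "B"), ("B", "C"), ("C", "A")}
    return a if (a, b) in beats else b

def run_tournament(prefer, candidates, rounds=9):
    # Keep a running 'champion' and let the candidates challenge it in turn.
    champion = candidates[0]
    history = []
    for challenger in itertools.islice(itertools.cycle(candidates), rounds):
        champion = prefer(champion, challenger)
        history.append(champion)
    return history

print(run_tournament(cyclic_prefer, ["A", "B", "C"]))
# -> ['A', 'A', 'C', 'C', 'B', 'B', 'A', 'A', 'C']: the champion never settles.
```

The interesting question is whether realistic levels of inconsistency slow convergence too much, not whether a perfectly cyclic comparator breaks it (it obviously does).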
comment by John_Maxwell (John_Maxwell_IV) · 2019-08-10T04:37:10.391Z · LW(p) · GW(p)
Can you explain this one a bit more? It seems to me that if the human is giving inconsistent answers, in the sense that the human says A > B and B > C and C > A, then the thing to do is to flag this and ask them to resolve the inconsistency instead of trying to find a way to work around it. Interpretability > Magic, I say.
comment by rmoehn · 2019-08-11T05:10:12.160Z · LW(p) · GW(p)
I don't think that would work in this case. I derived the project idea from Thoughts on reward engineering [LW · GW], section 2. There the overseer generates rewards based on its preferences and provides these rewards to RL agents.
Suppose training starts with the overseer generating rewards from its preferences and the agents updating their value functions accordingly. After a while the agents propose something new, and the overseer generates a reward that is inconsistent with those it has generated before. But it happens that this new reward reflects the true preference, and the proper fix would be to revise the earlier rewards. However, those rewards have already been handed out – I guess it would be hard to reverse the corresponding changes in the value functions.
Of course one could record all actions and rewards and snapshots of the value functions, then rewind and reapply with revised rewards. But given today's model sizes and training volumes, it's not that straightforward.
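Here's a toy version of what I mean, with a tabular value function where rewinding is cheap; the point is that nothing like this is cheap once the value function is a large neural network:

```python
from collections import defaultdict

def fit_q(transitions, alpha=0.1, gamma=0.9, passes=50):
    # Rebuild a tabular Q-function from scratch from a logged transition list.
    q = defaultdict(float)
    for _ in range(passes):
        for s, a, r, s_next, next_actions in transitions:
            best_next = max((q[(s_next, a2)] for a2 in next_actions), default=0.0)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
    return dict(q)

log = [
    ("s0", "left", 1.0, "s1", ["left", "right"]),
    ("s1", "right", 0.0, "terminal", []),
]
q_v1 = fit_q(log)

# The overseer later judges the first reward inconsistent and revises it to 0.0;
# refitting from the revised log is trivial here, not at today's model sizes.
log[0] = ("s0", "left", 0.0, "s1", ["left", "right"])
q_v2 = fit_q(log)
```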