How might we make better use of AI capabilities research for alignment purposes?
post by Jemal Young (ghostwheel) · 2022-08-31T04:19:33.239Z · LW · GW · 2 comments
This is a question post.
When I check arXiv for new AI alignment research papers, I see mostly capabilities research papers, presumably because most researchers are working on capabilities. I wonder whether there's alignment-related value to be extracted from all that capabilities research, and how we might get at it. Is anyone working on this, or does anyone have any good ideas?
Answers
answer by Martin Vlach
I'm fairly interested in this topic and wrote a short draft here [LW · GW] explaining a few basic reasons to explicitly develop capability-measuring tools, since they would improve risk mitigation. What resonates from your question is that for 'known categories', we could start from what the papers recognise and dig deeper for more fine-grained (sub-)capabilities.
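To make that a bit more concrete, here is a minimal, purely hypothetical sketch (not an existing tool) of what a harness for measuring fine-grained sub-capabilities could look like. The `SubCapabilityEval` structure, the example arithmetic sub-capabilities, and the `stub_model` placeholder are all assumptions for illustration:

```python
"""Hypothetical sketch of a fine-grained capability evaluation harness.

Nothing here is an existing tool: the suite structure, the example
sub-capabilities, and the model callable are illustrative assumptions.
"""

from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple


@dataclass
class SubCapabilityEval:
    """A narrow sub-capability probed with a handful of task instances."""
    name: str
    cases: List[Tuple[str, str]]  # (prompt, substring expected in the output)


def evaluate(model: Callable[[str], str],
             suite: Dict[str, List[SubCapabilityEval]]) -> Dict[str, Dict[str, float]]:
    """Return a pass rate per sub-capability, grouped under broad capabilities."""
    report: Dict[str, Dict[str, float]] = {}
    for capability, evals in suite.items():
        report[capability] = {}
        for ev in evals:
            passed = sum(expected in model(prompt) for prompt, expected in ev.cases)
            report[capability][ev.name] = passed / max(len(ev.cases), 1)
    return report


if __name__ == "__main__":
    # A broad capability ("arithmetic") broken into narrower sub-capabilities,
    # as one might do after reading what a capabilities paper reports.
    suite = {
        "arithmetic": [
            SubCapabilityEval("two_digit_addition",
                              [("What is 17 + 25?", "42"), ("What is 48 + 33?", "81")]),
            SubCapabilityEval("carrying_across_columns",
                              [("What is 99 + 1?", "100")]),
        ],
    }
    stub_model = lambda prompt: "42"  # placeholder; swap in a real model call
    print(evaluate(stub_model, suite))
```

The decomposition mirrors the idea above: start from the categories a paper already reports, then probe narrower sub-capabilities under each, so the result is a per-sub-capability pass rate rather than a single aggregate score.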
2 comments
Comments sorted by top scores.
comment by P. · 2022-08-31T09:24:50.305Z · LW(p) · GW(p)
Do you mean from what already exists or from changing the direction of new research?
Replies from: ghostwheel
↑ comment by Jemal Young (ghostwheel) · 2022-08-31T15:46:23.585Z · LW(p) · GW(p)
I mean extracting insights from capabilities research that currently exists, not changing the direction of new research. For example, specification gaming is on everyone's radar because it was observed in capabilities research (the authors of the linked post compiled this list of specification-gaming examples, some of which are from the 1980s). I wonder how much more opportunity there might be to piggyback on existing capabilities research for alignment purposes, and maybe to systemize that going forward.