A survey of tool use and workflows in alignment research
post by Logan Riggs (elriggs), Jan (jan-2), janus, jacquesthibs (jacques-thibodeau) · 2022-03-23T23:44:30.058Z · LW · GW · 4 comments
TL;DR: We are building language model powered tools to augment alignment researchers and accelerate alignment progress. We could use your feedback on what tools would be most useful. We’ve created a short survey that can be filled out here.
We are a team from the current iteration of the AI Safety camp and are planning to build a suite of tools [? · GW] to help AI Safety researchers.
We’re looking for feedback on what kinds of tools would be most helpful to you as an established or prospective alignment researcher. We’ve put together a short survey to get a better understanding of how researchers work on alignment. We plan to analyze the results and make them available to the community (appropriately anonymized). The survey is here. If you would also be interested in talking directly, please feel free to schedule a call here.
This project is similar in motivation to Ought’s Elicit, but more focused on human-in-the-loop and tailored for alignment research. One example of a tool we could create would be a language model that intelligently condenses existing alignment research into summaries or expands rough outlines into drafts of full Alignment Forum posts. Another idea we’ve considered is a brainstorming tool that can generate new examples/counterexamples, new arguments/counterarguments, or new directions to explore.
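As a purely illustrative sketch of the summarization idea, a first prototype could be little more than a prompt template wrapped around an off-the-shelf completion API. The example below assumes the pre-1.0 `openai` Python client and a placeholder prompt and model; it is not a tool we have built, just the rough shape such a tool might take.

```python
# Minimal sketch of a "condense a post into a summary" tool.
# Assumes the pre-1.0 `openai` Python client; the prompt wording and
# model choice are placeholders, not a committed design.
import openai

SUMMARY_PROMPT = (
    "The following is an Alignment Forum post.\n\n"
    "{post}\n\n"
    "Write a concise summary of the key claims and arguments:\n"
)

def summarize_post(post_text: str, model: str = "text-davinci-002") -> str:
    """Return a model-generated summary of a single post."""
    response = openai.Completion.create(
        model=model,
        prompt=SUMMARY_PROMPT.format(post=post_text),
        max_tokens=300,
        temperature=0.3,  # low temperature: we want faithful condensation
    )
    return response["choices"][0]["text"].strip()
```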
In the long run, we’re interested in creating seriously empowering tools that fall under categorizations like STEM AI [LW · GW], Microscope AI [LW · GW], superhuman personal assistant AI [LW · GW], or plainly Oracle AI [? · GW]. These early tools are oriented towards more proof-of-concept work, but still aim to be immediately helpful to alignment researchers. Our prior that this is a promising direction is informed in part by our own very fruitful and interesting experiences using language models as writing and brainstorming aids.
One central danger of tools that increase research productivity is dual-use [LW · GW] for capabilities research. Consequently, we’re planning to ensure that these tools will be specifically tailored to the AI Safety community and not to other scientific fields. We do not intend to publish the specific methods we use to create these tools.
We welcome any feedback, comments, or concerns about our direction. Also, if you'd like to contribute to the project, feel free to join us in the #accelerating-alignment channel on the EleutherAI Discord.
Thanks in advance!
4 comments
Comments sorted by top scores.
comment by Rohin Shah (rohinmshah) · 2022-03-28T09:18:31.565Z · LW(p) · GW(p)
I'm curious how well a model finetuned on the Alignment Newsletter performs at summarizing new content (probably blog posts; I'd assume papers are too long and rely too much on figures). My guess is that it doesn't work very well even for blog posts, which is why I haven't tried it yet, but I'd still be interested in the results and would love it on the off chance that it actually was good enough to save me some time.
↑ comment by jacquesthibs (jacques-thibodeau) · 2022-04-04T21:35:06.016Z · LW(p) · GW(p)
We could definitely look into making the project evolve in this direction. In fact, we're building a dataset of alignment-related texts and a small part of the dataset includes a scrape of arXiv papers extracted from the Alignment Newsletter. We're working towards building GPT models fine-tuned on the texts.
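(A minimal sketch of what preparing such fine-tuning data could look like, assuming a hypothetical `newsletter_entries` structure and the prompt/completion JSONL format used by completion-style fine-tuning APIs; the real dataset and pipeline may differ.)

```python
# Hypothetical sketch: turn (source text, newsletter summary) pairs into
# prompt/completion JSONL records for completion-style fine-tuning.
# `newsletter_entries` is an assumed structure, not the actual dataset format.
import json

def build_finetune_file(newsletter_entries, out_path="alignment_summaries.jsonl"):
    """newsletter_entries: iterable of dicts with 'source_text' and 'summary' keys."""
    with open(out_path, "w") as f:
        for entry in newsletter_entries:
            record = {
                "prompt": entry["source_text"]
                + "\n\nPlanned summary for the Alignment Newsletter:\n",
                "completion": " " + entry["summary"].strip(),
            }
            f.write(json.dumps(record) + "\n")
```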
↑ comment by Logan Riggs (elriggs) · 2022-04-05T17:33:48.842Z · LW(p) · GW(p)
Ya, I was even planning on trying a prompt along the lines of:

[post/blog/paper] rohinmshah karma: 100 Planned summary for the Alignment Newsletter: \n>

and then feeding that, together with the generated summary, back in with:

Planned opinion:

to see if that gets some higher-quality summaries.
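(A minimal sketch of that two-step prompt, assuming a generic `complete(prompt) -> text` wrapper around whatever model is used; the karma value and headings simply imitate the Alignment Newsletter's layout.)

```python
# Sketch of the two-step prompt described above. `complete` is assumed to be
# any prompt -> text wrapper around a language model; the karma value and
# headings just imitate the Alignment Newsletter's layout.
def newsletter_style_summary_and_opinion(post_text, complete):
    summary_prompt = (
        post_text
        + "\n\nrohinmshah karma: 100\n"
        + "Planned summary for the Alignment Newsletter:\n> "
    )
    summary = complete(summary_prompt)

    # Feed the post plus generated summary back in to elicit an opinion.
    opinion_prompt = summary_prompt + summary + "\n\nPlanned opinion:\n> "
    opinion = complete(opinion_prompt)
    return summary, opinion
```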
↑ comment by Rohin Shah (rohinmshah) · 2022-09-04T07:43:03.909Z · LW(p) · GW(p)
Well, one "correct" generalization there is to produce much longer summaries, which is not actually what we want.
(My actual prediction is that changing the karma makes very little difference to the summary that comes out.)