Superalignment
post by Douglas_Reay · 2023-11-18T22:37:41.577Z
This is a link post for https://openai.com/blog/introducing-superalignment
OpenAI have announced the approach they intend to use to ensure humans stay in control of AIs smarter than they are:
Our goal is to build a roughly human-level automated alignment researcher. We can then use vast amounts of compute to scale our efforts, and iteratively align superintelligence.
To align the first automated alignment researcher, we will need to 1) develop a scalable training method, 2) validate the resulting model, and 3) stress test our entire alignment pipeline:
- To provide a training signal on tasks that are difficult for humans to evaluate, we can leverage AI systems to assist evaluation of other AI systems (scalable oversight). In addition, we want to understand and control how our models generalize our oversight to tasks we can’t supervise (generalization).
- To validate the alignment of our systems, we automate search for problematic behavior (robustness) and problematic internals (automated interpretability).
- Finally, we can test our entire pipeline by deliberately training misaligned models, and confirming that our techniques detect the worst kinds of misalignments (adversarial testing).
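To make the first bullet a bit more concrete, here is a minimal, purely illustrative sketch of what "leverage AI systems to assist evaluation of other AI systems" could look like: a critic model surfaces candidate flaws in an answer so that a weaker judge (a human, or a trusted model) only has to check narrower claims. None of the names or interfaces below come from OpenAI; they are assumptions made up for illustration.

```python
from typing import Callable, List

def assisted_evaluation(
    task: str,
    answer: str,
    critic: Callable[[str, str], List[str]],        # proposes possible flaws in the answer
    judge: Callable[[str, str, List[str]], float],  # weaker evaluator, returns a score in [0, 1]
) -> float:
    """Score an answer on a hard task with help from an AI critic.

    The judge alone may be too weak to evaluate the answer reliably;
    the critic surfaces candidate flaws so the judge only has to check
    specific, narrower claims (the core idea behind scalable oversight).
    """
    critiques = critic(task, answer)
    return judge(task, answer, critiques)

# Toy stand-ins so the sketch runs without any model API.
def toy_critic(task: str, answer: str) -> List[str]:
    return [
        "Does the answer actually address the task?",
        "Are any steps asserted without justification?",
    ]

def toy_judge(task: str, answer: str, critiques: List[str]) -> float:
    # A real judge would be a human or a weaker/trusted model reviewing the critiques.
    return 1.0 if answer.strip() else 0.0

score = assisted_evaluation("Summarize the proof.", "The proof shows ...",
                            toy_critic, toy_judge)
print(f"training signal: {score}")
```

The point of the pattern, as I understand the announcement, is that evaluation decomposes into checks the overseer can actually verify, which is what would let the training signal scale to tasks humans cannot grade directly.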
4 comments
comment by Capybasilisk · 2023-11-19T23:56:37.924Z
When Jan Leike (OAI's head of alignment) appeared on the AXRP podcast, the host asked how they plan on aligning the automated alignment researcher. Jan didn't appear to understand the question (which had been the first to occur to me). That doesn't inspire confidence.
comment by sudo · 2023-11-20T02:14:51.651Z
dupe: https://www.lesswrong.com/posts/Hna4aoMwr6Qx9rHBs/linkpost-introducing-superalignment
comment by kithpendragon · 2023-11-19T12:16:00.534Z
So the plan is to add layers of human and dubiously-aligned-human-level-AI intervention in an effort to discover how to keep AI aligned. That is to say, "If we throw enough additional complexity at it, the systems that we already don't understand won't hurt us!"
Like the man said, "the bureaucratic mentality is the only constant in the universe".
comment by simon · 2023-11-19T04:23:06.377Z
This plan, as currently worded, has me somewhat concerned.
I think that using AI to solve alignment should be possible, but to me this relies on it not being agentic (i.e., not making decisions based on satisfying preferences about the future of the world).
This "superalignment" plan, the way it is currently worded
- doesn't seem to show any visible recognition of the importance of avoiding the AI being agentic (e.g. it calls for an "automated alignment researcher" instead of a "research assistant" or something), and
- is proposing to do things that seem to me might pose a risk of causing or enhancing agency (e.g. it looks like it amounts to recursive self-improvement plus extra safety steps)
But hey, maybe the safety stuff works and avoids the researcher being agentic.
Even then, in order for the ultimate aligned AI to wind up "aligned", it does have to care about the future at least indirectly (via human preferences about the future). An aligned AI doesn't have to (and IMO cannot, if it is to be *really* aligned) care about the future directly, i.e. for any reason other than human preferences about the future. But if it is designed without an understanding of this, so that people just try to instill good-looking-to-humans preferences and produce good-looking-to-humans-in-the-short-run actions, then it will end up with independent preferences about the future, which will inevitably not be perfectly the same as humans' preferences.