EleutherAI's GPT-NeoX-20B release
post by leogao · 2022-02-10T06:56:41.155Z · LW · GW · 3 comments
This is a link post for http://eaidata.bmk.sh/data/GPT_NeoX_20B.pdf
GPT-NeoX-20B is a 20 billion parameter autoregressive language model whose weights will be made freely and openly available to the public through a permissive license. It is, to the best of our knowledge, the largest dense autoregressive model that has publicly available weights. In this paper, we describe the model architecture and training, evaluate its performance, and discuss the broader impacts of its release. We are open-sourcing the training and evaluation code, as well as the model weights, at https://github.com/EleutherAI/gpt-neox.
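For readers who want to try the model directly, here is a minimal sketch of loading the released checkpoint for inference. It assumes the weights are mirrored on the Hugging Face Hub under the id EleutherAI/gpt-neox-20b; that hosting path is an assumption on my part, not something the paper specifies (the canonical release is the code and checkpoint in the gpt-neox repository linked above). The sampling settings are illustrative, not recommendations from the paper.

```python
# Minimal sketch: load GPT-NeoX-20B for inference.
# Assumes the released weights are available on the Hugging Face Hub as
# "EleutherAI/gpt-neox-20b" (an assumed convenience path; the canonical
# release is the checkpoint in the gpt-neox repository).
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "EleutherAI/gpt-neox-20b"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    device_map="auto",   # shard the 20B parameters across available GPUs
    torch_dtype="auto",  # keep the checkpoint's native precision
)

# Generate a short continuation from a prompt.
inputs = tokenizer("GPT-NeoX-20B is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```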
The alignment section of the paper is probably of particular interest to this community.
3 comments
comment by Owain_Evans · 2022-02-10T18:00:45.337Z · LW(p) · GW(p)
I get "potential security risk" from Firefox for that pdf link.
comment by A Ray (alex-ray) · 2022-02-10T19:20:06.273Z · LW(p) · GW(p)
Just copy-pasting the section:
We believe that Transformative Artificial Intelligence (TAI) [Karnofsky et al., 2016] is approaching [Cotra, 2020, Grace et al., 2018], and that these systems will cause catastrophic damage if they are misaligned with human values [Fox and Shulman, 2013, Omohundro, 2008]. As such, we believe it is essential to prioritize and help facilitate technical research that ensures TAI’s values will be aligned with ours.
AI Alignment generally refers to the problem of how to ensure increasingly powerful and autonomous AI systems perform the users’ wishes faithfully and without unintended consequences. Alignment is especially critical as we approach human and superhuman levels of intelligence, as powerful optimization processes amplify small errors in goal specification into large misalignments [Goodhart, 1984, Manheim and Garrabrant, 2019, Fox and Shulman, 2013], and misalignments in this regime will result in runaway optimization processes that evade alteration or shutdown [Omohundro, 2008, Benson-Tilsen and Soares, 2016, Turner et al., 2021], posing a significant existential risk to humanity. Additionally, even if the goal is specified correctly, superhuman models may still develop deceptive subsystems that attempt to influence the real world to satisfy their objectives [Hubinger et al., 2021]. While current systems are not yet at the level where the consequences of misalignment pose an existential threat, rapid progress in the field of AI has increased the concern that the alignment problem may be seriously tested in the not-too-distant future.
Much of the alignment literature focuses on the more theoretical aspects of alignment [Demski and Garrabrant, 2020, Yudkowsky and Soares, 2018, Taylor, 2016, Garrabrant et al., 2016, Armstrong and Mindermann, 2018, Hubinger et al., 2021], abstracting away the specifics of how intelligence will be implemented, due to uncertainty over the path to TAI. However, with recent advances in capabilities, it may no longer be the case that the path to TAI is completely unpredictable. In particular, recent increases in the capabilities of large language models (LLMs) raise the possibility that the first generation of transformatively powerful AI systems may be based on similar principles and architectures as current large language models like GPT. This has motivated a number of research groups to work on “prosaic alignment” [Christiano, 2016, Askell et al., 2021, Ouyang et al., 2021], a field of study that considers the AI alignment problem in the case of TAI being built primarily with techniques already used in modern ML. We believe that due to the speed of AI progress, there is a significant chance that this assumption is true, and, therefore, that contributing to and enabling prosaic alignment research will have a large impact.
The open-source release of this model is motivated by the hope that it will allow alignment researchers who would not otherwise have access to LLMs to use them. While the release carries risks from the potential acceleration of capabilities research, which may place further time pressure on solving the alignment problem, we believe the benefits of this release outweigh the risks of accelerating capabilities research.