AI Governance & Strategy: Priorities, talent gaps, & opportunities

post by Akash (akash-wasil) · 2023-03-03T18:09:26.659Z · LW · GW · 2 comments

Contents

  Priority Areas
    Model evaluations
    Compute governance
    Security
    Publication and model-sharing policies
    Communicating about AI x-risk
  Conclusion

Over the last few weeks, I’ve had 1:1s with several AI governance professionals to develop better models of the current AI governance & strategy landscape. Several topics came up repeatedly in those conversations.

This post is my attempt to summarize some takeaways from those conversations. I list some “priority areas” in AI governance & strategy, summarize them briefly, and describe potential talent gaps in each area. I don't claim that my list is comprehensive, and I welcome people to add their own ideas in the comments. 

If you think you may have some of the relevant talents/aptitudes and are interested in working in any of these areas, feel free to reach out to me, and I may connect you to relevant professionals. (Feel free to have a low bar for reaching out; I'll ask you for more information if needed.)

Please also be aware that there are downside risks in each of these areas. I suggest you get in touch with relevant professionals before “jumping in” to any of these areas.

Priority Areas

I use “priority areas” to refer to topics that frequently came up when talking with AI governance professionals. Caveats: this is not a rigorous method, this list is not comprehensive, some topics were excluded intentionally, the list probably overweights topics that I evaluate as valuable (on my inside view), and priorities will inevitably change as the field continues to evolve.

For each priority area, I offer a brief summary, as well as a description of the kinds of career aptitudes that might make someone an especially good fit for working in the area.

Model evaluations

Summary: There are many ways models could be dangerous, but it’s difficult to detect these failure modes. Can we develop and implement “tests” that help us determine if a model is dangerous? 

Some people are working on technical tests that can determine if a model has dangerous capabilities or appears to be misaligned. Others are thinking more broadly about what kinds of evals would be useful. Some people are focused on creating agreements that labs or governments could implement (e.g., if a Deception Eval is triggered, everyone agrees to stop scaling until Y evidence is acquired). 

Current gaps:

Additional resources: See this post by Beth [LW · GW], this post by me [LW · GW], and this paper by Ethan Perez.

Compute governance

Summary: AI progress has largely been driven by compute. Can we understand compute trends and design regulations based on compute?

Current gaps:

Additional resources: See this sequence [? · GW] and this reading list by Lennart Heim, as well as this post [LW · GW] by Mauricio.

Security

Summary: AI labs hold valuable information that adversaries may try to access, and AI systems themselves may become capable of assisting with hacking or of hacking autonomously. Furthermore, security professionals often possess a deep security mindset [LW · GW], which could be useful across a variety of decisions that AI labs make in the upcoming years. Can security professionals help AI labs avoid information security risks and generally cultivate a culture centered on security mindset?

Current gaps:

Additional resources: See this post by Jeffrey Ladish and Lennart Heim [EA · GW], this post by elspood [LW · GW], this post by Eliezer Yudkowsky [LW · GW], and the information security section in this post by Holden Karnofsky [LW · GW].

Publication and model-sharing policies

Summary: AI labs face difficult decisions about whether to publish research findings and how widely to share models. Can we develop and implement reasonable policies that balance the benefits of sharing while mitigating the risks?

Current gaps:

Additional resources: See this paper by Toby Shevlane and Allan Dafoe, this paper by Nick Bostrom, and this paper by Toby Shevlane.

Communicating about AI x-risk

Summary: Several governance ideas will require that policymakers, industry leaders, and other groups have a strong understanding of the dangers and potential catastrophic risks of advanced AI systems. How can we communicate ideas and threat models clearly and responsibly to these audiences?

Current gaps:

Additional resources: See this post by Holden Karnofsky [LW · GW].

Conclusion

As mentioned, please feel free to reach out if you have relevant skills/aptitudes and think you may want to contribute in any of these areas. 

For each of these areas, I’m aware of professionals/researchers who are interested in talking with junior folks who have relevant skills & backgrounds.

Also, be aware that there are downside risks, and standard advice applies: talk to people before doing things, remember that it is easy for well-intentioned people to accidentally produce net-negative work, and be wary of taking unilateralist actions.

With that in mind, I’m excited to see more people thinking carefully and seriously about these topics. I hope you think about ways you might be able to contribute in some of these areas or identify areas that aren’t on this list. 

I’m grateful to Lennart Heim and Jeffrey Ladish for providing feedback on sections of this post.

2 comments


comment by [deleted] · 2023-03-04T02:32:19.203Z · LW(p) · GW(p)

This is a great post - concise and clear. 

comment by Michael Soareverix (michael-soareverix) · 2023-03-03T21:06:37.721Z · LW(p) · GW(p)

Hey Akash, I sent you a message about my summer career plans and how I can bring AI Alignment into that. I'm a senior in college with a few relevant skills, and I'd really like to connect with some professionals in the field. I'd love to connect with or learn from you!