AI labs' requests for input

post by Zach Stein-Perlman · 2023-08-18T17:00:26.377Z

This is a question post.

AI labs sometimes publicly ask for input on their actions, their products, and the future of AI. Examples include OpenAI's Democratic inputs to AI program and ChatGPT Feedback Contest, and arguably labs' bug bounty programs for security vulnerabilities (OpenAI, Google, Meta). I'd like to collect these requests; please reply with other examples you're aware of. I'm also interested in ideas/recommendations for what labs should request input on and how they should do so (e.g. a bug bounty for model outputs).


Labs also seek input non-publicly. For example, labs have commissioned external red-teaming and model evaluations, worked with biosecurity experts to understand how near-future AI systems could contribute to the development of biological weapons, and consulted external forecasters. Various kinds of external audits have also been proposed. I'm interested in collecting examples and ideas for this non-public input as well, but less so.

Crossposted to the EA Forum.
