Is OpenAI net negative for AI Safety?
post by Lysandre Terrisse · 2024-11-02T16:18:02.859Z · LW · GW · 1 comment
This is a question post.
I recently saw a post [LW · GW] arguing that top AI labs should shut down. This made me wonder whether the AI Safety community thinks OpenAI is net negative for AI safety. I chose OpenAI because I consider it the most representative top AI lab (in the sense that, if you ask someone to think of an AI lab, they will probably think of that one), but feel free to discuss other AI labs as well.
Answers
1 comment
comment by RHollerith (rhollerith_dot_com) · 2024-11-05T02:07:04.225Z · LW(p) · GW(p)
Yes, humanity would be more likely to survive if OpenAI never existed or if it closed tomorrow.
I wonder why no one else has answered. Am I stepping on some landmine I don't know about?
What would drastically increase P(survival) is stopping all large training runs. If that turns out to be impossible, then some labs might be "better" than the average lab, in that if their model is the first one to become capable of causing human extinction, it might choose not to do so because of technical details in how the lab made the model. (I don't put a lot of hope in this possibility.)
To my knowledge, no one who is not employed at OpenAI and who is not an investor in OpenAI believes that OpenAI is one of these "better" labs. OK, that is not literally true, but it is unlikely that anyone who believes OpenAI is one of the "better" labs, and who is neither employed at OpenAI nor an investor in it, could provide an argument longer than 50 words that you or I would consider rational and coherent.
But even if OpenAI were shut down tomorrow and everyone working there were permanently prevented from working in AI (a pleasant thought!), the AI enterprise would still be the thorniest danger facing humanity.