how do the CEOs respond to our concerns?
post by KvmanThinking (avery-liu) · 2025-02-11T23:39:27.654Z · LW · GW · 2 comments
This is a question post.
Contents
- Answers
  - Brendan Long (5)
  - Mordechai Rorvig (3)
- 2 comments
When a decent rationalist walks up to Sam Altman, for example, and presents our arguments for AI doom, how does he respond? What stops us from simply walking up to the people in charge of these training runs, explaining the concept of AI doom to them very slowly and carefully while rebutting all their counterarguments, and helping them all coordinate to stop their AI capabilities development at the same time, giving our poor AI safety freelancers enough time to stumble and philosophize their way to a solution? What is their rejection of our logic?
Or is it simply that they have a standard rejection for everything?
Answers
Sam Altman is almost certainly aware of the arguments and just doesn't agree with them. The OpenAI emails [? · GW] are helpful for background on this, but at least back when OpenAI was founded, Elon Musk seemed to take AI safety relatively seriously.
Elon Musk to Sam Teller - Apr 27, 2016 12:24 PM
History unequivocally illustrates that a powerful technology is a double-edged sword. It would be foolish to assume that AI, arguably the most powerful of all technologies, only has a single edge.
The recent example of Microsoft's AI chatbot shows how quickly it can turn incredibly negative. The wise course of action is to approach the advent of AI with caution and ensure that its power is widely distributed and not controlled by any one company or person.
That is why we created OpenAI.
They also had a dedicated AI safety team relatively early on, and they explicitly state their reasons in these emails:
- Put increasing effort into the safety/control problem, rather than the fig leaf you've noted in other institutions. It doesn't matter who wins if everyone dies. Related to this, we need to communicate a "better red than dead" outlook — we're trying to build safe AGI, and we're not willing to destroy the world in a down-to-the-wire race to do so.
They also explicitly reference this Slate Star Codex article, and I think Elon Musk follows Eliezer's Twitter.
↑ comment by KvmanThinking (avery-liu) · 2025-02-16T17:38:23.843Z · LW(p) · GW(p)
Has Musk tried to convince the other AI companies to also worry about safety?
↑ comment by Milan W (weibac) · 2025-02-18T18:01:16.112Z · LW(p) · GW(p)
The main concern right now is very much lab proliferation, the ensuing coordination problems, and disagreements / adversarial communication / overall insane and polarized discourse.
- Google DeepMind: They are older than OpenAI. They also have a safety team. They are very much aware of the arguments. I don't know about Musk's impact on them.
- Anthropic: They split off from OpenAI. My best guess is that they care about safety at least roughly as much as OpenAI does. Many safety researchers have quit OpenAI to go work for Anthropic over the past few years.
- xAI: Founded by Musk several years after he walked out of OpenAI. People working there have previously worked at other big labs. The general consensus seems to be that their alignment plan (at least as explained by Elon) is quite confused.
- SSI: Founded by Ilya Sutskever after he walked out of OpenAI, which he did after participating in a failed effort to fire Sam Altman. Very much aware of the arguments.
- Meta AI: To the best of my knowledge, aware of the arguments but very dismissive of them (at least at the upper management levels).
- Mistral AI: I don't know much, but they are probably about the same as Meta AI, or worse.
- Chinese labs: No idea. I'll have to look into this.
I am confident that there are relatively influential people within DeepMind and Anthropic who post here and/or on the Alignment Forum. I am unsure about people from other labs, as I am nothing more than a relatively well-read outsider.
I don't know, but I would suspect that Sam Altman and other OpenAI staff have strong views in favor of what they're doing. Isn't there probably some existing commentary out there on what he thinks? I haven't checked. But also, it is problematic to assume that science is rational. It isn't, and many people hold differing views up to and including the time that something becomes incontrovertibly established.
Further, an issue here is when someone has a strong conflict of interest. If a human is being paid millions of dollars per year to pursue their current course of action—buying vacation homes on every continent, living the most luxurious lifestyle imaginable—then it is going to be hard for them to be objective about serious criticisms. This is why I can't just go up to the Exxon Mobil CEO and say, 'Hey! Stop drilling, it's hurting the environment!' and expect them to do anything about it, even though it would be a completely non-controversial statement.
↑ comment by KvmanThinking (avery-liu) · 2025-02-12T15:46:31.878Z · LW(p) · GW(p)
The difference is that if the Exxon Mobil CEO internalizes that (s)he is harming the environment, (s)he has to go and get a completely new job, probably building dams or something. But if Sam Altman internalizes that he is increasing our chance of extinction, all he has to do is tell all his capability researchers to work on alignment, and money is still coming in; only now, less of it comes from ChatGPT subscriptions and more of it comes from grants from the Long-Term Future Fund. It's a much easier and lighter shift. Additionally, he knows that he can go back to working on capabilities once he's reasonably sure that he's not going to kill everyone, while the Exxon Mobil CEO knows (s)he can never drill again.
If we frame this as a little indefinite pause that all the tech companies undertake at the same time, to allow them to chill, work on safety, smell the roses, and seem all wise and understanding to the general public, then I think they would be on board, although the rich and powerful are not exactly known for being able to update their beliefs.
2 comments
Comments sorted by top scores.
comment by RHollerith (rhollerith_dot_com) · 2025-02-19T15:33:04.104Z · LW(p) · GW(p)
Your question would have been better without the dig at theists and non-vegans.
↑ comment by KvmanThinking (avery-liu) · 2025-02-19T22:54:57.794Z · LW(p) · GW(p)
Good point. Really sorry. Just changed it.