> When you visit the website for PauseAI, you might find some very steep proposals for Pausing AI [...] (I could train one)
Their website is probably outdated. I read their proposals as “keep the current level of AI, regulate stronger AI”. Banning current LLaMA models seems silly from an x-risk perspective, in hindsight. I think PauseAI is perfectly fine with pausing “too early”, which I personally don't object to.
> If you look at the kind of claims that PauseAI makes in their risks page [...]
PauseAI is clearly focused on x-risk. The risks page reads like an attempt to guide the general public from the naively realistic "Present dangers" gradually towards the (exotic-sounding) x-risks. You can disagree with that approach, of course. I would disagree, though, that mixing AI Safety and AI Ethics amounts to being "very careless about truth".
Thank you for answering my question! I wanted to know what people here think about PauseAI, so this fits well.
> [...] in this respect the discourse around PauseAI seems unexceptionable and rather predictable.
Yes. I hope we can get better at coordination... I would frame PauseAI as "the reasonable [aspiring] mass movement". I like that it is easy to support or join PauseAI even without an ML PhD. StopAI is a more radical organization by comparison.
Thank you for responding!
A: Yeah. I'm mostly positive about their goal of working towards "building the Pause button". I think protesting against "relatively responsible companies" makes a lot of sense when those companies seem to use their lobbying power more against AI-Safety-aligned governance than in favor of it. You're obviously very aware of the details here.
B: I asked my question because I'm frustrated with that. Is there a way for the AI Safety community to coordinate a better reaction?
C:
> There does not seem to be a legible path to prevent possible existential risks from AI without slowing down its current progress.
I phrased that a bit sharply, but I find your reply very useful:
> Obviously P(doom | no slowdown) < 1. Many people's work reduces risk in both slowdown and no-slowdown worlds, and it seems pretty clear to me that most of them shouldn't switch to working on increasing P(slowdown).[1]
These are quite strong claims! I'll take them as somewhat representative of the community. My attempt at paraphrasing: It's not (strictly?) necessary to slow down AI to prevent doom. There is a lot of useful AI Safety work going on that is not focused on slowing or pausing AI, and this work is useful even if AGI is coming soon.
[1] Regarding "it seems pretty clear to me that most of them shouldn't switch to working on increasing P(slowdown)": saying "PauseAI good" does not take much of an AI Safety researcher's time.