"notkilleveryoneism" sounds dumb
post by bhauth · 2023-04-28T19:46:01.000Z · LW · GW · 27 comments
"AI safety" and "AI alignment" now often mean "making AI not say racist/illegal/etc things". The term "AI notkilleveryoneism" is now sometimes being used as a replacement that refers specifically to ASI-related risks.
I think "notkilleveryoneism" is the wrong term to use, for reasons including:
- The basic structure of simple words mashed together with a suffix is associated with fantasy, media for kids, and signalling a lack of effort.
- The "ism" suffix brings up associations to religions. You don't want to say "we are the tribe of people whose thing is opposition to X" - you just want to say "we're opposed to X".
- It's awkward to say, not catchy.
- There are broader concerns than AI literally killing everyone, and you probably want as big an alliance as possible.
What, then, might be better? Perhaps an analogy to biological or nuclear safety would be good, eg:
- Stop AI Gain-of-Function Research
- Stop AI Proliferation
Another angle is "enjoy the AI summer" framing. The ability of GPT-4 to fill out forms, the art generation by the various LoRA models...even just the ability to do good sentence embeddings and fast vector search - all these things will have big impacts in businesses, but it will take some time for people to figure out how to use them most effectively. You could say things like:
- Enjoy the AI summer.
- Taste the fruit before planting more.
Yet another angle is the "figure out how to distribute the gains equitably before moving on" and "AI power dynamics" framing. For example:
- Stop (Chinese-style) AI surveillance.
- Don't let them make you obsolete (the way horses were).
There are a number of possibilities. I'm not proposing any single one in particular; I'm just saying that people should use phrases other than "AI notkilleveryoneism".
I thought of the following terms for technical discussion:
- U-al = user-alignment
- O-al = owner-alignment
- S-al = society-alignment
- H-al = humanity-alignment
- I-al = intelligence-alignment
But for public-facing statements, such abbreviations are obviously unsuitable.
27 comments
Comments sorted by top scores.
comment by Raemon · 2023-04-28T20:13:52.128Z · LW(p) · GW(p)
So, on one hand, yes, it totally sounds dumb. But this seems to be missing the point of calling it "AI notkilleveryoneism", which is to draw attention to the fact that the last few times people tried naming this thing, people shifted to using it in a more generic way that didn't engage with the primary cruxes of the original namers*.
One of the key proposed mechanisms here is that the word is both specific enough and sounds low-status-enough that you can't possibly try to redefine it in a vague applause-lighty way that people will end up Safetywashing [LW · GW].
And, sure, there should also be a name that is, like, prestigious and reasonable sounding and rolls off the tongue. But most of the obvious words are kind of long and a mouthful and are likely to have syllables dropped for convenience (e.g. AI Existential Safety is harder to say than AI Safety). One of the points is to have a name that actively leans into the outrageousness of its length.
Another part of the point here is to deliberately puncture people's business-as-usual attitude, via outrageousness/humor.
And, also sure, you can disagree with all of this and think it's not a useful goal, or think that, as a joke-name, things went overboard and it's getting used more often than it should. But if you're actually trying to get the people using the word to stop, you need to engage more with the actual motivation.
*FWIW I do think "AI Safety" and "AI Alignment" aren't sufficiently specific names, and I think you really can't complain when those names end up getting used to mean things other than existential safety, and this was predictable in advance.
↑ comment by bhauth · 2023-04-28T23:39:00.687Z · LW(p) · GW(p)
the last few times people tried naming this thing, people shifted to using it in a more generic way that didn't engage with the primary cruxes of the original namers
Yes, but, that's because:
"AI Safety" and "AI Alignment" aren't sufficiently specific names, and I think you really can't complain when those names end up getting used to mean things other than existential safety
(Which I agree with you about.)
the word is both specific enough and sounds low-status-enough that you can't possibly try to redefine it in a vague applause-lighty way that people will end up Safetywashing
OK, but now it's being used on (eg) Twitter as an applause light for people who already agree with Eliezer, and the net effect of that is negative. Either it's used internally in places like LessWrong, where it's unnecessary, or it's used in public discourse, where it sounds dumb, which makes it counterproductive.
And, sure, there should also be a name that is, like, prestigious and reasonable sounding and rolls off the tongue. But most of the obvious words are kind of long and a mouthful and are likely to have syllables dropped for convenience
Yes, that's what I'm trying to make a start on here.
as a joke-name, things went overboard and it's getting used more often than it should
Yes, that is what I think. Here's a meme account on Twitter. Here's Zvi using it. These are interfaces to people who largely think it sounds dumb.
↑ comment by Raemon · 2023-04-29T00:00:39.468Z · LW(p) · GW(p)
I agree it's getting used publicly. And, to be clear, I don't have that strong an opinion on this; I'm not defending the phrase super hard. But, from my perspective, you haven't actually justified that a bad thing is definitely happening.
Some people on the internet think a thing sounds dumb, sure. The thing is that pushing an Overton window basically always has people laughing at you and thinking you're dumb, regardless. People say AI concerns are a weird silly outlandish doomer cult no matter how everything is phrased.
The goal here (on the part of the people saying the phrase) is not "build the biggest tent", nor is it "minimize sounding dumb". It's "speak plainly and actually convey a particular really bad thing that is likely to happen. Ensure that enough of the right people notice that an actual really bad thing is likely to happen, rather than glossing over and minimizing it."
Your post presumes "we're trying to build a big tent movement, and it should include things other than AI killing everyone." But, in fact, we spent several years where most of the public messaging was big-tent-ish, and it seemed like this did not actually succeed strategically.
Put another way – I agree that maybe it's correct to not sound dumb here. But I absolutely think you need to be willing to sound dumb, if that turns out to be the correct strategy. When I see posts like this I think they are often driven by a generator that is not actually about optimizing for winning at a strategic goal, but about avoiding social stigma (which is a very scary thing).
(I think there are counter-problems within the LW sphere of being too willing to be contrarian and edgy. But you currently haven't done any work to justify that the problem here is being too edgy rather than not edgy enough.)
(Meanwhile I super endorse trying to come up with non-dumb-sounding things that actually achieve the various goals. But, note that the people-saying-AI-notkilleveryoneism are specifically NOT optimizing for "build the biggest tent".)
↑ comment by bhauth · 2023-04-29T00:09:26.471Z · LW(p) · GW(p)
People say AI concerns are a weird silly outlandish doomer cult no matter how everything is phrased.
No, you're dead wrong here. Polls show widespread popular concern about AI developments. You should not give up on "not seeming like a weird silly outlandish doomer cult". If you want to actually get things done, you cannot give up on that.
↑ comment by Raemon · 2023-04-29T01:00:04.969Z · LW(p) · GW(p)
Hmm. So I do agree the recent polls that showed support for "generally worried" and "the Pause open letter" are an important strategic consideration here. I do think it's fairly reasonable to argue "look man you actually have the public support, please don't fuck it up."
So, thank you for bringing that up.
It still feels like it's not actually a counterargument to the particular point I was making – I do think there are (many) people who respond to taking AI extinction risk seriously with ridicule, no matter how carefully it's phrased. So if you're just running the check of "did anyone respond negatively to this?" the check will basically always return "yes", and it takes a more careful look at the situation to figure out what kind of communications strategy actually works.
comment by gilch · 2023-04-29T05:27:41.504Z · LW(p) · GW(p)
Poor Faulkner. Does he really think big emotions come from big words? He thinks I don't know the ten-dollar words. I know them all right. But there are older and simpler and better words, and those are the ones I use.
--Ernest Hemingway
"Notkilleveryonism" is apt. Sounding "dumb" might actually help it catch on. Surprising, outrageous, and controversial things tend to spread more on social media. Weirder things are more memorable. It's why many commercials are weird on purpose.
comment by Max H (Maxc) · 2023-04-28T19:55:34.883Z · LW(p) · GW(p)
There are broader concerns than AI literally killing everyone, and you probably want as big an alliance as possible.
I think this is specifically what the AI notkilleveryoneism term is trying to distinguish, though.
There are other concerns with other terms, but people wanted a term specifically for the concern that we're all going to be disassembled into tiny molecular squiggles, or other variations of the "unconscious meh [LW · GW]" outcome.
Maybe a better term for this would be "squiggle safety"? "avert the squiggle outcome"? "anti squiggleism"? "stop the squiggle"?!
↑ comment by Raemon · 2023-04-29T02:50:15.685Z · LW(p) · GW(p)
I think "squiggle" is the wrong word here since the whole point is to just be clear-at-a-glance what you're talking about.
↑ comment by Max H (Maxc) · 2023-04-29T03:24:31.271Z · LW(p) · GW(p)
It also implies a very particular worldview, even more narrow than AI notkilleveryoneism, which is kind of the opposite of what the OP was asking for. But I think it's even more un-co-optable and unambiguous, to people familiar with the jargon.
And I couldn't resist sharing "stop the squiggle" :)
A slightly more serious idea for capturing the above worldview in a semi-comprehensible-at-a-glance phrase: "molecular disassembly safety"? "molecular AI safety"? "Stop AI atomics"? ¯\_(ツ)_/¯
comment by A1987dM (army1987) · 2023-04-28T20:52:13.028Z · LW(p) · GW(p)
How 'bout "non-omnicidality"?
↑ comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-04-28T22:35:38.584Z · LW(p) · GW(p)
That sounds like a good option for the fancy version to use in academic papers. Not so useful for wide-audience public communication, though.
comment by tskoro (tai-skoropada) · 2023-04-28T22:11:57.599Z · LW(p) · GW(p)
How about existential alignment/existential safety, or x-alignment/x-safety?
↑ comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-04-28T22:34:35.743Z · LW(p) · GW(p)
'x' was already kinda taken by XAI meaning 'explainable AI'. https://en.wikipedia.org/wiki/Explainable_artificial_intelligence
↑ comment by tskoro (tai-skoropada) · 2023-04-28T22:53:59.052Z · LW(p) · GW(p)
I think AI x-safety is probably distinguishable enough from XAI that there wouldn't be much confusion. It also does not seem very susceptible to safetywashing, is easy to say, and has the counterpart of AI x-risk, which is already in common use.
comment by RHollerith (rhollerith_dot_com) · 2023-04-29T18:41:18.294Z · LW(p) · GW(p)
I used "AI notkilleveryoneism" a few times in public comments. Since I was never attached to the term, I am willing to switch to to "AI extinction risk".
comment by Dagon · 2023-04-28T20:36:52.616Z · LW(p) · GW(p)
I'm happy that the term doesn't seem to be catching on in the circles I frequent - I've seen it mentioned here a few times, mostly in a negative or questioning stance (like this post), but I haven't seen it used in a way that's direct and non-aware-of-oddity.
I suspect that's because it's intentionally chosen to be awkward and childish, so it's unlikely to get co-opted or misinterpreted.
comment by Evan R. Murphy · 2023-05-01T18:41:00.613Z · LW(p) · GW(p)
A few other possible terms to add to the brainstorm:
- AI massive catastrophic risks
- AI global catastrophic risks
- AI catastrophic misalignment risks
- AI catastrophic accident risks (paired with "AI catastrophic misuse risks")
- AI weapons of mass destruction (WMDs) - Pro: a well-known term. Con: strongly connotes misuse, so it may be useful for that category but would probably be confusing for misalignment risks
comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-04-28T23:09:20.585Z · LW(p) · GW(p)
Also, I can't resist a little meme reference here... https://knowyourmeme.com/memes/thats-the-joke
comment by Shmi (shminux) · 2023-04-28T22:56:37.401Z · LW(p) · GW(p)
I sort of think of this movement as "anti-extinctionism".
comment by avturchin · 2023-04-28T20:56:19.282Z · LW(p) · GW(p)
I like the wording "AI global safety", which means that AI will not cause global catastrophes.
"notkilleveryoneism" may be technically true in the world where only 5 people survive.
↑ comment by Capybasilisk · 2023-04-30T07:35:48.467Z · LW(p) · GW(p)
may be technically true in the world where only 5 people survive
Like Harlan Ellison's short story, "I Have No Mouth, And I Must Scream".
comment by 1a3orn · 2023-04-28T22:13:46.024Z · LW(p) · GW(p)
Another issue with "AI notkilleveryoneism" is that it is most easily accomplished by never building AI.
Maximizing strictly against that utility function means that we are guaranteed to never build AI, because however low the risk from building AI may be, the risk from never building it is lower still. (And at least some people around here have said things to the effect of "yeah, our world just shouldn't ever build AI, we cannot handle it.")
If you think that a world where AI is never built sucks compared to one where it is built and makes the world better, and that the latter is possible, it would make sense to object to the terminology for that reason -- it would make sense to be reluctant to join a movement beneath that banner.
↑ comment by the gears to ascension (lahwran) · 2023-04-28T22:30:15.715Z · LW(p) · GW(p)
utopia notkilleveryoneism
there I fixed it