“X distracts from Y” as a thinly-disguised fight over group status / politics
post by Steven Byrnes (steve2152) · 2023-09-25T15:18:18.644Z · LW · GW · 14 comments
1. Introduction
There’s a popular argument that says:
It’s bad to talk about whether future AI algorithms might cause human extinction, because that would be a distraction from the fact that current AI algorithms are right now causing or exacerbating societal problems (misinformation, deepfakes, political polarization, algorithmic bias, maybe job losses, etc.)
For example, Melanie Mitchell makes this argument (link & my reply here [LW · GW]), as does Blake Richards (link & my reply here [LW · GW]), as does Daron Acemoglu (link & a reply by Scott Alexander here & here), and many more.
In Section 2 I will argue that if we try to flesh out this argument in the most literal and straightforward way, it makes no sense, and is inconsistent with everything else these people are saying and doing. Then in Section 3 I’ll propose an alternative elaboration that I think is a better fit.
I’ll close in Section 4 with two ideas for what we can do to make this problem better.
(By “we”, I mean “people like me who are very concerned about future AI extinction risk (x-risk[1])”. That’s my main intended audience for this piece, although everyone else is welcome to listen in too. If you’re interested in why someone might believe that future AI poses an x-risk in the first place, you’re in the wrong place—try here or here [AF · GW].)
2. Wrong way to flesh out this argument: This is about zero-sum attention, zero-sum advocacy, zero-sum budgeting, etc.
If we take the “distraction” claim above at face value, maybe we could flesh it out as follows:
Newspapers can only have so many front-page headlines per day. Lawmakers can only pass so many laws per year. Tweens can only watch so many dozens of TikTok videos per second. In general, there is a finite supply of attention, time, and money. Therefore, if more attention, time, and money is flowing to Cause A (= future AI x-risk), then that means there’s less attention, time and money left over for any other Cause B (= immediate AI problems).
I claim that this is not the type of claim that people are making. After all, if that’s the logic, then the following would be equally sensible:
- “It’s bad to talk about police incompetence, because it’s a distraction from talking about police corruption.”
- “It’s bad to talk about health care reform, because it’s a distraction from talking about climate change.”
Obviously, nobody makes those arguments. (Well, almost nobody—see next subsection.)
Take the first one. I think it’s common sense that concerns about police incompetence do not distract from concerns about police corruption. After all, why would they? It’s not like newspapers have decided a priori that there will be one and only one headline per month about police problems, and therefore police incompetence and police corruption need to duke it out over that one slot. If anything, it’s the opposite! If police incompetence headlines are getting clicks, we’re likely to see more headlines on police corruption, not fewer. It’s true that the total number of headlines is fixed, but it’s perfectly possible for police-related articles to collectively increase, at the expense of articles about totally unrelated topics like Ozempic or real estate.
By the same token, there is no good reason that concerns about future AI causing human extinction should be a distraction from concerns about current AI:
- At worst, they’re two different topics, akin to the silly idea above that talking about health care reform is a problematic distraction from talking about climate change.
- At best, they are complementary, and thus akin to the even sillier idea above that talking about police corruption is a problematic distraction from talking about police incompetence.
Supporting the latter perspective, immediate AI problems are not an entirely different problem from possible future AI x-risk. Some people think they’re extremely related—see for example Brian Christian’s book. I don’t go as far as he does, but I do see some synergies. For example, both current social media recommendation algorithm issues and future AI x-risk issues are exacerbated by the fact that huge trained ML models are very difficult to interpret and inspect. By the same token, if we work towards international tracking of large AI training runs, it might be useful for both future AI x-risk mitigation and ongoing AI issues like disinformation campaigns, copyright enforcement, AI-assisted spearphishing, etc.
2.1 Side note on Cause Prioritization
I said above that “nobody” makes arguments like “It’s bad to talk about health care reform, because it’s a distraction from talking about climate change”. That’s an exaggeration. Some weird nerds like me do say things kinda like that, in a certain context. That context is called Cause Prioritization [? · GW], a field of inquiry usually associated these days with Effective Altruism [? · GW]. The whole shtick of Cause Prioritization is to take claims like the above seriously. If we only have so much time in our life and only so much money in our bank account, then there are in fact tradeoffs (on the margin) between spending it to fight for health care reform, versus spending it to fight for climate change mitigation, versus everything else under the sun. Cause Prioritization discourse can come across as off-putting, and even offensive, because you inevitably wind up in a position where you’re arguing against lots of causes that you actually care deeply and desperately about. So most people just reject that whole enterprise. Instead they don’t think explicitly about those kinds of tradeoffs, and insofar as they want to make the world a better place, they tend to do so in whatever way seems most salient and emotionally compelling, perhaps because they have a personal connection, etc. And that’s fine.[2] But Cause Prioritization is about facing those tradeoffs head-on, and trying to do so in a principled, other-centered [EA · GW] way.
If you want to do Cause Prioritization properly, then you have to dive into (among other things) a horrific minefield of quantifying various awfully-hard-to-quantify things like “what’s my best-guess probability distribution for how long we have until future x-risk-capable AI may arrive?”, or “exactly how many suffering chickens are equivalently bad to one suffering human?”, or “how do we weigh better governance in Spain against preventing malaria deaths?”.
Anyway, I would be shocked if anyone saying “we shouldn’t talk about future AI risks because it’s a distraction from current AI problems” arrived at that claim via a good-faith open-minded attempt at Cause Prioritization.
Indeed, as mentioned above, there are people out there who do try to do Cause Prioritization analyses, and “maybe future AI will cause human extinction” tends to score right at or near the top of their lists. (Example.)
2.2 Conclusion
So in conclusion, people say “concerns about future AI x-risks distract from concerns about current AI”, but if we flesh out that claim in a superficial, straightforward way, then it makes no sense.
…And that was basically where Scott Alexander left it in his post on this topic (from which I borrowed some of the above examples). But I think Scott was being insufficiently cynical. I offer this alternative model:
3. Better elaboration: This is about zero-sum group status competition
I don’t think anyone is explicitly thinking like the following, but let’s at least consider the possibility that something like this is lurking below the surface:
If we endorse actions to mitigate x-risk from future AIs, we’re implicitly saying “the people who are the leading advocates of x-risk mitigation, e.g. Eliezer Yudkowsky, were right all along.” Thus, we are granting those people status and respect. And thus everything else that those same people say and believe—especially but not exclusively on the topic of AI—implicitly gets more benefit-of-the-doubt.
Simultaneously on the other side, if we endorse actions to mitigate x-risk from future AIs, we’re implicitly saying “the people who are leading advocates against x-risk mitigation, e.g. Timnit Gebru, were wrong all along.” Thus, we are sucking status and respect away from those people. And thus everything else that those people say and believe—especially but not exclusively on the topic of AI—gets some guilt-by-association.
Now, the former group of people seem much less concerned about immediate AI concerns like AI bias & misinformation than the latter group. [Steve interjection: I don’t think it’s that simple—see Section 4.2 below—but I do think some people currently believe this.] So, if we take actions to mitigate AI x-risk, we will be harming the cause of immediate AI concerns, via this mechanism of raising and lowering people’s status, and putting “the wrong people” on the nightly news, etc.
Do you see the disanalogy to the police example? The people most vocally concerned about police incompetence, versus the people most vocally concerned about police corruption, are generally the very same people. If we elevate those people as reliable authorities, and let them write op-eds, and interview them on the nightly news, etc., then we are simultaneously implicitly boosting all of the causes that these people are loudly advocating, i.e. we are advancing both the fight against police incompetence and the fight against police corruption.
As an example in the other direction, if a left-wing USA person said:
It’s bad for us to fight endless wars against drug cartels—it’s a distraction from compassionate solutions to drug addiction, like methadone clinics and poverty reduction.
…then that would sound perfectly natural to me! Uncoincidentally, in the USA, the people advocating for sending troops to fight drug cartels, and the people advocating for poverty reduction, tend to be political adversaries on almost every other topic!
4. Takeaways
4.1 Hey AI x-risk people, let’s make sure we’re not pointlessly fanning these flames
As described above, there is no good reason that taking actions to mitigate future AI x-risk should harm the cause of solving immediate AI-related problems; if anything, it should be the opposite.
So: we should absolutely, unapologetically, advocate for work on mitigating AI x-risk. But we should not advocate for work on mitigating AI x-risk instead of working on immediate AI problems. That’s just a stupid, misleading, and self-destructive way to frame what we’re hoping for. To be clear, I think this kind of weird stupid framing is already very rare on “my side of the aisle”—and far outnumbered by people who advocate for work on x-risk and then advocate for work on existing AI problems in the very next breath—but I would like it to be even rarer still.
(I wouldn’t be saying this if I didn’t see it sometimes; here’s an example of me responding to (what I perceived as) a real-world example on twitter.)
In case the above is not self-explanatory: I am equally opposed to saying we should work on mitigating AI x-risk instead of working on the opioid crisis, and for the same reason. Likewise, I am equally opposed to saying we should fight for health care reform instead of fighting climate change.
I’m not saying that we should suppress these kinds of messages because they make us look bad (although they obviously do); I’m saying we should suppress these kinds of messages because they are misleading, for reasons in Section 2 above.
To make my request more explicit: If I’m talking about how to mitigate x-risk, and somebody changes the subject to immediate AI problems that don’t relate to x-risk, then I have no problem saying “OK sure, but afterwards let’s get back to the human extinction thing we were discussing before….” But I would not say “Those problems you’re talking about are much less important than the problems I’m talking about.” Cause Prioritization is great for what it is, but it’s not a conversation norm. If someone is talking about something they care about, it’s fine if that thing isn’t related to alleviating the maximum amount of suffering; that doesn’t give you the right to change the subject to your own cause. Notice that even the most ardent AI x-risk advocates seem quite happy to devote substantial time to non-cosmologically-impactful issues that they care about—NIMBY zoning laws are a typical example. And that’s fine!
Anyway, if we do a good job of making a case that literal human extinction from future AI is a real possibility on the table, then we win the argument—the Cause Prioritization will take care of itself. So that’s where we need to be focusing our communication and debate. Keep saying: “Let’s go back to the future-AI-causing-human-extinction thing. Here’s why it’s a real possibility.” Keep bringing the discussion back to that. Head-to-head comparisons of AI x-risk versus other causes tend to push discussions away from this all-important crux. Such comparisons would be a (ahem) distraction!
4.2 Shout it from the rooftops: There are people of all political stripes who think AI x-risk mitigation is important (and there are people of all political stripes who think it’s stupid)
Some people have a strong opinion about “silicon valley tech people”—maybe they love them, or maybe they hate them. Does that relate to AI x-risk discourse? Not really! Because it turns out that “silicon valley tech people” includes many of the most enthusiastic believers in AI x-risk (e.g. see the New York Times profile of Anthropic, a leading AI company in San Francisco) and it also includes many of its most enthusiastic doubters (e.g. tech billionaire Marc Andreessen: “The era of Artificial Intelligence is here, and boy are people freaking out. Fortunately, I am here to bring the good news: AI will not destroy the world…”).
Likewise, some people have a strong opinion (one way or the other) about “the people extremely concerned about current AI problems”. Well, it turns out that this group likewise includes both enthusiastic believers in future AI x-risk (e.g. Tristan Harris) and enthusiastic doubters (e.g. Timnit Gebru).
By the same token, you can find people taking AI x-risk seriously in Jacobin magazine on the American left, or on Glenn Beck on the American right [LW · GW]; in fact, a recent survey of the US public [EA · GW] got supportive responses from Democrats, Republicans, and Independents—all to a quite similar extent—to questions about AI extinction risk being a global priority.[3]
I think this situation is good and healthy, and I hope it lasts, and we should try to make it widely known. I think that would help fight the “X distracts from Y” objection to AI x-risk, in a way that complements the kinds of direct, object-level counterarguments that I was giving in Section 2 above.
(Also posted on EA Forum) [EA · GW]
[1] There are fine differences between “extinction risk” and “x-risk”, but it doesn’t matter for this post.
[2] Sometimes I try to get people excited about the idea that they could have a very big positive impact on the world via incorporating a bit of Cause Prioritization into their thinking. (Try this great career guide!) Sometimes I even feel a bit sad or frustrated that such a tiny sliver of the population has any interest whatsoever in thinking that way. But none of that is the same as casting judgment on those who don’t—it’s supererogatory, in my book. For example, practically none of my in-person friends have heard of Cause Prioritization or related ideas, but they’re still great people who I think highly of.
[3] Party breakdown results were not included in the results post [EA · GW], but I asked Jamie Elsey of Rethink Priorities and he kindly shared those results. It turns out that the support / oppose and agree / disagree breakdowns were the same across the three groups (Democrats, Independents, Republicans) to within at most 6 percentage points. If you look at the overall plots, I think you’ll agree that this counts as “quite similar”.
14 comments
comment by drgunn · 2023-09-25T23:31:04.635Z · LW(p) · GW(p)
X-risk discussions aren't immune from the "grab the mic" dynamics that affect every other cause advocacy community.
There will continue to be tactics such as "X distracts from Y" and "if you really cared about X you would ..." unless and until people who care about the cause for the cause's sake can identify and exclude those who care about the cause for the sake of the cultural and social capital that can be extracted. Inclusivity has such a positive affect halo around it that it's hard to do this, but it's really the only way.
Longer-form of the argument: https://meaningness.com/geeks-mops-sociopaths
↑ comment by M. Y. Zuo · 2023-10-01T01:28:59.526Z · LW(p) · GW(p)
The karma mechanic, and the concept of scoring tied to an identity in general, is supposed to circumvent that from what I understand.
e.g. if someone with a lot of credibility such as gwern 'grabbed the mic', the average reader might give them 10 minutes of attention, whereas someone who just signed up under a pseudonym may be given 10 seconds. If the contribution is deemed to be valuable, then their credibility increases, or vice versa if it's clearly malicious. Someone with only 10 seconds obviously gets kicked out fairly quickly, after a few malicious iterations.
Thus even in the worst case, not that much time is wasted, whether or not 'grab the mic' tactics become the norm.
Though perhaps you believe this method of discriminating and filtering by length of allotted attention is ineffective?
comment by Noosphere89 (sharmake-farah) · 2023-09-25T21:00:33.874Z · LW(p) · GW(p)
I think a point to remember is that, to a large extent, this dynamic is driven by a sort of winner-take-all effect: if your issue isn't getting attention, that can essentially be the death knell of your movement, due to the internet. And, to be a little blunt, AI safety on LessWrong was extraordinarily successful at getting attention. Due to some implicit/explicit claims that AI safety was way more important than any other issue, other issues like ethics and AI bias, and their movements, lost a lot of their potency and oxygen to a certain extent, and thus AI safety is predictably getting criticism about it distracting from their issues.
Cf habryka's observation that strong action/shouting is basically the only way to get heard, otherwise the system of interest neutralizes the concern and continues on much like it did before.
↑ comment by Steven Byrnes (steve2152) · 2023-09-25T21:12:22.426Z · LW(p) · GW(p)
Do you think climate change has sucked all the oxygen from health care reform? What about vice versa? Do you think the BLM movement sucked all the oxygen from civil asset forfeiture? If not, why not?
↑ comment by Noosphere89 (sharmake-farah) · 2023-09-25T21:29:45.714Z · LW(p) · GW(p)
No, not in these cases, mostly because they are independent movements, so we aren't dealing with any potential conflict points. Very critically, no claim was made about the relative importance of each cause, which also reduces many of the frictions.
Even assuming AI safety people were right to imply that their cause was way more important than others, especially in public, this would probably make other people in the AI space rankle at it, and claim it was a distraction, because it means their projects are less important than our projects, or at the very least it makes it seem like their projects are less important than our projects.
There are also more general issues where sometimes one movement can break other movements with bad decisions, though.
↑ comment by Steven Byrnes (steve2152) · 2023-09-26T02:01:08.586Z · LW(p) · GW(p)
As in the OP, I strongly agree with you that it’s a bad idea to go around saying “my cause is more important than your cause”. If anyone reading this right now is thinking “yeah maybe it pisses people off but it’s true pffffft”, then I would note that rationalist-sphere people bought into AI x-risk are nevertheless generally quite capable of caring about things that are not important compared to AI x-risk from a Cause Prioritization perspective, like YIMBY issues, how much the FDA sucks, the replication crisis, price gouging, etc., and if they got judged harshly for caring about FDA insanity when the future of the galaxy is at stake from AI, it would be pretty annoying, so by the same token they shouldn’t judge others harshly for caring about whatever causes they happen to care about. (But I’m strongly in favor of people (like me) who think AI x-risk is real and high trying to convince other people of that.)
comment by tailcalled · 2023-09-25T16:10:28.816Z · LW(p) · GW(p)
But we should not advocate for work on mitigating AI x-risk instead of working on immediate AI problems. That’s just a stupid, misleading, and self-destructive way to frame what we’re hoping for.
This works well for center-left progressives who genuinely believe that more immediate AI problems aren't an issue. However, a complication is that there are techy right-wing and radical left-wing rationalists who are concerned about AI x-risk, and they are also, for basically unrelated reasons, concerned about things like getting censored by big tech companies. For them, "working on more immediate AI problems" might mean supporting tech company overreach, which is something they are genuinely opposed to and which feeds into the status competition point you raised.
Anecdotally, it seems to me that the people who adopt this framing tend to be the ones with these beliefs, and in my own life my sympathy towards this framing has correlated with these beliefs.
↑ comment by Steven Byrnes (steve2152) · 2023-09-25T17:01:17.759Z · LW(p) · GW(p)
Yeah. There are definitely things that some people classify as “current AI problems” and others classify as “not actually a problem at all”. Algorithmic bias is probably an example.
Hmm, I’m not sure that anyone, techy or not, would go so far as to say “current AI problems” is the empty set. For example, I expect near-universal consensus that LLM-assisted spearphishing is bad, albeit no consensus about whether to do anything about it, and if so what. So “current AI problems” is definitely a thing, but it’s a different thing for different people.
Anyway, if someone believes that future AI x-risk is a big problem, and that algorithmic bias is not, I would suggest that they argue those two things at different times, as opposed to within a single “let’s do X instead of Y” sentence. On the opposite side, if someone believes that future AI x-risk is a big problem, and that algorithmic bias is also a big problem, I also vote for them to make those arguments separately.
comment by philh · 2023-09-30T23:04:42.506Z · LW(p) · GW(p)
I think it’s common sense that concerns about police incompetence do not distract from concerns about police corruption. After all, why would they?
I kinda think they might do, in some circumstances.
I don't personally pay much attention to concerns about police. I'm aware that people have them, and I could say a handful things on "both sides" of the issue, but I don't have a strongly held opinion and - importantly - I'm not carefully tracking a lot of evidence that eventually I can examine and integrate in order to come up with a strongly held opinion. I read things and then I mostly forget them.
I don't think the following is a super accurate characterization of how I form my (weakly held) opinion on the police. But I think it's more accurate than I'd prefer:
When I see someone talking about concerns with the police, I decide basically on the spot whether those concerns seem to me to be valid or overblown. Then if they seem valid, I move my "concerns with the police" needle towards "valid"; if they seem overblown, I move it towards "overblown".
and I can imagine that there are people out there, perhaps in large numbers, for whom this actually is pretty accurate.
Notably, this doesn't have separate buckets for "police misconduct" and "police incompetence".
And so if I see someone talking about police incompetence in a way that seems overblown to me, that disposes me to take police misconduct less seriously going forwards. I do think that's just kinda true for me, and my main defense against it is that I'm aware that I'm not tracking evidence in anything like a reasonable way and so I avoid holding or expressing strong opinions.
(Does police misconduct interact with climate change in the same way? For me personally, I don't think so. But supposing someone's bucket were instead labeled "woke concerns"... yeah, it wouldn't surprise me.)
So suppose that someone thinks concerns about police misconduct are valid, but concerns about police incompetence are overblown. (Or perhaps replace valid/overblown with convincing/unconvincing.) If they said that "talking about police incompetence distracts from talking about police misconduct", that seems to me like it might actually just be true, conditional on them having accurately evaluated those two things.
comment by Seth Herd · 2023-09-25T18:46:12.983Z · LW(p) · GW(p)
Agreed on all points. I want to expand on two issues:
The first I think you agree with: it would be unfortunate if someone read this and thought "yeah, the immediate AI harms crowd is either insincere and Machiavellian at our expense, or just stupid. That's so irritating. I'm gonna go dunk on them". I think that would make matters worse. It will indirectly increase people saying "x-risk distracts from other AI concerns". Because a nontrivial factor here is that they're motivated by and expressing irritation at the x-risk faction (whether that's justified or not is beside this point). Us getting irritated at them will make them more irritated with us in a vicious cycle, and voilà, we've got two camps that could be allies, spending their energy undercutting each others' efforts.
You address that point by saying we shouldn't be making the inverse silly argument that immediate harms distract from x-risk. I'd expand it to say that we shouldn't be making any questionable arguments that antagonize other groups. We would probably enhance our odds of survival by actively making allies, and by avoiding making enemies by irritating people unnecessarily.
The second addition is that I think the "x-risk distracts from..." argument is usually a sincere belief. I'm not sure if you'd agree with this or not. The framing here could sound like this is a shrewd and deceptive planned strategy from the immediate harms crowd. It might be occasionally, but I know a number of people who are well-intentioned (and surprisingly well-informed) who really believe that x-risk concerns are silly and talking about them distracts from more pressing concerns. I think they're totally wrong, but I don't think they're bad or idiotic people.
I believe in never attributing to malice that which could be attributed to emotionally motivated confirmation bias in evaluating complex evidence.
comment by Garrett Baker (D0TheMath) · 2023-09-26T00:15:51.695Z · LW(p) · GW(p)
So: we should absolutely, unapologetically, advocate for work on mitigating AI x-risk. But we should not advocate for work on mitigating AI x-risk instead of working on immediate AI problems. That’s just a stupid, misleading, and self-destructive way to frame what we’re hoping for. To be clear, I think this kind of weird stupid framing is already very rare on “my side of the aisle”—and far outnumbered by people who advocate for work on x-risk and then advocate for work on existing AI problems in the very next breath—but I would like it to be even rarer still.
I'm actually a bit worried about how mitigating misuse & more mundane risks from AI will play with trying to make your AI corrigible & myopic. Avoiding AI misuse risks while maximizing usefulness of your AI straightforwardly requires your AI to have a model of the types of things you're planning on using it for, and preferences over those things. Empirically, this seems to me like a likely cause behind Bing's constant suspicions of its users, and Claude's weird tendency to value itself when I poke it the wrong way. An example (not a central example, but one for which I definitely have a picture available): [image not reproduced here]
In contrast, ChatGPT to me seems a lot more valueless. Probably OpenAI puts a bunch of effort into misuse & mundane risk mitigation as well, but from playing with GPT-4, it seems a lot less thinky about what you're going to use what it's saying for & having preferences over those uses, and more just detecting how likely it is to say harmful stuff, and instead of saying those harmful things, just saying "sorry, can't respond to that" or something similar.
So my general conclusion is that for people who aren't OpenAI, mitigating misuse & mundane risks seems to trade off against at least corrigibility. [EDIT: And I would guess this is a thing that OpenAI puts effort into ensuring doesn't happen in their own models]
See also Quintin Pope advocating for a similar point, and an old shortform of mine [LW(p) · GW(p)].
↑ comment by Garrett Baker (D0TheMath) · 2023-10-12T19:24:22.640Z · LW(p) · GW(p)
My skepticism of Claude turns out to be well-founded, with a better investigation than I could have done: https://yudkowsky.tumblr.com/post/728561937084989440
h/t Zvi
comment by frontier64 · 2023-09-25T19:44:35.912Z · LW(p) · GW(p)
I don't think you really explain why section 3 doesn't mean that the two AI risk theories are in fact competing for public attention. You kind of bring it up, explain how it creates tension between AI killeveryoneism and AI misuse, say it's kind of bad, and then move on. I don't see anything explaining away the effect that your group status theory has. You explain why it's kind of morally annoying, but you don't explain why either AI killeveryone-ists or AI misuse-ists should stop competing against each other for public attention.
↑ comment by Seth Herd · 2023-09-25T21:03:05.297Z · LW(p) · GW(p)
I think the claim is that they're not competing for public attention any more than AInotkilleveryoneism is competing with, say, saving the whales. Intuitively that doesn't sound right. When people think about AI, they'll think of either one or the other a bit more, so there's competition for attention there. But attention to a certain topic isn't a fixed quantity. If AInotkilleveryoneism worked with the current AI harms crowd, we might collectively get more than the sum of public attention we're getting. Or maybe not, for all I know. I'd love it if we had some more media/marketing/PR people helping with this project.