Posts

Californians, tell your reps to vote yes on SB 1047! 2024-08-12T19:50:09.817Z
Holly Elmore and Rob Miles dialogue on AI Safety Advocacy 2023-10-20T21:04:32.645Z
TOMORROW: the largest AI Safety protest ever! 2023-10-20T18:15:18.276Z
Global Pause AI Protest 10/21 2023-10-14T03:20:27.937Z
Protest against Meta's irreversible proliferation (Sept 29, San Francisco) 2023-09-19T23:40:30.202Z
Holly_Elmore's Shortform 2023-06-18T11:54:18.790Z
Seeking beta readers who are ignorant of biology but knowledgeable about AI safety 2022-07-27T23:02:57.192Z
Virtue signaling is sometimes the best or the only metric we have 2022-04-28T04:52:53.884Z
Instead of "I'm anxious," try "I feel threatened" 2019-06-28T05:24:52.593Z

Comments

Comment by Holly_Elmore on The Sorry State of AI X-Risk Advocacy, and Thoughts on Doing Better · 2025-02-24T21:23:22.527Z · LW · GW

Yeah I suspect that these one-shot big protests are drawing on a history of organizing in those or preceding fields. The Women’s March coalition comes together all for one big event, but it draws on a far deeper history of small demonstrations and deliberate organizing to make it to that point, is my point. Idk about Free Internet but I would bet it leaned on Free Speech organizing and advocacy.

I sure wish someone would put on a large AI Safety protest if they know a way to do this in one leap. If I got a sponsor for a concert or some other draw, then perhaps I could see a larger thing happening quickly in the family of AI Safety protests, but I’d like to keep the brand pretty earnest and message-focused.

I have to note, based on our history, I interpret your posts as attacking, like the subtext is that I’m just not a good organizer and, if you wanted to, you could organize a way bigger movement way faster. If that’s true, I wish you would! I’m trying my best with my understanding of how this can work for me and I wish more people like you were embracing broad messaging like protests.

Comment by Holly_Elmore on The Sorry State of AI X-Risk Advocacy, and Thoughts on Doing Better · 2025-02-23T20:54:05.288Z · LW · GW

?

I’m saying he’s projecting his biases onto others. He clearly does think PauseAI rhymes with the Unabomber somehow, even if he personally knows better. The weird pro-tech vs. anti-tech dichotomy, and especially thinking that others are blanketly anti-tech, is very rationalist.

Comment by Holly_Elmore on The Sorry State of AI X-Risk Advocacy, and Thoughts on Doing Better · 2025-02-23T20:51:32.034Z · LW · GW

Do you think those causes never had organizing before the big protest? 

Comment by Holly_Elmore on The Sorry State of AI X-Risk Advocacy, and Thoughts on Doing Better · 2025-02-23T20:50:00.816Z · LW · GW

Yeah I unintentionally baited the “not always” rationalist reflex by talking normally 

Comment by Holly_Elmore on The Sorry State of AI X-Risk Advocacy, and Thoughts on Doing Better · 2025-02-23T20:39:38.472Z · LW · GW

I think the relevant question is how often social movements begin with huge protests, and that’s exceedingly rare. It’s effective to create the impression that the people just rose up, but there’s basically always organizing groundwork for that to take off.

Comment by Holly_Elmore on The Sorry State of AI X-Risk Advocacy, and Thoughts on Doing Better · 2025-02-23T20:31:43.347Z · LW · GW

Do you guys seriously think that big protests just materialize?

Comment by Holly_Elmore on The Sorry State of AI X-Risk Advocacy, and Thoughts on Doing Better · 2025-02-23T20:31:00.984Z · LW · GW

Yeah the SF protests have stayed roughly constant in attendance (25-40), but we have more locations now and have put a lot more infrastructure in place.

Comment by Holly_Elmore on The Sorry State of AI X-Risk Advocacy, and Thoughts on Doing Better · 2025-02-23T20:28:05.274Z · LW · GW

The thing is there isn’t a great dataset— even with historical case studies where the primary results have been achieved, there are a million uncontrolled variables and we don’t and will never have experimentally established causation. But, yes, I’m confident in my model of social change.

What leapt out to me about your model was that it was very focused on how an observer of the protests would react with a rationalist worldview. You didn’t seem to have given much thought to the breadth of social movements and how a diverse public would have experienced them. Like, most people aren’t gonna think PauseAI is anti-tech in general and therefore similar to the Unabomber. Rationalists think that way, and few others.

Comment by Holly_Elmore on The Sorry State of AI X-Risk Advocacy, and Thoughts on Doing Better · 2025-02-23T19:34:47.600Z · LW · GW

Sounds like you are saying that you have those associations, and I still see no evidence to justify your level of concern.

Comment by Holly_Elmore on The Sorry State of AI X-Risk Advocacy, and Thoughts on Doing Better · 2025-02-23T06:29:15.977Z · LW · GW

Small protests are the only way to get to big protests, and I don’t think there’s a significant risk of backfire or cringe reaction making trying worse than not trying. It’s the backfire supposition that is baseless.

Comment by Holly_Elmore on The Sorry State of AI X-Risk Advocacy, and Thoughts on Doing Better · 2025-02-23T03:57:53.974Z · LW · GW

Appreciate your conclusion tho— that reaching the public is our best shot. Fortunately, different approaches are generally multiplicative and complementary. 

Comment by Holly_Elmore on The Sorry State of AI X-Risk Advocacy, and Thoughts on Doing Better · 2025-02-23T03:55:09.958Z · LW · GW

People usually say this when they personally don’t want to be associated with small protests. 

Comment by Holly_Elmore on The Sorry State of AI X-Risk Advocacy, and Thoughts on Doing Better · 2025-02-23T03:54:18.275Z · LW · GW
  • As-is, this is mostly going to make people's first exposure to AI X-risk be "those crazy fringe protestors". See my initial summary regarding effective persuasion: that would be lethal, gravely sabotaging our subsequent persuasion efforts.

Pretty strong conclusion with no evidence.

Comment by Holly_Elmore on Everywhere I Look, I See Kat Woods · 2025-01-21T19:45:57.532Z · LW · GW

Yeah, this is the first time I’ve commented on LessWrong in months and I would prefer to just be out of here. But the OP was such nasty mean-girl bullying that, when someone showed it to me, I wanted to push back.

Comment by Holly_Elmore on Everywhere I Look, I See Kat Woods · 2025-01-20T22:26:03.352Z · LW · GW

Come on, William. "But they said their criticism of this person's reputation wasn't personal" is not good enough. It's like calling "no take backs" or something.

Comment by Holly_Elmore on Everywhere I Look, I See Kat Woods · 2025-01-20T20:02:08.832Z · LW · GW

I have a history in animal activism (both EA and mainstream) and I think PETA has been massively positive by pushing the Overton window. People think PETA isn't working bc they feel angry at PETA when they feel judged or accused, but they update on how it's okay to treat animals, and that's the point. More moderate groups like the Humane Society get the credit, but it takes an ecosystem. You don't have to be popular and well-liked to push the Overton window. You also don't have to be a group that people want to identify with. 

But I don't think PETA's an accurate comparison for Kat. It seems like you're comparing Kat and PETA bc you would be embarrassed to be implicated by both, not bc they have the same tactics or extremity of message. And then the claim that other people will be turned off or misinformed becomes a virtuous pretext to get them and their ideas away from your social group and identity. But you haven't open-mindedly tried to discover what's good for the cause. You're just using your kneejerk reaction to justify imposing your preferences.

There's a missing mood here-- you're not interested in learning if Kat's strategy is effective at AI Safety. You're just asserting that what you like would be the best for saving everyone's lives too and don't really seem concerned about getting the right answer to the larger question.

Again, I have contempt for treating moral issues like a matter of ingroup coolness. This is the banality of evil as far as I'm concerned. It's natural for humans, but you can do better. The LessWrong community is supposed to help people not do this, but many here aren't honest with themselves about what they get out of AI Safety, which is something very similar to what you've expressed in this post (a gatekept community, feeling smart, a techno-utopian aesthetic), instead of trying to discover in an open-minded way what's actually the right approach to help the world.

Comment by Holly_Elmore on Everywhere I Look, I See Kat Woods · 2025-01-18T00:04:57.888Z · LW · GW

Yeah, actually the employees of Lightcone have led the charge in trying to tear down Kat. It's you who has the better standards, Maxwell, not this site.

Comment by Holly_Elmore on Everywhere I Look, I See Kat Woods · 2025-01-18T00:03:46.908Z · LW · GW

I'm getting a strong current of “being smart and having interesting and current tastes is more important than trying to combat AI Danger, and I want all my online spaces to reflect this” from this. You even seem upset that Kat is contaminating subreddits that used to not be about Safety with Safety content… Like you’re mad about progress in the embrace of AI Safety. You critique her for making millennial memes as if millennials don’t exist anymore (LessWrong is millennial and older) and content should only be for you.

You seem kinda self-aware of this at one point, but doesn’t that seem really petty and selfish of you?

I appreciate how upfront you are here, bc a lot of people who feel the same way disguise it behind moralistic or technical arguments. And your clarity should make it easier for you to get over yourself and come to your senses.

Comment by Holly_Elmore on Californians, tell your reps to vote yes on SB 1047! · 2024-08-20T23:01:39.202Z · LW · GW

Meritorious!

Comment by Holly_Elmore on Californians, tell your reps to vote yes on SB 1047! · 2024-08-14T07:20:04.680Z · LW · GW

I take it you've already called, Oli?

Comment by Holly_Elmore on Californians, tell your reps to vote yes on SB 1047! · 2024-08-14T07:15:34.119Z · LW · GW

The bill is in danger of not passing Appropriations because of lobbying and misinformation. That's what calling helps address. Calling does not make SB 1047 cheaper, and therefore does not address the Suspense File aspects of what it's doing in Appropriations. 

Comment by Holly_Elmore on Californians, tell your reps to vote yes on SB 1047! · 2024-08-14T06:34:22.376Z · LW · GW

Why is "dishonesty" your choice of words here? Our mistake cut against our goal of getting people to call at an impactful time. It wasn't manipulative. It was merely mistaken. I understand holding sloppiness against us but not "dishonesty". 

I think the lack of charity is probably related to "activism dumb".

Comment by Holly_Elmore on Californians, tell your reps to vote yes on SB 1047! · 2024-08-14T06:10:03.223Z · LW · GW

It was corrected.

Comment by Holly_Elmore on Sam Altman fired from OpenAI · 2023-11-18T03:42:49.769Z · LW · GW

What kind of securities fraud could he have committed? 

Comment by Holly_Elmore on Holly Elmore and Rob Miles dialogue on AI Safety Advocacy · 2023-11-08T06:47:39.871Z · LW · GW

  • No, sacrificing truth is fundamentally an act of self-deception. It is making yourself a man who believes a falsehood, or has a disregard for the truth. It is Gandhi taking the murder-pill. That is what I consider irreversible.

This is what I was talking about, or the general thing I had in mind, and I think it is reversible. Not a good idea, but I think people who have ever self-deceived or wanted to believe something convenient have come back around to wanting to know the truth. I also think people can be truthseeking in some domains while self-deceiving in others. Perhaps if this weren’t the case, it would be easier to draw lines for acceptable behavior, but I think that unfortunately it isn’t.

This is all rather beside my original point about being willing to speak more plainly, but I think you get that.

Comment by Holly_Elmore on If a little is good, is more better? · 2023-11-04T07:22:52.719Z · LW · GW

I get the sense that "but Google and textbooks exist" is more of a deontological argument, like if the information is public at all "the cat's out of the bag" and it's unfair to penalize LLMs bc they didn't cross any new lines, just increased accessibility.

Comment by Holly_Elmore on Holly Elmore and Rob Miles dialogue on AI Safety Advocacy · 2023-10-26T23:12:59.854Z · LW · GW

Does that really seem true to you? Do you have no memories of sacrificing truth for something else you wanted when you were a child, say? I'm not saying it's just fine to sacrifice truth but it seems false to me to say that people never return to seeking the truth after deceiving themselves, much less after trying on different communication styles or norms. If that were true I feel like no one could ever be rational at all. 

Comment by Holly_Elmore on TOMORROW: the largest AI Safety protest ever! · 2023-10-23T17:30:01.222Z · LW · GW

That’s why I said “financially cheap”. They are expensive for the organizer in terms of convincing people to volunteer, and expensive for all attendees in terms of their time and talents, and getting people to put in sweat equity is what makes it an effective demonstration. But per dollar invested they are very effective.

I would venture that the only person who was seriously prevented from doing something else by being involved in this protest was me. Of course there is some time and labor cost for everyone involved. I hope it was complementary to whatever else they do, and, as Ben said, perhaps even allowing them to flex different muscles in an enriching way.

Comment by Holly_Elmore on Holly Elmore and Rob Miles dialogue on AI Safety Advocacy · 2023-10-23T01:43:59.529Z · LW · GW

I’m down for a followup!

Comment by Holly_Elmore on TOMORROW: the largest AI Safety protest ever! · 2023-10-23T01:39:27.418Z · LW · GW

It’s hard to say what the true impact of the events will be at this time, but they went well! I’m going to write a post-mortem covering the short-term outcomes of the SF PauseAI protest yesterday and the Meta protest in September, and post it on EAF/LW.

Considering they are financially cheap to do (each around $2000 if you don’t count my salary), I’d call them pretty successful already. The Meta protest got good media coverage, and it remains to be seen how this one will be covered, since last time most of the coverage happened in the following two weeks.

Comment by Holly_Elmore on TOMORROW: the largest AI Safety protest ever! · 2023-10-23T01:38:36.504Z · LW · GW

Comment by Holly_Elmore on TOMORROW: the largest AI Safety protest ever! · 2023-10-20T22:16:49.320Z · LW · GW

You could share the events with your friends and family who may be nearby, and signal-boost media coverage of the events afterward! If you want to donate to keep me organizing events, I have a GoFundMe (and if anyone wants to give a larger amount, I'm happy to talk about how to do that :D). If you want to organize future events yourself, please DM me. Even putting the pause emoji ⏸️ in your Twitter name helps :)

Here are the participating cities and links:
October 21st (Saturday), in multiple countries

Comment by Holly_Elmore on TOMORROW: the largest AI Safety protest ever! · 2023-10-20T21:34:22.439Z · LW · GW

Personally, I'm interested in targeting hardware development, and that will be among my future advocacy directions. I think it'll be a great issue for corporate campaigns pushing voluntary agreements and for pushing for external regulations simultaneously. This protest is aimed more at governments (attending the UK Summit) and their overall plans for regulating AI, so we're pushing compute governance as a way to most immediately address the creation of frontier models. Imo, hardware tracking at the very least is going to have to be part of enforcing such a limit if it is adopted, and slowing the development of more powerful hardware will be important to keeping an acceptable compute threshold high enough that we're not constantly on the verge of someone illegally getting together enough chips to make something dangerous.

Comment by Holly_Elmore on Holly Elmore and Rob Miles dialogue on AI Safety Advocacy · 2023-10-20T21:26:41.679Z · LW · GW

If you found yourself interested in advocacy, the largest AI Safety protest ever is happening Saturday, October 21st! 

https://www.lesswrong.com/posts/abBtKF857Ejsgg9ab/tomorrow-the-largest-ai-safety-protest-ever 

Comment by Holly_Elmore on The International PauseAI Protest: Activism under uncertainty · 2023-10-14T03:25:16.925Z · LW · GW

Check out the LessWrong event here: https://www.lesswrong.com/events/ZoTkRYdqGuDCnojMW/global-pause-ai-protest-10-21

Comment by Holly_Elmore on Evaluating the historical value misspecification argument · 2023-10-05T20:30:09.576Z · LW · GW

I think you’re correct that the paradigm has changed, Matthew, and that the problems that stood out to MIRI before as possibilities no longer quite fit the situation.

I still think the broader concern MIRI exhibited is correct: namely, that an AI could appear to be aligned but not actually be aligned, and that this may not come to light until it is behaving outside of the context of training/the context in which the command was written. Because of the greater capabilities of an AI, the problem may have to do with differences between superficially similar goals that wouldn’t matter at the human capabilities level.

I’m not sure if the fact that LLMs solve the cauldron-filling problem means that we should consider the whole broader class of problems easier to solve than we thought. Maybe it does. But given the massive stakes of the issue, I think we ought to treat not knowing whether LLMs will always behave as intended OOD as a live problem.

Comment by Holly_Elmore on Protest against Meta's irreversible proliferation (Sept 29, San Francisco) · 2023-09-27T16:59:10.198Z · LW · GW

Change log: I removed the point about Meta inaccurately calling itself "open source" because it was confusing. 

Comment by Holly_Elmore on Protest against Meta's irreversible proliferation (Sept 29, San Francisco) · 2023-09-22T18:51:38.931Z · LW · GW

Particularly in the rationalist community it seems like protesting is seen as a very outgroup thing to do. But why should that be? Good on you for expanding your comfort zone-- hope to see you there :)

Comment by Holly_Elmore on Protest against Meta's irreversible proliferation (Sept 29, San Francisco) · 2023-09-22T18:48:36.242Z · LW · GW

^ all good points, but I think the biggest thing here is the policy of sharing weights continuing into the future with more powerful models. 

Comment by Holly_Elmore on Protest against Meta's irreversible proliferation (Sept 29, San Francisco) · 2023-09-21T23:50:11.696Z · LW · GW

Yeah, I’ve been weighing a lot whether big tent approaches are something I can pull off at this stage or whether I should stick to “Pause AI”. The Meta protest is kind of an experiment in that regard, and it has already been harder than I expected to get the message about irreversible proliferation across well. Pause is sort of automatically a big tent because it would address all AI harms. People can be very aligned on Pause as a policy without having the same motivations. Not releasing model weights is more of a one-off issue and requires crossing a lot of inferential distance even with knowledgeable people. So I’ll probably keep the next several events focused on Pause, a message much better suited to advocacy.

Comment by Holly_Elmore on Protest against Meta's irreversible proliferation (Sept 29, San Francisco) · 2023-09-21T23:45:19.731Z · LW · GW

Yeah, I’m afraid of this happening with AI even as the danger becomes clearer. It’s one reason we’re in a really important window for setting policy.

Comment by Holly_Elmore on Protest against Meta's irreversible proliferation (Sept 29, San Francisco) · 2023-09-21T06:49:42.879Z · LW · GW

Reducing the harm of irreversible proliferation potentially addresses almost all AI harms, but my motivating concern is x-risk.

Comment by Holly_Elmore on Protest against Meta's irreversible proliferation (Sept 29, San Francisco) · 2023-09-21T06:48:12.806Z · LW · GW

This strikes me as the kind of political thinking I think you’re trying to avoid. Contempt is not good for thought. Advocacy is not the only way to be tempted to lower your epistemic standards. I think you’re doing it right now when you other me or this type of intervention.

Comment by Holly_Elmore on Protest against Meta's irreversible proliferation (Sept 29, San Francisco) · 2023-09-20T06:34:06.381Z · LW · GW

I commend your introspection on this.

Comment by Holly_Elmore on Protest against Meta's irreversible proliferation (Sept 29, San Francisco) · 2023-09-20T06:30:06.532Z · LW · GW

I agree with your assessment of the situation a lot, but I disagree that there is all that much controversy about this issue in the broader public. There is a lot of controversy on LessWrong, and in tech, but the public as a whole is in favor of slowing down and regulating AI development. (Although other AI companies think sharing weights is really irresponsible, and there are anti-competitive issues with Llama 2’s ToS, which is why it isn’t actually open source.) https://theaipi.org/poll-shows-overwhelming-concern-about-risks-from-ai-as-new-institute-launches-to-understand-public-opinion-and-advocate-for-responsible-ai-policies/

The public doesn’t understand the risks of sharing model weights so getting media attention to this issue will be helpful.

Comment by Holly_Elmore on Protest against Meta's irreversible proliferation (Sept 29, San Francisco) · 2023-09-20T00:29:28.100Z · LW · GW

I actually did not realize they released the base model. There's research showing how easy it is to remove the safety fine-tuning, which is where I got the framing (and probably Zvi did too), but perhaps that was more of a proof of concept than the main concern in this case.

The concept of being able to remove fine-tuning is pretty important for safety, but I will change my wording where possible to also mention that it is bad to release the base model without any safety fine-tuning. I just asked to download Llama 2, so I'll see what options they give.

Comment by Holly_Elmore on Contra Yudkowsky on Epistemic Conduct for Author Criticism · 2023-09-13T19:42:54.611Z · LW · GW

Yeah, it felt like Eliezer was rounding off all of the bad faith in the post to this one stylistic/etiquette breach, but he didn't properly formulate the one rule that was supposedly violated. 

Comment by Holly_Elmore on Introducing the Center for AI Policy (& we're hiring!) · 2023-08-30T05:37:00.116Z · LW · GW

Sorry, what harmful thing would this proposal do? Require people to have licenses to fine-tune Llama 2? Why is that so crazy?

Comment by Holly_Elmore on Introducing the Center for AI Policy (& we're hiring!) · 2023-08-28T21:18:57.071Z · LW · GW

I endorse!

Comment by Holly_Elmore on Holly_Elmore's Shortform · 2023-08-19T19:28:23.131Z · LW · GW

A weakness I often observe in my numerous rationalist friends is "rationalizing and making excuses to feel like doing the intellectually cool thing is the useful or moral thing". Fwiw. If you want to do the cool thing, own it, own the consequences, and own the way that changes how you can honestly see yourself.