I think that veganism is deontological, or at least has a deontological component to it; it relies on the act-omission distinction.
Imagine a world in which child sex abuse was as common and accepted as animal exploitation is in ours. In this world of rampant child sex abuse, some people would adopt protectchildrenism, the ethical stance that commits you to avoid causing the sexual exploitation (and suffering?) of human children as far as practicable – i.e. analogous to my definition of veganism.
It seems inaccurate/misleading for a child sex abuser to call themselves a protectchildrenist on the grounds that they save more children from sexual abuse (by donating to effective child protection charities) than they themselves sexually abuse.
Also, it seems morally worse for the child sex abuser to a) keep abusing children while saving, through donations, more children than they themselves abuse, than to b) not abuse any children but also not donate; even though the world in which a) happens is a better world than the world in which b) happens (all else being equal).
Hi, I'm Alistair – I'm mostly new to LessWrong and rationality, but have been interested in, and to differing extents involved in, effective altruism for about five years now.
Some beliefs I hold:
- Veganism is a moral obligation, i.e. not being vegan is morally unjustifiable
  - I define veganism not as a 100% plant-based diet, but rather as the ethical stance that commits one to avoid causing the exploitation (non-consensual use) and suffering of sentient non-humans, as far as practicable
    - This definition is inspired by, but in my view better than, the Vegan Society's definition
    - "As far as practicable" is ambiguous – this is deliberate
    - There may also be some trade-offs between exploitation and suffering when you take into account e.g. crop deaths and wild animal suffering
- Transformative AI is probably coming, and it's probably coming soon (maybe 70% chance we start seeing a likely irreversible paradigm shift in the world by 2027)
- If the way we treat members of other species relative to whom we are superintelligent is anything to go by, AGI/ASI/TAI could go really badly for us and all sentients
- As far as I can tell, the only solution we currently have is not building AGI until we know it won't e.g. invasively experiment on us (i.e. PauseAI)
I'm currently lead organiser of the AI, Animals, & Digital Minds conference in London in June 2025, and would love to speak to people who are interested in the intersection of those three things, especially if they're in London.
I'll be co-working in the LEAH Coworking Space and the Ambitious Impact office in London in 2025, and will be in the Bay Area in California in Feb-Mar for EAG Bay Area (if accepted) and the AI for Animals conference there.
Please reach out by DM!
Interested in:
- Sentience- & suffering-focused ethics; sentientism; painism; s-risks
- Animal ethics & abolitionism
- AI safety & governance
- Activism, direct action & social change
- Trying to make transformative AI go less badly for sentient beings, regardless of species and substrate
Bio:
- From London
- BA in linguistics at the University of Cambridge
- Almost five years in the British Army as an officer
- MSc in global governance and ethics at University College London
- One year working full time in environmental campaigning and animal rights activism at Plant-Based Universities / Animal Rising
- Now pivoting to the (future) impact of AI on biologically and artificially sentient beings
> Protests create an us vs. them mentality. Two groups are pitted against each other, with the protestors typically cast in the role of victims who are demanding to be heard.
Dilemma ("choose a side") is a principle of non-violent direct action; why is an us vs. them mentality necessarily a bad thing? Do you oppose protest in principle?
> If people push OpenAI to be for or against AI development, they are going to be for development. A protest, as I see it, risks making them dig in to a position and be less open to cooperating on safety efforts.
Would you say this about the climate movement pressuring fossil fuel companies to transition away from fossil fuels?
> I'd rather see continued behind the scenes work to get them to be more cautious, e.g. like the work ARC Evals is doing. It seems more likely we can have a positive influence by working with them rather than directly opposing them.
I think we need both – here's evidence for the radical flank effect.
> It's not clear to me there's a mass movement to oppose what OpenAI is doing, so it's hard for me to see what positive impact a protest would have.
Strongly agree – and this is how all mass movements start, no?
> You've done basically zero of the hard work required to rally people behind a successful protest (other than write this announcement)
Isn't this how most social movements start – with a single protest, attended by a small number of people?
> You'd really need a concrete policy ask before I would think of joining your protest
I think this is why Percy posted here: to discuss what that might look like! And perhaps he doesn't need specific demands – look at Occupy Wall Street as an example of a movement with underspecified/vague demands that was effective in some ways (and failed in others).
> More generally I don't really like the dynamic where the first person to say "me" is suddenly able to direct a bunch of free-energy
Again – surely this is how all social movements start? This picket won't be perfect; in my view it is highly likely to be better than nothing.
> I do think you could do something smarter than this attempt, and try harder to figure out what might work.
Do you have any suggestions?