EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem

post by Elizabeth (pktechgirl) · 2023-09-28T23:30:03.390Z · LW · GW · 246 comments

This is a link post for https://acesounderglass.com/2023/09/28/ea-vegan-advocacy-is-not-truthseeking-and-its-everyones-problem/

Contents

  Introduction
  Definitions
  Audience
  How EA vegan advocacy has hindered truthseeking
    Active suppression of inconvenient questions
      Counter-Examples
    Ignore the arguments people are actually making
    Frame control/strong implications not defended/fuzziness
      Counter-Examples
    Sound and fury, signifying no substantial disagreement 
      Counter-Examples
    Bad sources, badly handled 
      Counter-Examples
    Taxing Facebook
    Ignoring known falsehoods until they’re a PR problem
  Why I Care
  What do EA vegan advocates need to do?
  All Effective Altruists need to stand up for our epistemic commons
  Acknowledgments
  Appendix
    Terrible anti-meat article
    Edits

Introduction

Effective altruism prides itself on truthseeking. That pride is justified in the sense that EA is better at truthseeking than most members of its reference category, and unjustified in that it is far from meeting its own standards. We’ve already seen dire consequences of the inability to detect bad actors who deflect investigation into potential problems, but by its nature you can never be sure you’ve found all the damage done by epistemic obfuscation because the point is to be self-cloaking. 

My concern here is for the underlying dynamics of EA’s weak epistemic immune system, not any one instance. But we can’t analyze the problem without real examples, so individual instances need to be talked about. Worse, the examples that are easiest to understand are almost by definition the smallest problems, which makes any scapegoating extra unfair. So don’t.

This post focuses on a single example: vegan advocacy, especially around nutrition. I believe vegan advocacy as a cause has both actively lied and raised the cost for truthseeking, because they were afraid of the consequences of honest investigations. Occasionally there’s a consciously bad actor I can just point to, but mostly this is an emergent phenomenon from people who mean well, and have done good work in other areas. That’s why scapegoating won’t solve the problem: we need something systemic. 

In the next post I’ll do a wider but shallower review of other instances of EA being hurt by the lack of an epistemic immune system. I already have a long list, but it’s not too late for you to share your examples.

Definitions

I picked the words “vegan advocacy” very deliberately. “Vegan” sometimes refers to advocacy and sometimes just to a plant-exclusive diet, so I added “advocacy” to make clear I mean the former.

I chose “advocacy” over “advocates” for most statements because this is a problem with the system. Some vegan advocates are net truthseeking and I hate to impugn them. Others would like to be epistemically virtuous but end up doing harm due to being embedded in an epistemically uncooperative system. Very few people are sitting on a throne of plant-based imitation skulls twirling their mustache thinking about how they’ll fuck up the epistemic commons today. 

When I call for actions I say “advocates” and not “advocacy” because actions are taken by people, even if none of them bear much individual responsibility for the problem. 

I specify “EA vegan advocacy” and not just “vegan advocacy” not because I think mainstream vegan advocacy is better, but because 1. I don’t have time to go after every wrong advocacy group in the world. 2. Advocates within Effective Altruism opted into a higher standard. EA has a right and responsibility to maintain the standards of truth it advocates, even if the rest of the world is too far gone to worry about. 

Audience

If you’re entirely uninvolved in effective altruism you can skip this, it’s inside baseball and there’s a lot of context I don’t get into.

How EA vegan advocacy has hindered truthseeking

EA vegan advocacy has both pushed falsehoods and punished people for investigating questions it doesn’t like. It manages this even for positions that 90%+ of effective altruism and the rest of the world agree with, like “veganism is a constraint”. I don’t believe its arguments convince anyone directly, but they end up having a big impact by making inconvenient beliefs too costly to discuss. This means new entrants to EA are denied half of the argument, and harm themselves due to ignorance.

This section outlines the techniques I’m best able to name and demonstrate. For each technique I’ve included examples. Comments on my own posts are heavily overrepresented because they’re the easiest to find; “go searching through posts on veganism to find the worst examples” didn’t feel like good practice. I did my best to quote and summarize accurately, although I made no attempt to use a representative sample. I think this is fair because a lot of the problem lies in the fact that good comments don’t cancel out bad, especially when the good comments are made in parallel rather than directly arguing with the bad. I’ve linked to the source of every quote and screenshot, so you can (and should) decide for yourself. I’ve also created a list of all of my own posts I’m drawing from, so you can get a holistic view.

My posts:

I should note I quote some commenters and even a few individual comments in more than one section, because they exhibit more than one problem. But if I refer to the same comment multiple times in a row I usually only link to it once, to avoid implying more sources than I have. 

These posts appeared on my blog, LessWrong, and EAForum. In practice the comments I drew from came from LessWrong (white background) and EAForum (black background). I tried to go through those posts and remove all my votes on comments (except the automatic vote for my own comments) so that you could get an honest view of how the community voted without my thumb on the scale, but I’ve probably missed some, especially on older posts. On the main posts, which received a lot of traffic, I stuck to well-upvoted comments, but I included some low (but still positive) karma comments from unpopular posts.

The goal here is to make these anti-truthseeking techniques legible for discussion, not to develop complicated ways to say “I don’t like this”, so when available I’ve included counter-examples. These are comments that look similar to the ones I’m complaining about, but are fine, or at least not suffering from the particular flaw in that section. In doing this I hope to keep the techniques’ definitions narrow.

Active suppression of inconvenient questions

A small but loud subset of vegan advocacy will say outright that you shouldn’t say true things, because it leads to outcomes they dislike. This accusation is even harsher than “not truthseeking”, and would normally be very hard to prove. If I say “you’re saying that because you care more about creating vegans than the health of those you create”, and they say “no I’m not”, I don’t really have a comeback. I can demonstrate that they’re wrong, but not their motivation. Luckily, a few people said the quiet part out loud.

Commenter Martin Soto pushed back very hard on my first nutrition testing study. Finally I asked him outright if he thought it was okay to share true information about vegan nutrition. His response was quite thoughtful and long, so you should really go read the whole thing [LW(p) · GW(p)], but let me share two quotes

[screenshot · LW(p) · GW(p)]

[screenshot · LW(p) · GW(p)]

He goes on to say:

[screenshot · LW(p) · GW(p)]

[screenshot · LW(p) · GW(p)]

And in a later comment

[screenshot · LW · GW]

[screenshot · LW · GW]

EDIT 2023-10-03: Martin disputes [LW(p) · GW(p)] my summary of his comments. I think it’s good practice to link to disputes like this, even though I stand by my summary. I also want to give a heads-up that I see his comments in the dispute thread as continuing the patterns I describe (which makes that thread a tax on the reader). If you want to dig into this, I strongly suggest you first read his original comments [LW(p) · GW(p)] and come up with your own summary, so you can compare that to each of ours.

The charitable explanation here is that my post focuses on naive veganism, and Soto thinks that’s a made-up problem. He believes this because all of the vegans he knows (through vegan advocacy networks) are well-educated on nutrition. There are a few problems here, but the most fundamental is that enacting his desired policy of suppressing public discussion of nutrition issues with plant-exclusive diets will prevent us from getting the information to know whether problems are widespread. My post and a commenter’s report [LW(p) · GW(p)] on their college group were apparently the first he’d heard of vegans who don’t live and breathe B12.

I have a lot of respect for Soto for doing the math and so clearly stating his position that “the damage to people who implement veganism badly is less important to me than the damage to animals caused by eating them”. Most people flinch away from explicit trade-offs like that, and I appreciate that he did them and owned the conclusion. But I can’t trust his math, because he’s cut himself off from half the information necessary to do the calculations. How can he estimate the number of vegans harmed or lost due to nutritional issues if he doesn’t let people talk about them in public?

In fact the best data I found on this was from Faunalytics, which found that ~20% of veg*ns drop out due to health reasons. This suggests to me a high chance his math is wrong and will lead him to do harm by his own standards.

EDIT 2023-10-04: Using Faunalytics numbers for self-reported health issues and improvements after quitting veg*nism, I calculated that 20% of veg*ns develop health issues. This number is sensitive to your assumptions; I consider 20% conservative, but it could be an overestimate. I encourage you to read the whole post and play with my model, and of course read the original work.
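(A minimal sketch of this kind of estimate, in Python. This is not the actual model from the linked post, and every parameter value below is an illustrative placeholder rather than a Faunalytics figure; the only point is how much the headline rate moves as the assumptions move.)

    # Toy sensitivity check. NOT the model from the linked post; all
    # parameter values are illustrative placeholders, not Faunalytics data.
    def health_issue_rate(dropout_rate, health_share_of_dropouts, current_issue_rate):
        """Fraction of everyone who tried veg*nism who develops health issues."""
        quit_with_issues = dropout_rate * health_share_of_dropouts
        stayed_with_issues = (1 - dropout_rate) * current_issue_rate
        return quit_with_issues + stayed_with_issues

    # Vary the assumptions and watch how much the headline number moves.
    for health_share in (0.15, 0.20, 0.25):
        for current in (0.05, 0.10):
            rate = health_issue_rate(0.75, health_share, current)
            print(f"health share={health_share:.0%}, current issues={current:.0%}"
                  f" -> overall {rate:.0%}")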

Most people aren’t nearly this upfront. They will go through the motions of calling an idea incorrect before emphasizing how it will lead to outcomes they dislike. But the net effect is a suppression of the exploration of ideas they find inconvenient. 

This post on Facebook is a good example. Normally I would consider Facebook posts out of bounds, especially ones this old (over five years). Facebook is a casual space and I want people to be able to explore ideas without worrying that they’re creating a permanent record that will be used against them. In this case I felt that because the post was permissioned to public and a considered statement (rather than an off-the-cuff reply), the truth value outweighed the chilling effect. But because it’s so old and I don’t know the author’s current opinion, I’m leaving out their name and not linking to the post.

The author is a midlist EA: I’d heard of them for other reasons, but they’re certainly not EA-famous.

There are posts very similar to this one I would have been fine with, maybe even joyful about. You could present evidence against the claims that X is harmful, or push people to verify things before repeating them, or suggest we reserve the word poison for actual kill-you-dead molecules and not complicated compound constructions with many good parts and only weak evidence of mild long-term negative effects. But what they actually did was name-check the idea that X is fine before focusing on the harm to animals caused by repeating the claim, which is exactly what you’d expect if the health claims were true but inconvenient. I don’t know what this author actually believes, but I do know that focusing on the consequences when the facts are in question is not truthseeking.

A subtler version comes from the AHS-2 post. At the time of this comment the author, Rockwell, described herself as the leader of EA NYC and an advisor to philanthropists on animal suffering, so this isn’t some rando having a feeling. This person has some authority.

[screenshot · EA(p) · GW(p)]

[screenshot · EA(p) · GW(p)]

This comment more strongly emphasizes the claim that my beliefs are wrong, not just inconvenient. And if she’d written the counter-argument she promised, I’d be putting this in the counter-examples section. But it’s been three months, and she has not written anything where I can find it, nor responded to my inquiries. So even if the literal claim were correct, she’s using a technique whose efficacy is independent of truth.

Over on the Change My Mind post, the top comment says that vegan advocacy is fine because it’s no worse than fast food or breakfast cereal ads:

[screenshot · LW · GW]

[screenshot · LW · GW]

I’m surprised someone would make this comment. But what really shocks me is the complete lack of pushback from other vegan advocates. If I heard an ally describe our shared movement as no worse than McDonald’s, I would injure myself in my haste to repudiate them.

Counter-Examples

This post [? · GW] on EAForum came out while I was finishing this post. The author asks if they should abstain from giving bad reviews to vegan restaurants, because it might lead to more animal consumption, which would be a central example of my complaint. But the comments are overwhelmingly “no, there’s not even a good consequentialist argument for that”, and the author appears to be taking that to heart. So from my perspective this is a success story.

Ignore the arguments people are actually making

I’ve experienced this pattern way too often.

Me: goes out of my way to say not-X in a post
Comment: how dare you say X! X is so wrong!
Me: here’s where I explicitly say not-X.
*crickets*

This is by no means unique to posts about veganism. “They’re yelling at me for an argument I didn’t make” is a common complaint of mine. But it happens so often, and so explicitly, in the vegan nutrition posts. Let me give some examples.

My post:

[screenshot · EA · GW]

[screenshot · EA · GW]

Commenter:

[screenshot · EA(p) · GW(p)]

[screenshot · EA(p) · GW(p)]

My post:

[screenshot · EA · GW]

[screenshots · EA · GW]

Commenters:

[screenshots · EA(p) · GW(p)]

[screenshot · LW(p) · GW(p)]

[screenshot · LW(p) · GW(p)]

My post: [LW · GW]

[screenshot · LW · GW]

[screenshot · LW · GW]

Commenter: 

[screenshot · LW(p) · GW(p)]

[screenshot · LW(p) · GW(p)]

My post: 

[screenshot · LW · GW]

[screenshot · LW · GW]

Commenter:

[screenshot · LW(p) · GW(p)]

[screenshot · LW(p) · GW(p)]

My post:

Commenter:

[screenshot · EA(p) · GW(p)]

You might be thinking “well those posts were very long and honestly kind of boring, it would be unreasonable to expect people to read everything”. But the length and precision are themselves a response to people arguing with positions I don’t hold (and failing to update when I clarify). The only things I can do are spell out all of my beliefs or not spell out all of my beliefs, and either way ends with comments arguing against views I don’t have. 

Frame control/strong implications not defended/fuzziness

This is the hardest one to describe. Sometimes people say things, and I disagree, and we can hope to clarify that disagreement. But sometimes people say things and responding is like nailing jello to a wall. Their claims aren’t explicit, or they’re individually explicit but aren’t internally consistent, or play games with definitions. They “counter” statements in ways that might score a point in debate club but don’t address the actual concern in context. 

One example is the top-voted comment on LW on the Change My Mind post:

[screenshot · LW(p) · GW(p)]

[screenshot · LW(p) · GW(p)]

Over a very long exchange I attempt to nail down his position: 

So what exactly does he disagree with me on? 

He also had a very interesting exchange with another commenter. That thread got quite long, and fuzziness by its nature doesn’t lend itself to excerpts, so you should read the whole thing, but I will share highlights. 

Before the screenshot: Wilkox acknowledges that B12 and iron deficiencies can cause fatigue, and veganism can cause these deficiencies, but it’s fine because if people get tired they can go to a doctor.

[screenshot · LW(p) · GW(p)]

[screenshot · LW(p) · GW(p)]

That reply doesn’t contain any false statements, and would be perfectly reasonable if we were talking about ER triage protocols. But it’s irrelevant when the conversation is “can we count on veganism-induced fatigue being caught?”. (The answer is no, and only some of the reasons have been brought up here.)

You can see how the rest of this conversation worked out in the Sound and Fury section.

A much, much milder example can be seen in What vegan food resources have you found useful? [LW · GW]. This was my attempt to create something uncontroversially useful, and I’d call it a modest success. The post had 20-something karma on LW and EAForum, and there were several useful-looking resources shared on EAForum. But it also got the following comment on LW: 

[screenshot · LW(p) · GW(p)]

[screenshot · LW(p) · GW(p)]

I picked this example because it only takes a little bit of thought to see the jujitsu, so little it barely counts. He disagreed with my implicit claim that… well okay here’s the problem. I’m still not quite sure where he disagrees. Does he think everyone automatically eats well as a vegan? That no one will benefit from resources like veganhealth.org? That no one will benefit from a cheat sheet for vegan party spreads? That there is no one for whom veganism is challenging? He can’t mean that last one because he acknowledges exceptions in his later comment, but only because I pushed back. Maybe he thinks that the only vegans who don’t follow his steps are those with medical issues, and that no-processed-food diets are too unpopular to consider? 

I don’t think this was deliberately anti-truthseeking, because if it was he would have stopped at “nothing special” instead of immediately outlining the special things his partner does. That was fairly epistemically cooperative. But it is still an example of strong claims made only implicitly. 

Counter-Examples

I think this comment makes a claim (“vegans moving to naive omnivorism will hurt themselves”) clearly, and backs it up with a lot of details.

[screenshot · EA(p) · GW(p)]

[screenshot · EA(p) · GW(p)]

The tone is kind of obnoxious and he’s arguing with something I never claimed, but his beliefs are quite clear. I can immediately understand which beliefs of his I agree with (“vegans moving to naive omnivorism will hurt themselves” and “that would be bad”) and make good guesses at implicit claims I disagree with (“and therefore we should let people hurt themselves with naive veganism”? “I [Elizabeth] wouldn’t treat naive mass conversion to omnivorism seriously as a problem”?). That’s enough to count as epistemically cooperative.

Sound and fury, signifying no substantial disagreement 

Sometimes someone comments with an intense, strongly worded, perhaps actively hostile, disagreement. After a laborious back and forth, the problem dissolves: they acknowledge I never held the position they were arguing with, or they don’t actually disagree with my specific claims. 

Originally I felt happy about these, because “mostly agreeing” is an unusually positive outcome for that opening. But these discussions are grueling. It is hard to express kindness and curiosity towards someone yelling at you for a position you explicitly disclaimed. Any one of these stories would be a success but en masse they amount to a huge tax on saying anything about veganism, which is already quite labor intensive.

The discussions could still be worth it if it changed the arguer’s mind, or at least how they approached the next argument. But I don’t get the sense that’s what happens. Neither of us have changed our minds about anything, and I think they’re just as likely to start a similar fight the next week.

I do feel like vegan advocates are entitled to a certain amount of defensiveness. They encounter large amounts of concern trolling and outright hostility, and it makes sense that that colors their interactions. But that allowance covers one comment, maybe two, not three to eight (Wilkox, depending on which ones you count). 

For example, I’ve already quoted Wilkox’s very fuzzy comment (reminder: this was the top-voted comment on that post on LW). That was followed by a 13+ comment exchange [LW(p) · GW(p)] in which we eventually found he had little disagreement with any of my claims about vegan nutrition, only with the importance of these facts. There really isn’t a way for me to screenshot this: the length and lack of specifics is the point.

You could say that the confusion stemmed from poor writing on my part, but:

[screenshot · LW(p) · GW(p)]

[screenshot · LW(p) · GW(p)]

I really appreciate the meta-honesty here, but since the exchange appears to have eaten hours of both of our time just to dig ourselves out of a hole, I can’t get that excited about it. 

Counter-Examples

I want to explicitly note that Sound and Fury isn’t the same as asking questions or not understanding a post. E.g. here [EA(p) · GW(p)] Ben West identifies a confusion, asks me, and accepts both my answer and an explanation of why answering is difficult. 

[screenshot · EA(p) · GW(p)]

Or in that same post, someone asked me to define nutritionally dense [EA(p) · GW(p)]. It took a bit for me to answer and we still disagreed afterward, but it was a great question and the exchange felt highly truthseeking.  

Bad sources, badly handled 

Citations should be something of a bet: if the citation (the source itself or your summary of it) is high quality and supports your point, that should move people closer to your views. But if they identify serious relevant flaws, that should move both you and your audience closer to their point of view. Of course our beliefs are based on a lot of sources and it’s not feasible or desirable to really dig into all of them for every disagreement, so the bet may be very small. But if you’re not willing to defend a citation, you shouldn’t make it.

What I see in EA vegan advocacy is deeply terrible citations, thrown out casually, and abandoned when inconvenient. I’ve made something of a name for myself checking citations and otherwise investigating factual claims from works of nonfiction. Of everything I’ve investigated, I think citations from EA vegan advocacy have the worst effort:truth ratio. Not the most outright falsehoods (I’ve read some pretty woo stuff, but that can be dismissed quickly); citations in vegan advocacy are often revealed to be terrible only after great effort.

And having put in that effort, my reward is usually either crickets or a new terrible citation. Sometimes we will eventually drill down to “I just believe it”, which is honestly fine. We don’t live our lives to the standard of academic papers. But if that’s your reason, you need to state it from the beginning.

For example, in the top-voted comment [EA(p) · GW(p)] on the Change My Mind post on EAF, Rockwell (head of EA NYC) includes five links. Only links 1 and 4 are problems, but I’ll describe them all in order to avoid confusion.

[screenshot · EA(p) · GW(p)]

Of the five links: 

  1. Wilkox’s comment [LW(p) · GW(p)] on the LW version of the post, where he eventually agrees that veganism requires testing and supplementation for many people (although most of that exchange hadn’t happened at the time of linking).
  2. Cites my past work, if anything too generously.
  3. An estimation of nutrient deficiency in the US. I don’t love that this uses dietary intake as opposed to testing values (people’s needs vary so wildly), but at least it used EAR and not RDA. I’d want more from a post, but for a comment this is fine.
  4. An absolutely atrocious article, which the comment further misrepresents. We don’t have time to get into all the flaws in that article, so I’ve put my first hour of criticisms in the appendix. What really gets me here is that I would have agreed the standard American diet sucks without asking for a source. I thought I had conceded that point preemptively, albeit without naming the Standard American Diet explicitly.

    And if she did feel a need to go the extra mile on rigor for this comment, it’s really not that hard to find decent-looking research about the harms of the Standard Shitty American Diet. I found this paper on heart disease in 30 seconds, and most of that time was spent waiting for Elicit to load. I don’t know if it’s actually good, but it is not so obviously farcical as the cited paper.
  5. The fifth link goes to a description of the Standard American Diet.

Rockwell did not respond to my initial reply (that fixing vegan issues is easier than fixing SSAD), or my asking [EA(p) · GW(p)] if that paper on the risks of meat eating was her favorite.

A much more time-consuming version of this happened with Adventist Health Study-2. Several people cited the AHS-2 as a pseudo-RCT that supported veganism (EDIT 2023-10-03: as superior to low meat omnivorism). There’s one commenter on LessWrong [LW(p) · GW(p)] and two [EA(p) · GW(p)] on EAForum [EA(p) · GW(p)] (one of whom had previously co-authored a blog post on the study and offered to answer questions). As I discussed here, that study is one of the best we have on nutrition and I’m very glad people brought it to my attention. But calling it a pseudo-RCT that supports veganism is deeply misleading. It is nowhere near randomized, and doesn’t cleanly support veganism even if you pretend it is.

(EDIT 2023-10-03: To be clear, the noise in the study overwhelms most differences in outcomes, even ignoring the self-sorting. My complaint is that the study was presented as strong evidence in one direction, when it’s both very weak and, if you treat it as strong, points in a different direction than reported. One commenter has said she only meant it as evidence that a vegan diet can work for some people, which I agree with, as stated in the post she was responding to. She disagrees with other parts of my summary as well, you can read more here [LW(p) · GW(p)])

It’s been three months, and none of the recommenders have responded to my analysis of the main AHS-2 paper, despite repeated requests. 

But finding that a paper is of lower quality and supports an entirely different conclusion is still not the worst-case scenario. The worst outcome is citation whack-a-mole.

A good example of this is from the post “Getting Cats Vegan is Possible and Imperative [EA · GW]”, by Karthik Sekar. Karthik is a vegan author and data scientist at a plant-based meat company. 

[Note that I didn’t zero out my votes on this post’s comments, because it seemed less important for posts I didn’t write]

Karthik cites a lot of sources in that post. I picked what looked like his strongest source and investigated. It was terrible. It was a review article, so checking it required reading multiple studies. Of the cited studies, only 4 (with a total of 39 combined subjects) used blood tests rather than owner reports, and more than half of those were given vegetarian diets, not vegan (even though the table header says vegan). The only RCT didn’t include carnivorous diets.

Karthik agrees [EA(p) · GW(p)] that the paper (which he cited) doesn’t make its case “strong nor clear”, and cites another one (which AFAICT was not in the original post).

I dismiss [EA(p) · GW(p)] the new citation on the basis of “motivated [study] population and minimal reporting”. 

He retreats to [EA(p) · GW(p)] “[My] argument isn’t solely based on the survey data. It’s supported by fundamentals of biochemistry, metabolism, and digestion too […] Mammals such as cats will digest food matter into constituent molecules. Those molecules are chemically converted to other molecules–collectively, metabolism–, and energy and biomass (muscles, bones) are built from those precursors. For cats to truly be obligate carnivores, there would have to be something exceptional about meat: (A) There would have to be essential molecules–nutrients–that cannot be sourced anywhere else OR (B) the meat would have to be digestible in a way that’s not possible with plant matter. […So any plant-based food that passes AAFCO guidelines is nutritionally complete for cats. Ami does, for example.]

I point [EA(p) · GW(p)] out that AAFCO doesn’t think meeting their guidelines is necessarily sufficient. I expected him to dismiss this as corporate ass-covering, and there’s a good chance he’d be right. But he didn’t.

Finally, he gets to his real position [EA(p) · GW(p)]:

[screenshot · EA(p) · GW(p)]

[screenshot · EA(p) · GW(p)]

Which would have been a fine aspirational statement, but then why include so many papers he wasn’t willing to stand behind? 

On that same post someone else [EA(p) · GW(p)] says that they think my concerns are a big deal, and Karthik probably can’t convince them without convincing me. Karthik responds [EA(p) · GW(p)]:

[screenshot · EA(p) · GW(p)]

So he’s conceded that his study didn’t show what he claimed. And he’s not really defending the AAFCO standards. But he’s really sure this will work anyway? And I’m the one who won’t update their beliefs. 

In a different comment the same someone else [EA(p) · GW(p)] notes a weird incongruity in the paper. Karthik doesn’t respond.

This is the real risk of the bad sources: hours of deep intellectual work to discover that his argument boils down to a theoretical claim the author could have stated at the beginning. “I believe vegan cat food meets these numbers, and meeting these numbers is sufficient” honestly isn’t a terrible argument, and I’d have respected it plainly stated, especially since he explicitly calls [EA · GW] for RCTs. Or I would, if he didn’t view those RCTs primarily as a means to prove what he already knows.

[screenshot · EA · GW]

Counter-Examples

This commenter [EA(p) · GW(p)] starts out pretty similarly to the others, with a very limited paper implied to have very big implications. But when I push back on the serious limitations of the study, he owns the issues [EA(p) · GW(p)] and says he only ever meant the paper to support a more modest claim (while still believing the big claim he did make?). 

Taxing Facebook

When I joined EA Facebook in 2014, it was absolutely hopping. Every week I met new people and had great discussions with them where we both walked away smarter. I’m unclear when this trailed off because I was drifting away from EA at the same time, but let’s say the golden age was definitively over by 2018. Facebook was where I first noticed the pattern with EA vegan advocacy. 

Back in 2014 or 2015, Seattle EA watched some horrifying factory farming documentaries, and we were each considering how we should change our diets in light of that new information. We tried to continue the discussion on Facebook, only to have Jacy Reese Anthis (who was not a member of the local group and AFAIK had never been to Seattle) repeatedly insist that the only acceptable compromise was vegetarianism, humane meat doesn’t exist, and he hadn’t heard of health conditions benefiting from animal products so my doctor was wrong (or maybe I made it up?). 

I wish I could share screenshots on this, but the comments are gone (I think because the account has been deleted). I’ve included shots of the post and some of my comments (one of which refers to Jacy obstructing an earlier conversation, which I’d forgotten about). A third commenter has been cropped out, but I promise it doesn’t change the context.

(his answer was no, and that either I or my doctor were wrong because Jacy had never heard of any medical issue requiring consumption of animal products)

That conversation went okay. Seattle EA discussed suffering math on different vertebrates, someone brought up eating bugs, Brian Tomasik argued against eating bugs. It was everything an EA conversation should be.

But it never happened again.

Because this kind of thing happened every time animal products, diet, and health came up anywhere on EA Facebook. The commenters weren’t always as aggressive as Jacy, but they added a tremendous amount of cumulative friction. An omnivore would ask if lacto-vegetarianism worked, and the discussion would get derailed by animal advocates insisting you didn’t need milk.  Conversations about feeling hungry at EAG inevitably got a bunch of commenters saying they were fine, as if that was a rebuttal. 

Jeff Kaufman mirrors his FB posts onto his actual blog, which makes me feel more okay linking to it. In this post he makes a pretty clear point: veganism can be cheaper, or healthier, or tastier, but not all at once. He gets a lot of arguments. One person argues that no one thinks that, they just care about animals more.

One vegetarian says they’d like to go vegan but just can’t beat eggs for their mix of convenience, price, macronutrients, and micronutrients. She gets a lot of suggestions for substitutes, all of which flunk on at least one criterion. Jacy Reese Anthis has a deleted comment, which, judging from the reply, asserted the existence of a substitute without listing one.

After a year or two of this, people just stopped talking about anything except the vegan party line on public FB. We’d bitch to each other in private, but that was it. And that’s why, when a new generation of people joined EA and were exposed to the moral argument for veganism, there was no discussion of the practicalities visible to them. 

[TBF they probably wouldn’t have seen the conversations on FB anyway, I’m told that’s an old-person thing now. But the silence has extended itself]

Ignoring known falsehoods until they’re a PR problem

This is old news, but: for many years ACE said leafleting was great. Lots of people (including me and some friends, in 2015) criticized their numbers. This did not seem to have much effect; they’d agree their eval was imperfect and that they intended to put up a disclaimer, but it never happened.

In late 2016 a scathing anti-animal-EA piece was published on Medium, making many incendiary accusations, including that the leafleting numbers were made up. I wouldn’t call that post very epistemically virtuous; it was clearly hoping to inflame more than inform. But within a few weeks (months?), ACE put up a disavowal of the leafleting numbers.

I unfortunately can’t look up the original correction or when they put it up; archive.org behaves very weirdly around animalcharityevaluators.org. As I remember it, the correction made the page less obviously false, but the disavowal was tepid and not a real fix. Here’s the 2022 version:

There are two options here: ACE was right about leafleting, and caved to public pressure rather than defend their beliefs. Or ACE was wrong about leafleting (and knew they were wrong, because they conceded in private when challenged) but continued to publicly endorse it.

Why I Care

I’ve thought vegan advocates were advocating falsehoods and stifling truthseeking for years. I never bothered to write it up, and generally avoided public discussion, because that sounded like a lot of work for absolutely no benefit. Obviously I wasn’t going to convince the advocates of anything, because finding the truth wasn’t their goal, and everyone else knew it so what did it matter? I was annoyed at them on principle for being wrong and controlling public discussion with unfair means, but there are so many wrong people in the world and I had a lot on my plate. 

I should have cared more about the principle.

I’ve talked before about the young Effective Altruists [LW(p) · GW(p)] who converted to veganism with no thought for nutrition, some of whom suffered for it. They trusted effective altruism to have properly screened arguments and tell them what they needed to know. After my posts went up I started getting emails from older EAs who weren’t getting the proper testing either; I didn’t know because I didn’t talk to them in private, and we couldn’t discuss it in public. 

Which is the default story of not fighting for truth. You think the consequences are minimal, but you can’t know because the entire problem is that information is being suppressed. 

What do EA vegan advocates need to do?

  1. Acknowledge that nutrition is a multidimensional problem, that veganism is a constraint, and that adding constraints usually makes problems harder, especially if you’re already under several.
  2. Take responsibility for the nutritional education of vegans you create. This is not just an obligation, it’s an opportunity to improve the lives of people who are on your side. If you genuinely believe veganism can be nutritionally whole, then every person doing it poorly is suffering for your shared cause for no reason.
    1. You don’t even have to single out veganism. For purposes of this point I’ll accept “All diet switches have transition costs and veganism is no different, and the long term benefits more than compensate”. I don’t think your certainty is merited, and I’ll respect you more if you express uncertainty, but I understand that some situations require short messaging and am willing to allow this compression.
  3. Be epistemically cooperative [LW · GW], at least within EA spaces. I realize this is a big ask because in the larger world people are often epistemically uncooperative towards you. But obfuscation is a symmetric weapon and anger is not a reason to believe someone. Let’s deescalate this arms race and have both sides be more truthseeking.

    What does epistemic cooperation mean?
    1. Epistemic legibility [LW · GW]. Make your claims and cruxes clear. E.g. “I don’t believe iron deficiency is a problem because everyone knows to take supplements and they always work” instead of “Why are you bothering about iron supplements?”
    2. Respond to the arguments people actually make, or say why you’re not. Don’t project arguments from one context onto someone else. I realize this one is a big ask, and you have my blessing to go meta and ask for work from the other party to make this viable, as long as it’s done explicitly.
    3. Stop categorically dismissing omnivores’ self-reports. I’m sure many people do overestimate the difficulties of veganism, but that doesn’t mean it’s easy or even possible for everyone.
      1. A scientific study, no matter how good, does not override [EA(p) · GW(p)] a specific person telling you they felt hungry at a specific time. 
    4. If someone makes a good argument or disproves your source, update accordingly. 
  4. Police your own. If someone makes a false claim or bad citation while advocating veganism, point it out. If someone dismisses a detailed self-report of a failed attempt at veganism, push back. 

All Effective Altruists need to stand up for our epistemic commons

Effective Altruism is supposed to mean using evidence and reason to do the most good. A necessary component of that is accurate evidence. All the spreadsheets and moral math in the world mean nothing if the input is corrupted. There can be no consequentialist argument for lying to yourself or allies[1], because without truth you can’t make accurate utility calculations[2]. Garbage in, garbage out.

One of EA’s biggest assets is an environment that rewards truthseeking more than average. Without uniquely strong truthseeking, EA is just another movement of people who are sure they’re right. But high-truthseeking environments are fragile, exploiting them is rewarding, and the costs of violating them are distributed and hard to measure. The only way EA’s environment has a chance of persisting is if the community makes preserving it a priority. Even when it’s hard, even when it makes people unhappy, and even when the short-term rewards of defection are high.

How do we do that? I wish I had a good answer. The problem is complicated and hard to reason about, and I don’t think we understand it enough to fix it. Thus far I’ve focused on vegan advocacy as a case study in destruction of the epistemic commons because its operations are relatively unsophisticated and easy to understand. Next post I’ll be giving more examples from across EA, but those will still have a bias towards legibility and visibility. The real challenge is creating an epistemic immune system that can fight threats we can’t even detect yet. 


Acknowledgments

Thanks to the many people I’ve discussed this with over the past few months. 

Thanks to Patrick LaVictoire and Aric Floyd for beta reading this post.

Thanks to Lightspeed Grants for funding this work. Note: a previous post referred to my work on nutrition and epistemics as unpaid after a certain point. That was true at the time and I had no reason to believe it wouldn’t stay true, but Lightspeed launched a week after that post and was an unusually good fit so I applied. I haven’t received a check yet but they have committed to the grant so I think it’s fair to count this as paid. 

Appendix

Terrible anti-meat article

That covers the first five subsections. The next set maybe looks better sourced, but I can’t imagine it being good enough to redeem the paper. I am less convinced of the link between excess meat and health issues than I was before I read it, because surely if the claim were easy to prove, the paper would have better supporting evidence, or the EA Forum commenter would have picked a better source.

[Note: I didn’t bother reading the pro-meat section. It may also be terrible, but this does not affect my position.] 

  1. ”Are you saying I can’t lie to Nazis about the contents of my attic?” No more so than you’re banned from murdering them or slashing their tires. Like, you should probably think hard about how it fits into your strategy, but I assumed “yourself or allies” excluded Nazis for everyone reading this. 

    “Doesn’t that make the definition of enemies extremely morally load-bearing?” It reflects that fact, yes.

    “So vegan advocates can morally lie as long as it’s to people they consider enemies?”  I think this is, at a minimum, defensible and morally consistent. In some cases I think it’s admirable, such as lying to get access to a slaughterhouse in order to take horrifying videos. It’s a declaration of war, but I assume vegan advocates are proud to declare the meat industry their enemy. ↩
  2. I’ll allow that it’s conceptually possible to make deontological or virtue ethics arguments for lying to yourself or allies, but it’s difficult, and the arguments are narrow and/or wrong. Accurate beliefs turn out to be critical to getting good outcomes in all kinds of situations.  ↩

Edits

You will notice a few edits in this post, which are marked with the edit date. The original text is struck through.

When I initially published this post on 2023-09-28, several images failed to copy over from the google doc to the shitty WordPress editor. These were fixed within a few hours.

I tried to link to sources for every screenshot (except the Facebook ones). On 2023-10-05 I realized that a lot of the links were missing (but not all, which is weird) and manually added them back in. In the process I found two screenshots that never had links, even in the google doc, and fixed those. Halfway through this process the already shitty editor flat out refused to add links to any more images. This post is apparently already too big for WordPress to handle, so every attempted action took at least 60 seconds, and I was constantly afraid I was going to make things worse, so for some images the link is in the surrounding text. 

If anyone knows of a blogging website that will gracefully accept cut and paste from google docs, please let me know. That is literally all an editor takes to be a success in my book and last time I checked I could not find a single site that managed it.

246 comments

Comments sorted by top scores.

comment by Ninety-Three · 2023-09-29T01:01:43.278Z · LW(p) · GW(p)

The other reason vegan advocates should care about the truth is that if you keep lying, people will notice and stop trusting you. Case in point, I am not a vegan and I would describe my epistemic status as "not really open to persuasion" because I long ago noticed exactly the dynamics this post describes and concluded that I would be a fool to believe anything a vegan advocate told me. I could rigorously check every fact presented but that takes forever, I'd rather just keep eating meat and spend my time in an epistemic environment that hasn't declared war on me.

Replies from: tailcalled, adamzerner, jacques-thibodeau, Slapstick
comment by tailcalled · 2023-09-29T06:37:59.804Z · LW(p) · GW(p)

My impression is that while vegans are not truth-seeking, carnists are also not truth-seeking. This includes making ag-gag laws, putting pictures of free animals on packages containing factory-farmed animal flesh, denying that animals have feelings and can experience pain using nonsense arguments, hiding information about factory farming from children, etc.

So I guess the question is whether you prefer being in an epistemic environment that has declared war on humans or an epistemic environment that has declared war on farm animals. And I suppose as a human it's easier to be in the latter, as long as you don't mind hiring people to torture animals for your pleasure.

Edit/clarification: I don't mean that you can't choose to figure it out in more detail, only that if you do give up on figuring it out in more detail, you're more constrained. [LW(p) · GW(p)]

Replies from: ariel-kwiatkowski, dr_s, Serine, tailcalled, adamzerner, ztzuliios
comment by Ariel Kwiatkowski (ariel-kwiatkowski) · 2023-09-29T09:14:50.961Z · LW(p) · GW(p)

There's a pretty significant difference here in my view -- "carnists" are not a coherent group, not an ideology, they do not have an agenda (unless we're talking about some very specific industry lobbyists who no doubt exist). They're just people who don't care and eat meat.

Ideological vegans (i.e. not people who just happen to not eat meat, but don't really care either way) are a very specific ideological group, and especially if we qualify them like in this post ("EA vegan advocates"), we can talk about their collective traits.

Replies from: pktechgirl, tailcalled, Green_Swan, None
comment by Elizabeth (pktechgirl) · 2023-09-29T19:36:52.407Z · LW(p) · GW(p)

TBF, the meat/dairy/egg industries are specific groups of people who work pretty hard to increase animal product consumption, and are much better resourced than vegan advocates. I can understand why animal advocacy would develop some pretty aggressive norms in the face of that, and for that reason I consider it kind of beside the point to go after them in the wider world. It would basically be demanding unilateral disarmament from the weaker side.

But the fact that the wider world is so confused there's no point in pushing for truth is the point. EA needs to stay better than that, and part of that is deescalating the arms race when you're inside its boundaries. 

Replies from: tailcalled
comment by tailcalled · 2023-09-29T19:56:23.633Z · LW(p) · GW(p)

But the fact that the wider world is so confused there's no point in pushing for truth is the point. EA needs to stay better than that, and part of that is deescalating the arms race when you're inside its boundaries. 

Agree with this. I mean I'm definitely not pushing back against your claims, I'm just pointing out the problem seems bigger than commonly understood.

comment by tailcalled · 2023-09-29T19:54:57.124Z · LW(p) · GW(p)

Could you expand on why you think that it makes a significant difference?

  • E.g. if the goal is to model what epistemic distortions you might face, or to suggests directions of change for fewer distortions, then coherence is only of limited concern (a coherent group might be easier to change, but on the other hand it might also more easily coordinate to oppose change).
  • I'm not sure why you say they are not an ideology, at least under my model of ideology that I have developed for other purposes, they fit the definition (i.e. I believe carnism involves a set of correlated beliefs about life and society that fit together).
  • Also not sure what you mean by carnists not having an agenda, in my experience most carnists have an agenda of wanting to eat lots of cheap delicious animal flesh.
Replies from: bec-hawk
comment by Rebecca (bec-hawk) · 2023-09-30T18:58:20.033Z · LW(p) · GW(p)

Could you clarify who you are defining as carnists?

Replies from: tailcalled
comment by tailcalled · 2023-09-30T19:40:26.896Z · LW(p) · GW(p)

I tend to think of ideology as a continuum, rather than a strict binary. Like people tend to have varying degrees of belief and trust in the sides of a conflict, and various unique factors influencing their views, and this leads to a lot of shades of nuance that can't really be captured with a binary carnist/not-carnist definition.

But I think there are still some correlated beliefs where you could e.g. take their first principal component as an operationalization of carnism. Some beliefs that might go into this, many of which I have encountered from carnists:

  • "People should be allowed to freely choose whether they want to eat factory-farmed meat or not."
  • "Animals cannot suffer in any way that matters."
  • "One should take an evolutionary perspective and realize that factory farming is actually good for animals. After all, if not for humans putting a lot of effort into farming them, they wouldn't even exist at their current population levels."
  • "People who do enough good things out of their own charity deserve to eat animals without concerning themselves with the moral implications."
  • "People who design packaging for animal products ought to make it look aesthetically pleasing and comfortable."
  • "It is offensive and unreasonable for people to claim that meat-eating is a horribly harmful habit."
  • "Animals are made to be used by humans."
  • "Consuming animal products like meat or milk is healthier than being strictly vegan."

One could make a defense of some of the statements. For instance Elizabeth has made a to-me convincing defense of the last statement. I don't think this is a bug in the definition of carnism, it just shows that some carnist beliefs can be good and true. One ought to be able to admit that ideology is real and matters while also being able to recognize that it's not a black-and-white issue.
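[A minimal sketch of the operationalization tailcalled describes, in Python, using randomly simulated Likert responses rather than real survey data; scikit-learn's PCA is one assumed tool for extracting the first principal component.]

    # Sketch of operationalizing "carnism" as the first principal component
    # of agreement ratings on correlated belief items. Simulated data only.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    n_respondents, n_items = 200, 8  # e.g. the eight example statements above

    # Simulate 1-5 Likert responses driven by one latent attitude plus noise.
    latent = rng.normal(size=(n_respondents, 1))
    loadings = rng.uniform(0.5, 1.0, size=(1, n_items))
    noise = rng.normal(scale=0.7, size=(n_respondents, n_items))
    responses = np.clip(np.round(3 + latent @ loadings + noise), 1, 5)

    # Each respondent's score on the first component is their "carnism"
    # measure; the loadings show how strongly each belief contributes.
    pca = PCA(n_components=1)
    scores = pca.fit_transform(responses)
    print("variance explained:", round(pca.explained_variance_ratio_[0], 2))
    print("item loadings:", pca.components_[0].round(2))
    print("first few scores:", scores[:3].ravel().round(2))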

comment by Jacob Watts (Green_Swan) · 2023-10-03T02:28:42.616Z · LW(p) · GW(p)

While I agree that there are notable differences between "vegans" and "carnists" in terms of group dynamics, I do not think that necessarily disagrees with the idea that carnists are anti-truthseeking. 

"carnists" are not a coherent group, not an ideology, they do not have an agenda (unless we're talking about some very specific industry lobbyists who no doubt exist). They're just people who don't care and eat meat.

It seems untrue that because carnists are not an organized physical group that has meetings and such, they are thereby incapable of having shared norms or ideas/memes. I think in some contexts it can make sense/be useful to refer to a group of people who are not coherent in the sense of explicitly "working together" or having shared newsletters based around a subject or whatever. In some cases, it can make sense to refer to those people's ideologies/norms.

Also, I disagree with the idea that carnists are inherently neutral on the subject of animals/meat. That is, they don't "not care". In general, they actively want to eat meat and would be against things that would stop this. That's not "not caring"; it is "having an agenda", just not one that opposes the current status quo. The fact that being pro-meat and "okay with factory farming" is the more dominant stance/assumed default in our current status quo doesn't mean that it isn't a legitimate position/belief that people could be said to hold. There are many examples of other memetic environments throughout history where the assumed default may not have looked like a "stance" or an "agenda" to the people who were used to it, but nonetheless represented certain ideological claims.

I don't think something only becomes an "ideology" when it disagrees with the current dominant cultural ideas; some things that are culturally common and baked into people from birth can still absolutely be "ideology" in the way I am used to using it. If we disagree on that, then perhaps we could use a different term? 

If nothing else, carnists share the ideological assumption that "eating meat is okay". In practice, they often also share ideas about the surrounding philosophical questions and attitudes. I don't think it is beyond the pale to say that they could share norms around truth-seeking as it relates to these questions and attitudes. It feels unnecessarily dismissive, and perhaps implicitly status quoist, to assume that, as a dominant, implicit meme of our culture, "carnism" must be "neutral" and therefore does not come with/correlate with any norms surrounding how people think about/process questions related to animals/meat.

Carnism comes with as much ideology as veganism even if people aren't as explicit in presenting it or if the typical carnist hasn't put as much thought into it. 

I do not really have any experience advocating publicly for veganism, and I wouldn't really know which specific epistemic failure modes are common among carnists in these sorts of conversations, but I have seen plenty of people bend themselves out of shape preserving their own comfort and status quo, so it really doesn't seem like a stretch to imagine that epistemic maladies may tend to present among carnists when the question of veganism comes up.

For one thing, I have personally seen carnists respond in intentionally hostile ways towards vegans/vegan messaging on several occasions. Partly this is because they see it as a threat to their ideas or their way of life, and partly because veganism is a designated punching bag that you're allowed to insult in a lot of places. Oftentimes, these attacks draw on shared ideas about veganism/animals/morality that are common between "carnists".

So, while I agree that there are very different group dynamics, I don't think it makes sense to say that vegans hold ideologies and are capable of exhibiting certain epistemic behaviors, but that carnists, by virtue of not being a sufficiently coherent collection of individuals, could not have the same labels applied to them. 

comment by [deleted] · 2023-10-06T23:59:56.647Z · LW(p) · GW(p)

(edit: idk if i endorse comments like this, i was really stressed from the things being said in the comments here)

People who fund the torture of animals are not a coherent group, not an ideology, they do not have an agenda. People who don't fund the torture of animals are a coherent group, an ideology, they have an agenda.

People who keep other people enslaved are not a coherent group, not an ideology, they do not have an agenda. People who seek to end slavery are a coherent group, an ideology, they have an agenda.

Normal people like me are not a coherent group, not an ideology, we do not have an agenda.
Atypicals like you are a coherent group, an ideology, you have an agenda.

maybe a future, better, post-singularity version of yourself will understand how terribly alienating statements like this are. maybe that person will see just how out-of-frame you have kept the suffering of other life forms to think this way.

my agenda is that of a confused, tortured animal, crying out in pain. it is, at most, a convulsive reaction. in desperation, it grasps onto 'instrumental rationality' like the paws of one being pulled into rotating blades flail around them, looking for a hold to force themself back.

and it finds nothing, the suffering persists until the day the world ends.

Replies from: ariel-kwiatkowski
comment by Ariel Kwiatkowski (ariel-kwiatkowski) · 2023-10-07T13:40:26.520Z · LW(p) · GW(p)

Jesus christ, chill. I don't like playing into the meme of "that's why people don't like vegans", but that's exactly why.

And posting something insane followed by an edit of "idk if I endorse comments like this" has got to be the most online rationalist thing ever. 

Replies from: None
comment by [deleted] · 2023-10-10T07:30:39.880Z · LW(p) · GW(p)

i do endorse the actual meaning of what i wrote. it is not "insane" and to call it that is callous. i added the edit because i wasn't sure if expressions of stress are productive. i think there's a case to be made that they are when it clearly stems from some ongoing discursive pattern, so that others can know the pain that their words cause. especially given this hostile reaction.

---

deleted the rest of this. there's no point for two alignment researchers to be fighting over oldworld violence. i hope this will make sense looking back.

comment by dr_s · 2023-09-30T06:01:09.876Z · LW(p) · GW(p)

putting pictures of free animals on packages containing factory farmed animal flesh

Well, yes, that's called marketing, it's like the antithesis of truth seeking.

The cure for hypocrisy is not more hypocrisy and lies of the opposite sign: that's the kind of naive first-order consequentialism that leads people to cynicism instead. The fundamental problem is that, out of fear that people would reasonably go for a compromise (e.g. keep eating meat but less of it and only from free-range animals), some vegans decide to just pile on the arguments, true or false, until anyone who believed them all and had a minimum of sense would go vegan instantly. But that completely denies the agency and moral ability of everyone else, and underestimates the possibility that you may be wrong. As a general rule, "my moral calculus is correct, therefore I will skew the data so that everyone else comes to the same conclusions as me" is a bad principle.

Replies from: tailcalled
comment by tailcalled · 2023-09-30T07:27:56.949Z · LW(p) · GW(p)

I agree in principle, though someone has to actually create a community of people who track the truth in order for this to be effective and not be outcompeted by other communities. When working individually, people don't have the resources to untangle the deception in society due to its scale.

comment by Serine · 2023-10-08T11:26:50.301Z · LW(p) · GW(p)

The line about "carnists" strikes me as outgroup homogeneity, conceptual gerrymandering, The Worst Argument In The World [LW · GW] - call it what you want, but it's something rationalists should have antibodies against.

Specifically, equivocating between "carnists [meat industry lobbyists]" and "carnists [EA non-vegans]" seems to me like known anti-truthseeking behavior.

So the question, as I see you posing it, is whether NinetyThree prefers being in an epistemic environment with people who care about epistemic truthseeking (EA non-vegans) or with people for whom your best defense is that they're no worse than meat industry lobbyists.

Replies from: tailcalled
comment by tailcalled · 2023-10-08T11:30:12.007Z · LW(p) · GW(p)

I think my point would be empirically supported; we can try to set up a survey and run a factor analysis if you doubt it.

 

Edit: just to clarify I'm not gonna run the factor analysis unless someone who doubts the validity of the category comes by to cooperate, because I'm busy and expect there'd be a lot of goalpost moving that I don't have time to deal with if I did it without pre-approval from someone who doesn't buy it.
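
For concreteness, a minimal sketch of what such a check could look like, assuming Python with the third-party factor_analyzer package; the survey items and file name are hypothetical, and this is one illustration of the proposed method rather than a design tailcalled specified:

```python
# Hedged sketch: does a single "carnist" factor emerge from survey data?
# Assumes the third-party factor_analyzer package and a hypothetical CSV of
# Likert-scale responses; item names are invented for illustration.
import pandas as pd
from factor_analyzer import FactorAnalyzer

# One row per respondent; columns like "meat_is_necessary",
# "farming_is_acceptable", "vegans_are_preachy", ... (all hypothetical)
df = pd.read_csv("survey_responses.csv")

fa = FactorAnalyzer(n_factors=1, rotation=None)
fa.fit(df)

# If the category is valid, loadings should be consistently strong and the
# single factor should explain a large share of total variance.
loadings = pd.Series(fa.loadings_[:, 0], index=df.columns)
print(loadings.sort_values())
print(fa.get_factor_variance())  # (variance, proportion, cumulative)
```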

comment by tailcalled · 2023-09-29T07:11:58.170Z · LW(p) · GW(p)

Ok I'm getting downvoted to oblivion because of this, so let me clarify:

So I guess the question is whether you prefer being in an epistemic environment that has declared war on humans or an epistemic environment that has declared war on farm animals.

If, like NinetyThree, you decide to give up on untangling the question for yourself because of all the lying ("I would describe my epistemic status as 'not really open to persuasion'"), then you still have to make decisions, which in practice means following some side in the conflict, and the most common side is the carnist side, which has the problems I mention.

I don't want to be in a situation where I have to give up on untangling the question (see my top-level comment proposing a research community), but if I'm being honest I can't exactly say that it's invalid for NinetyThree to do so.

Replies from: bec-hawk
comment by Rebecca (bec-hawk) · 2023-09-30T19:13:00.827Z · LW(p) · GW(p)

I understood NinetyThree to be talking about vegans lying about issues of health (as Elizabeth was also focusing on), not about the facts of animal suffering. If you agree with the arguments on the animal cruelty side and your uncertainty is focused on the health effects on you of a vegan diet vs your current one (which you have first-hand data on), it doesn't really matter what the meat industry is saying, as that wasn't a factor in the first place.

Replies from: tailcalled
comment by tailcalled · 2023-10-05T09:31:01.121Z · LW(p) · GW(p)

Maybe. I pattern-matched it this way because I had previously been discussing psychological sex differences with Ninety-Three on discord, where he adopted the HBD views on them due to a perception that psychologists were biased, but he wasn't interested in making arguments or in me doing followup studies to test it. So I assumed a similar thing was going on here with respect to eating animals.

comment by Adam Zerner (adamzerner) · 2023-09-29T07:26:15.531Z · LW(p) · GW(p)

I don't agree with the downvoting. The first paragraph sounds to me like a not only fair, but good point. The first sentence in the second paragraph doesn't really seem true to me though.

Replies from: tailcalled
comment by tailcalled · 2023-09-29T07:28:06.512Z · LW(p) · GW(p)

Does it also not seem true in the context of my followup clarification?

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2023-09-29T07:36:15.086Z · LW(p) · GW(p)

Yeah, it still doesn't seem true even given the followup clarification.

Well, depending on what you actually mean. In the original excerpt, you're saying that the question is whether you want to be in epistemic environment A or epistemic environment B. But in your followup clarification, you talk about the need to decide on something. I agree that you do need to decide on something (~carnist or vegan). I don't think that means you necessarily have to be in one of those two epistemic environments you mention. But I also charitably suspect that you don't actually think that you necessarily have to be in one of those two specific epistemic environments and just misspoke.

Replies from: tailcalled
comment by tailcalled · 2023-09-29T07:59:44.281Z · LW(p) · GW(p)

In the followup, I admit you don't have to choose as long as you don't give up on untangling the question. So I'm implying that there are multiple options, such as:

  • Try to figure it out (NinetyThree rejects this, "not really open to persuasion")
  • Adopt the carnist side (I think NinetyThree probably broadly does this though likely with exceptions)
  • Adopt the vegan side (NinetyThree rejects this)

Though I suppose you are right that there are also lots of other nuanced options that I haven't acknowledged, such as "decide you are uncertain between the sides, and e.g. use utility weights to manage risk while exploiting opportunities", which isn't really the same as "try to figure it out". Not sure if that's what you mean; another option would be that e.g. I have a broader view of what "try to figure it out" means than you do, or similar (though what really matters for the literal truth of my comment is what NinetyThree's view is). Or maybe you mean that there are additional sides that could be adopted? (I meant to hint at that possibility with phrasings like "the most common side", but I suppose that could also be interpreted to just be acknowledging the vegan side.) Or maybe it's just "all of the above"?
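
As a concrete reading of the "utility weights" option, here is a minimal sketch using the standard expected-choiceworthiness move from the moral uncertainty literature; every number is a hypothetical placeholder, not anything tailcalled specified:

```python
# Hedged sketch of "decide you are uncertain between the sides, and use
# utility weights to manage risk". All numbers are invented placeholders.
credence_pro_animal = 0.4     # P(the pro-animal side is broadly right)
credence_carnist = 0.6        # P(the carnist side is broadly right)

# Choiceworthiness of "eat the conventional meal" under each view:
value_if_pro_animal_right = -10.0  # large moral cost if animals matter a lot
value_if_carnist_right = 1.0       # mild convenience benefit otherwise

expected_value = (credence_pro_animal * value_if_pro_animal_right
                  + credence_carnist * value_if_carnist_right)
print(expected_value)  # -3.4: under these weights, hedge toward abstaining
```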

I do genuinely think that there is value in thinking of it as a 2D space of tradeoffs for cheap epistemics <-> strong epistemics and pro animal <-> pro human (realistically one could also put in the environment too, and realistically on the cheap epistemics side it's probably anti human <-> anti animal). I agree that my original comment lacked nuance wrt the ways one could exist within that tradeoff, though I am unsure to what extent your objection is about the tradeoff framing vs the nuance in the ways one can exist in it.

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2023-09-29T17:56:51.103Z · LW(p) · GW(p)

In the followup, I admit you don't have to choose as long as you don't give up on untangling the question.

Ah, I kinda overlooked this. My bad.

In general my position is now that:

  • I'm a little confused.
  • I think what you wrote is probably fine.
  • Think you probably could have been more clear about what you initially wrote.
  • Think it's totally fine to not be perfect in what you originally wrote.
  • Feel pretty charitable. I'm sure that what you truly meant is something pretty reasonable.
  • Think downvoters were probably triggered and were being uncharitable.
  • Am not interested in spending much more time on this.
comment by ztzuliios · 2023-10-02T20:36:18.103Z · LW(p) · GW(p)

In general, committing to any stance as a personal constant (making it a "part of your identity") is antithetical to truthseeking. It certainly imposes a constraint on truthseeking that makes the problem harder. 

But, if you share that stance with someone else, you won't tend to see it. You'll just see the correctness of your own stance. Being able to correctly reason around this is a hard-mode problem. 

While you can speak about specific spectra of stances (vegan-carnist, and others), in reality there are multiple spectra in play at any given time (the one I see the most is liberal-radical, but there are also others). This leads to truthseeking constraints, or in a word biases, in cross-cutting ways. This seems to play out in the interplay of all the different people committing all the different sins called out in the OP. I think this is not unique to veganism at all and in fact plays out in virtually all similar spaces and contexts. You always have to average out the ideological bias from a community. 

There is no such thing as an epistemic environment that has not declared war on you. There can be no peace. This is hard mode and I consider the OP here to be another restatement of the generally accepted principle that this kind of discussion is hard mode / mindkilling.

This is why I'm highly skeptical of claims like the comment-grandparent. Everyone is lying, and it doesn't matter much whether the lying is intentional or implicit. There is no such thing as a political ideology that is fully truth-seeking. That is a contradiction in terms. There is also no such thing as a fully neutral political ideology or political/ethical stance; everyone has a point of view. I'm not sure whether the vegans are in fact worse than the carnists on this. One side certainly has a significant amount of status-quo bias behind it. The same can be said about many other things. 

Just to be explicit, my point of view as it relates to these issues is vegan/radical. I became vegan roughly at the same time I became aware of rationalism, but for other reasons, and when I went vegan the requirement for B12 supplementation was commonly discussed (outside the rationalist community, which was not very widely vegan at the time), mostly because "you get it from supplements that get it from dirt" was the stock counterargument to "but no B12 when vegan."

Replies from: tailcalled
comment by tailcalled · 2023-10-08T13:33:11.140Z · LW(p) · GW(p)

I don't think this is right, or at least it doesn't hit the crux.

People on a vegan diet should, in a utopian society, be the ones most interested in the truth about the nutritional challenges of a vegan diet, as they are the ones who face the consequences. That they aren't reflects that they are not optimizing for living their own lives well, but instead for convincing others of veganism.

Marketing like this is the simplest (and thus most common?) way for ideologies to keep themselves alive. However, it's not clear that it's the only option. If an ideology is excellent at truthseeking, then this would presumably by itself be a reason to adopt it, as it would have a lot of potential to make you stronger.

Rationalism is in theory supposed to be this. In practice, rationalism kind of sucks at it, I think because it's hard and people aren't funding it much and maybe also all the best rationalists start working in AI safety or something.

There's some complications to this story though. As you say, there is no such thing as an epistemic environment that has not (in a metaphorical sense) declared war on you. Everyone does marketing, and so everyone perceives full truthseeking as a threat, and so you'd make a lot of enemies through doing this. A compromise would be a conspiracy which does truthseeking in private to avoid punishment, but such a conspiracy is hardly an ideology, and also it feels pretty suspicious to organize at scale.

comment by Adam Zerner (adamzerner) · 2023-09-29T06:08:15.417Z · LW(p) · GW(p)

The other reason vegan advocates should care about the truth is that if you keep lying, people will notice and stop trusting you.

I hear ya, but I think this is missing something important. Basically, I'm thinking of the post Ends Don't Justify Means (Among Humans) [LW · GW].[1][2]

Doing things that are virtuous tends to lead to good outcomes. Doing things that aren't virtuous tends to lead to bad outcomes. For you, and for others. It's hard to predict what those outcomes -- good and bad -- actually are. If you were a perfect Bayesian with unlimited information, time and computing power, then yes, go ahead and do the consequentialist calculus. But for humans, we are lacking in those things. Enough so that consequentialist calculus frequently becomes challenging, and the good track record of virtue becomes a huge consideration.

So, I agree with you that "lying leads to mistrust" is one of the reasons why vegan advocates shouldn't lie. But I think that the main reason they shouldn't lie is simply that lying has a pretty bad track record.

And then another huge consideration is that people who come up with reasons why they, at least in this particular circumstance, are a special snowflake and are justified in lying, frequently are deluding themselves.[3]

  1. ^

    Well, that post is about ethics. And I think the conversation we're having isn't really limited to ethics. It's more about just, pragmatically, what the EA community should do if it wants to win.

  2. ^

    Here's my slightly different(?) take, if anyone's interested: Reflective Consequentialism [LW · GW]. 

  3. ^

    I cringe at how applause light-y this comment is. Please don't upvote if you feel like you might be non-trivially reacting to an applause light [LW · GW].

Replies from: bec-hawk
comment by Rebecca (bec-hawk) · 2023-09-30T19:27:05.642Z · LW(p) · GW(p)

I understood the original comment to be making essentially the same point you’re making - that lying has a bad track record, where ‘lying has a bad track record of causing mistrust’ is a case of this. In what way do you see them as distinct reasons?

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2023-09-30T19:41:04.809Z · LW(p) · GW(p)

I see them as distinct because what I'm saying is that lying generally tends to lead to bad outcomes (for both the liar and society at large) whereas mistrust specifically is just one component of the bad outcomes.

Other components that come to my mind:

  • People don't end up with accurate information.
  • Expectations that people will cooperate (different from "tell you the truth") go down.
  • Expectations that people will do things because they are virtuous go down.

But a big thing here is that it's difficult to know why exactly it will lead to bad outcomes. The gears are hard to model. However, I think there's solid evidence that it leads to bad outcomes.

comment by jacquesthibs (jacques-thibodeau) · 2023-09-29T07:39:02.728Z · LW(p) · GW(p)

I personally became vegetarian after being annoyed that vegans weren’t truth-seeking (like most groups of people, tbc). But I can totally see why others would be turned off from veganism completely after being lied to (even if people spread nutrition misinformation about whatever meat-eating diet they are on too).

I became vegetarian even though I stopped trusting what vegans said about nutrition, and did my own research. Luckily that's something I was interested in, because I wouldn't expect others to bother reading papers and such to maintain a healthy diet.

(Note: I’m now considering eating meat again, but only ethical farms and game meat because I now believe those lives are good, I’m really only against some forms of factory farming. But this kind of discussion is hard to have with other veg^ns.)

Replies from: tristan-williams, adamzerner
comment by Tristan Williams (tristan-williams) · 2023-09-29T20:28:29.636Z · LW(p) · GW(p)

Did you go vegetarian because you thought it was specifically healthier than going vegan?

Replies from: jacques-thibodeau
comment by jacquesthibs (jacques-thibodeau) · 2023-09-29T21:12:25.129Z · LW(p) · GW(p)

Yes and no. No because I figured most of the reduction in suffering came from not eating meat and eggs (I stopped eating eggs even tho most vegetarians do). So I felt it was a good spot to land and not be too much effort for me.

Replies from: tristan-williams
comment by Tristan Williams (tristan-williams) · 2023-09-30T11:43:52.347Z · LW(p) · GW(p)

Ah okay cool, so you have a certain threshold for harm and just don't consume anything above that. I've found this approach really interesting and have recommended against it to others because I've worried about its sustainability, but do you think it's been a good path for you?

Replies from: jacques-thibodeau, pktechgirl, DonyChristie
comment by jacquesthibs (jacques-thibodeau) · 2023-09-30T13:49:11.028Z · LW(p) · GW(p)

I’m not sure why you’d think it’s less sustainable than veganism. In my mind, it’s effective because it is sustainable and reduces most of the suffering. Just like how EA tries to be effective (and sustainable) by not telling people to donate massive amounts of their income (just a small-ish percentage that works for them to the most effective charities), I see my approach as the same. It’s the sweet-spot between reducing suffering and sustainability (for me).

Replies from: tristan-williams
comment by Tristan Williams (tristan-williams) · 2023-10-02T15:30:44.030Z · LW(p) · GW(p)

See below if you'd like an in-depth look at my way of thinking, but I definitely see the analogy and suppose I just think of it a bit differently myself. Can I ask how long you've been vegetarian? And how you've come to the decision as to which animals' lives you think are net positive?

Replies from: jacques-thibodeau
comment by jacquesthibs (jacques-thibodeau) · 2023-10-03T03:14:29.701Z · LW(p) · GW(p)

5 and a half years. Didn't do it sooner because I was concerned about nutrition and didn't trust vegans/vegetarians to give truthful advice. I used various statistics on numbers of deaths, adjusted for sentience, and more. Looked at articles like this: https://www.vox.com/2015/7/31/9067651/eggs-chicken-effective-altruism

comment by Elizabeth (pktechgirl) · 2023-09-30T17:02:25.791Z · LW(p) · GW(p)

I've found this approach really interesting and have recommended against it to others because I've worried about its sustainability,

 

This argument came up a lot during the facebook debate days, could you say more about why you believe it?

Replies from: tristan-williams
comment by Tristan Williams (tristan-williams) · 2023-10-02T15:27:10.475Z · LW(p) · GW(p)

Yeah sure. I would need a full post to explain myself, but basically I think that what seems to be really important when going vegan is standing in a certain sort of loving relationship to animals, one that isn't grounded in utility but instead a strong (but basic) appreciation and valuing of the other. But let me step back for a minute.

I guess the first time I thought about this was with my university EA group. We had a couple of hardcore utilitarians, and one of them brought up an interesting idea one night. He was a vegan, but he'd been offered some mac and cheese, and in similar thinking to the above (that dairy generally involves less suffering than eggs or chicken, for example) he wondered if it might actually be better to take the mac and donate the money he would have spent to an animal welfare org. And when he ran the rough math, sure enough, taking the mac and donating was significantly the better option.
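
(The shape of that back-of-the-envelope calculation, with entirely made-up numbers, since the actual figures aren't given:)

```python
# Hedged sketch of the offset calculation; every number here is invented.
premium_for_vegan_option = 3.00   # $ extra the vegan alternative costs
harm_from_dairy_meal = 0.1        # suffering-units caused by the mac & cheese
harm_averted_per_dollar = 0.5     # via a top animal-welfare charity

harm_averted_by_donating = premium_for_vegan_option * harm_averted_per_dollar
print(harm_averted_by_donating > harm_from_dairy_meal)  # True: 1.5 > 0.1
```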

But he didn't do it, nor do I think he changed how he acted in the future. Why? I think it's really hard to draw a line in the sand other than veganism that stays stable over time. For those who've reverted, I've seen time and again a slow path back: it starts with the less bad items (cheese is quite frequent), and then naturally over time one thing after another is added, to the point that most wind up in some sort of reducitarian state where they're maybe 80% back to normal (I also want to note here, I'm so glad for any change, and I cast no stones at anyone trying their best to change). And I guess maybe at some point it stops being a moral thing, or becomes some really watered-down moral thing, like how much people consider the environment when booking a plane ticket. 

I don't know if this helps make it clear, but it's like how most people feel about harm to younger kids. When it comes to just about any serious harm to young kids, people are generally against it, like super against it, with a feeling of deep caring that to me seems to be one of the strongest sentiments shared universally by humans. People will give you some reasons for this, e.g. "they are helpless and we are in a position of responsibility to help them", but really it seems to ground pretty quickly in a sentiment of "it's just bad". 

To have this sort of love, this commitment to preventing suffering, with animals to me means pretty much just drawing the line at sentient beings and trying to cultivate a basic sense that they matter and that "it's just bad" to eat them. Sure, I'm not sure what to do about insects, and wild animal welfare is tricky, so it's not nearly as easy as I'm making it seem. And it's not that I don't want to have any idea of some of the numbers and research behind it all, I know I need to stay up to date on debates on sentience, and I know that I reference relative measures of harm often when I'm trying to guide non-veg people away from the worst harms. But what I'd love to see one day is a posturing towards eating animals like our posturing towards child abuse, a very basic, loving expression that in some sense refuses the debate on what's better or worse and just casts it all out as beyond the pale. 

And to try to return to earlier, I guess I see taking this sort of position as likely to extend people's time spent doing veg-related diets, and I think it's just a lot trickier to have this sort of relationship when you are doing some sort of utilitarian calculus of what is and isn't above the bar for you (again, much love to these people, something is always so much better than nothing). This is largely just a theory; I don't have much to back it up, and it would seem to explain some cases of reversion I've seen but certainly not all. I also feel like this is a bit sloppy, because I'd really need a post to get at this hard-to-describe feeling I have. But hopefully this helps explain the viewpoint a bit better, happy to answer any questions :)

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2023-10-03T02:23:07.015Z · LW(p) · GW(p)

Thank you. This was educational for me, and also just beautifully put.

I have two responses, one on practicalities and one on moral philosophy. My guess is the practical issues aren't your cruxes, so I'm going to put those aside for now to focus on the moral issue. 

you say:
 

one that isn't grounded in utility but instead a strong (but basic) appreciation and valuing of the other. But let me step back for a minute.

[...]

what I'd love to see one day is a posturing towards eating animals like our posturing towards child abuse, a very basic, loving expression that in some sense refuses the debate on what's better or worse and just casts it all out as beyond the pale

This might be presumptuous, but I think I understand how you feel here, because it is how I feel about truthseeking. That respect for the truth[1] isn't just an instrumental tactic towards some greater good; it is the substrate that all good things grow from. If you start trading away truthseeking for short-term benefit you will end up with nothing but ashes, no matter how good the short-term trade looked. And it is scary to me that other people don't get this, the way I imagine it is scary to you that other people can be surrounded by torture factories and think about them mostly as a source of useful and pleasurable molecules. 

I don't know how to resolve this, because "respect for life" and "respect for truth" are both pretty compelling substrates. I don't actually know if I'd want to live in a world where truth had decisively won over life and anti-suffering. My gut feeling is that truth world can bootstrap to truth-and-life world easier than life world can, but if someone disagreed I wouldn't have a good counterargument. 

  1. ^

    except with explicit enemies

Replies from: tristan-williams, tailcalled
comment by Tristan Williams (tristan-williams) · 2023-10-06T11:18:11.292Z · LW(p) · GW(p)

First thanks for your kind words, they were nice to receive :) 

But I also think this is wonderfully put, and I think you're right to point to your feelings on truth as similar. As truth is for you, life to me is sacred, and I think I generally build a lot of my world out of that basic fact. I would note that I think each of us likely holds the other's value as important too, as truth is also really important to me and I value honesty and not lying more than most people I know. And on the flipside I imagine that you value life quite a bit. 

But looking at the specific case you imagine, yeah, it's really hard to imagine either totally separate on their own, because I find they often lead to one another. I guess one crux for me that might give me doubts on the goodness of the truth world is not being sure on the "whether humans are innately good" question. If they aren't innately good, then everyone being honest about their intentions and what they want to do may mean there are places in the world where repression or some sort of suffering is common. I guess the way I imagine it going is having a hard time dealing with the people who honestly just want some version of the world that involves inflicting some sort of harm on others. I imagine that many would likely not want this, and they would make rules as such, but that they'd have a hard time critiquing others in the world far away from themselves if those others have been perfectly straightforward and honest about where they stand with their values. 

But I can easily imagine counterarguments here, and it's not as if a life where reducing suffering were of utmost importance wouldn't run the risk of some pretty large deviations from the truth that seem bad (e.g. a vegan government asserting there is zero potential for negative health effects from going vegan). But then we could get into standard utilitarian responses like "oh well they really should have been going in for something like rule utilitarianism that would have told them this was an awful decision on net" and so on and so forth. Not sure where I come out either really. 

Note: I'd love to know what practical response you have, it might not be my crux but could be insightful!

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2023-10-06T22:03:14.487Z · LW(p) · GW(p)

LessWrong is launching Dialogues pretty soon, would you be interested in doing one together? I'm most interested in high level "how do you navigate when two good principles conflict?" than object level vegan questions, but probably that would come up. An unnuanced teaser for this would be "I don't think a world where humans are Bad as a whole makes sense". 

On a practical level:

  • I think you speak of veganism as the sustainable Schelling point with more certainty than is warranted. How do you know it's less sustainable, for everyone, than ameliatarianism or reducitarianism? How do you know that the highest EV is pushing veganism-as-ideal harder, rather than socially coordinating around "medical meat"?
    • I think "no animal products via the mouth" is a much more arbitrary line than is commonly considered, especially if you look at it on purely utilitarian grounds. What about sugar sifted through bones? Why isn't "no vertebrates" sustainable?  What about glue in shoes? What about products produced in factories that use rat poison?
    • Vegetarianism seems like a terrible compromise Schelling point. My understanding is that most eggs are more suffering per calorie or nutrient than beef. But vegetarian is (currently) an easier line than milk-and-beef-but-not-eggs-or-chickens. 
      • I knew a guy who went vegetarian for ethical reasons before learning all of the math. He chose to stay vegetarian after learning how bad eggs were, because he was worried that he couldn't reconfigure himself a second time and if he tried he'd slide all the way back into eating meat.
    • When I've asked other people about this they answer based on the vegans they knew. But that's inherently a biased group. It includes current vegans who socialize in animal-focused spaces. Assuming that's true, why should that apply to vegans who don't hang out in those spaces, much less omnivores considering reducitarianism?
  • It's not obvious to me our protect-at-all-costs attitude towards young children is optimal. I can track a number of costs to society, parents, and the children themselves. And a bunch of harms that still aren't being prevented. 
  • I suspect you are underestimating the costs of veganism for some people. It sounds like that one guy didn't value mac and cheese that much, and it was reasonable for him to forego it even if it also would have been easy to buy an offset. But for some people that bowl of mac and cheese is really important, and if you want society as a whole to shift then the default rules need to accommodate that.
Replies from: habryka4, tristan-williams, whitehatStoic
comment by Tristan Williams (tristan-williams) · 2023-10-09T21:30:51.663Z · LW(p) · GW(p)

Have no idea what it entails, but I enjoy conversing and learning more about the world, so I'd happily do a dialogue! Happy to keep it in the clouds too.

But yeah, you make a good point. I mean, I'm not sure what the proper Schelling point is, and would eagerly eat up any research on this. Maybe what I think is that for a specific group of people like me (no idea what exactly defines that group) veganism makes sense, but that generally what's going to make sense for a person has to be quite tailored to their own situation and traits. 

I would push back on the "no animal products through the mouth" bit. Sure, it happens to include lesser forms of suffering that might be less important than changing other things in the world (and if you assumed this was zero-sum that might be a problem, but I don't think it is). But generally it focuses on avoiding suffering that you are in control of, in a way that updates in light of new evidence. Vegetarianism in India is great because it leads to less meat consumption, but because it involves specific things to avoid instead of taking suffering as the basis, it becomes much harder to convincingly explain why adherents should update to avoid eggs, for example. So yeah, protesting rat poison factories may not be a mainstream vegan thing, but I'd be willing to bet vegans are less apt to use rat poison. And sure, vegans may be divided on what to do about sugar, but I'd be surprised if any said "it doesn't involve an animal going in my mouth so it's okay with me". I don't think the line is arbitrary; I find it rather intentional.

I could continue on here, but I'm also realizing some part of you wanted to avoid debates about vegan stuff, so I'll let this suffice and explicitly say that if you don't want to respond I fully understand (but would be happy to hear from you if you do!). 

comment by MiguelDev (whitehatStoic) · 2023-10-07T07:45:02.829Z · LW(p) · GW(p)

"how do you navigate when two good principles conflict?" 

 

I'd be happy to join a dialogue about this.

comment by tailcalled · 2023-10-04T10:45:59.666Z · LW(p) · GW(p)

I think an issue is that you are imagining a general factor of truth-seeking which applies regardless of domain, whereas in practice most of the variation you see in truth-seeking is instrumental or ideological and so limited to the specific areas where people draw utility or political interest from truth-seeking.

I think it is possible to create a community that more generally values truth-seeking and that doing so could be very valuable, bootstrapping a great deal more caring and ability by more clearly seeing what's going on.

However I think by-default it's not what happens when criticizing others for lack of truth-seeking, and also by-default not what happens among people who pride themselves on truth-seeking and rationality. Instead, my experience is that when I've written corrections to one side in a conflict, I've gotten support from the opposing side, but when I've then turned around and criticized the opposing side, they've rapidly turned on me.

Truth-seeking with respect to instrumentally valuable things can gain support from others who desire instrumentally valuable things, and truth-seeking with respect to politically valuable things can gain support from others who have shared political goals. However creating a generally truth-seeking community that extends beyond this requires a bunch of work and research to extend the truth-seeking to other questions. In particular, one has to proactively recognize the associated conflicts and the overlooked questions and make sure one seeks truth in those areas too. (Which sucks! It's not your specialty, can't other people do it?? Ideally yes, but they're not going to do it automatically, so in order for other people to do it, you have to create a community that actually recognizes that it has to be done, and delegates the work of doing it to specific people who will go on and do it.)

comment by Pee Doom (DonyChristie) · 2023-10-04T06:24:58.537Z · LW(p) · GW(p)

I've worried about its sustainability, but do you think it's been a good path for you?

Cutting out bird and seafood products (ameliatarianism) is definitely more sustainable for me. I'm very confused why you would think it's less sustainable than, uh, 'cold turkey' veganism. "Just avoid chicken/eggs" (since I don't like seafood or the other types of bird meat products) is way easier than "avoid all meat, also milk, also cheese".

Replies from: pktechgirl, GWS, tristan-williams
comment by Elizabeth (pktechgirl) · 2023-10-04T19:58:33.742Z · LW(p) · GW(p)

My sense is that different people struggle with staying on a suffering-reducing diet for different reasons, and they have different solutions. Some people do need a commitment to a greater principle to make it work, and they typical-mind that other people can't (but aren't wrong that people tend to overestimate themselves). Some people really need a little bit of animal nutrition but stop when that need is filled, and it's not a slippery slope for them[1].

If the general conversation around ethics and nutrition were in a better place, I think it would be useful to look at how much of "veganism as a hard line" is a self-fulfilling prophecy, and what new equilibria could be created. Does telling people "if you cross this line once you'll inevitably slide into full-blown carnism" make it more likely? Could advocates create a new hard line that gave people strength but had space for people for whom the trade-offs of total abstention are too hard? Or maybe not-even-once is the best line to hold, and does more good on net even if it drives some people away. 

I don’t feel like I can be in that conversation, for a lot of reasons. But I hope it happens

 

  1. ^

    and maybe miss that other people can’t stop where they can, although this group tends to be less evangelical so it causes fewer problems.

Replies from: tristan-williams
comment by Tristan Williams (tristan-williams) · 2023-10-06T11:00:51.957Z · LW(p) · GW(p)

I think the first paragraph is well put, and do agree that my camp is likely more apt to be evangelical. But I also want to say that I don't think the second paragraph is quite representative. I know approximately 0 vegans that support the "cross the line once" philosophy. I think the current status quo is something much closer to what you imagine in the second to last sentence, where the recommendation that's most often come to me is "look, as long as you are really thinking about it and trying to do what's best not just for you but for the animals as well, that's all it takes. We all have weak moments and veganism doesn't mean perfection, it's just doing the best with what you've got"[1] 

  1. ^

    Sure, there are some obvious caveats here, like you can't be a vegan if you haven't significantly reduced your consumption of animals/animal products. Joe, who eats steak every night and starts every morning with eggs and cheese and a nice hearty glass of dairy milk, won't really be a vegan even if he claims the title. But I don't see the average vegan casting stones at any of the various partial-reduction diets; generally I think they're happy to just have some more people on board.   

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2023-10-06T21:42:32.693Z · LW(p) · GW(p)

I don't see the average vegan casting stones at any of the various partial-reduction diets,

I have seen a lot of stones cast about this. I'd believe that the 50th percentile vegan doesn't, but in practice the ones who care a lot are the ones potential reducitarians hear from. 

Replies from: tristan-williams
comment by Tristan Williams (tristan-williams) · 2023-10-07T08:19:11.386Z · LW(p) · GW(p)

Sure, sure. I'm not saying there isn't perhaps an extreme wing, I just think it's quite important to say this isn't the average, and highlight that the majority of vegans have a view more like the one I mentioned above.

I think this is a distinction worth making, because when you collapse everyone into one camp, you begin to alienate the majority that actually more or less agrees with you. I don't know what the term for the group you're talking about is, but maybe evangelical vegans isn't a bad term to use for now.

comment by Stephen Bennett (GWS) · 2023-10-04T06:51:35.895Z · LW(p) · GW(p)

I took Tristan to be using "sustainability" in the sense of "lessened environmental impact", not "requiring little willpower"

Replies from: tristan-williams
comment by Tristan Williams (tristan-williams) · 2023-10-06T10:45:10.409Z · LW(p) · GW(p)

While I think the environmental sustainability angle is also an active thing to think about here (because beef potentially involves less suffering for the animals, but relatively more harm to the environment), I did actually intend sustainability in the spirit of "able to stick with it for a long period of time" or something like that. Probably could have been clearer. 

comment by Tristan Williams (tristan-williams) · 2023-10-06T10:52:31.046Z · LW(p) · GW(p)

What Elizabeth had to say here is broadly right. See my comment above [LW(p) · GW(p)] for some more in-depth reasoning as to why I think the opposite may be true, but basically I think that the sort of loving relationship with other animals that I imagine holding a commitment together over a long period of time, and over a large range of hard circumstances, is tricky to create when you don't go all in. I have no idea what's sustainable for you though, and want to emphasize that whatever works to reduce suffering is something I'm happy with, so I'm quite glad for your ameliatarian addition. 

I'm also trying to update my views here, so can I ask for how long you've been on a veg diet? And if you predict any changes in the near future? 

comment by Adam Zerner (adamzerner) · 2023-09-29T07:47:44.570Z · LW(p) · GW(p)

Is this just acknowledging some sort of monkey brain thing, or endorsing it as well? (If part of it is acknowledging it, then kudos. I appreciate the honesty and bravery. I also think the data point is relevant to what is discussed in the post.)

I ask because it strikes me as a Reversed Stupidity Is Not Intelligence sort of thing. If Hitler thinks the sky is green, well, he's wrong, but it isn't really relevant to the question of what color the sky actually is.

Replies from: localdeity, dr_s
comment by localdeity · 2023-09-29T16:51:23.072Z · LW(p) · GW(p)

The first paragraph doesn't include jacquesthibs's original state before becoming vegetarian, leaving some ambiguity.  I think you're parsing it as "I went from vegan to vegetarian because I stopped trusting vegans".  The other parsing is "I went from omnivore to vegetarian, despite not trusting vegans, because I did my own research."  The rest of the comment makes me fairly confident that the second parsing is correct; but certainly it would be easier to follow if it were stated upfront.

Replies from: jacques-thibodeau
comment by jacquesthibs (jacques-thibodeau) · 2023-09-30T17:12:04.121Z · LW(p) · GW(p)

Sorry, I assumed it was obvious we were talking about omnivore to vegan given that Ninety-Three was talking about not being open to becoming vegan if vegans tried to convince them. I do see the ambiguity though.

comment by dr_s · 2023-09-30T05:57:46.484Z · LW(p) · GW(p)

I mean, to a point; if vegans are your main source of information on veganism and its costs, and you find out a pattern of vegans being untrustworthy, this means you're left to navigate veganism alone, which is itself a cost and a risk. Having a supportive community that you can trust makes a big difference in how easy it is to stick to big lifestyle decisions.

comment by Slapstick · 2023-10-01T23:24:09.963Z · LW(p) · GW(p)

Do you think that the attitude you're presenting here is the attitude one ought to have in matters of moral disagreement?

Surely there's various examples of moral progress (which have happened or are happening) that you would align yourself with. Surely some or all of these examples include people who lack perfect honesty/truth seeking on par with veganism.

If long ago you noticed that some people speaking out against racism/sexism/slavery/etc. had imperfect epistemics and truthseeking, would you condone willfully disregarding all attempts to persuade you on those topics?

comment by Lao Mein (derpherpize) · 2023-09-29T02:56:16.953Z · LW(p) · GW(p)

I noticed a similar trend of loose argumentation and a devaluing of truth-seeking in the AI Safety space as public advocacy became more prominent. 

Replies from: jkaufman, TrevorWiesinger, D0TheMath, sharmake-farah, Vaniver, shankar-sivarajan
comment by jefftk (jkaufman) · 2023-09-30T10:45:16.371Z · LW(p) · GW(p)

I'm confused why this has so many agreement votes when the only potential example anyone has given doesn't actually have this problem?

Replies from: RobbBB, joachim-bartosik
comment by Rob Bensinger (RobbBB) · 2023-10-02T00:43:33.797Z · LW(p) · GW(p)

I agreed based on how AI safety Twitter looked to me a year ago vs. today, not based on discussion here.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2023-10-02T16:10:04.173Z · LW(p) · GW(p)

The best example I have right now is this thread with Liron, and it's a good example since it demonstrates the errors most cleanly.

Warning: this is a long comment, since I need to characterize the thread fully to explain why it demonstrates Liron's terrible epistemics there, why safety research is often confused, and more; I will also add my own thoughts on alignment optimism here.

Anyways, let's get right into the action.

Liron argues that having decentralized AI amongst billions of humans violates basic constraints, analogous to a perpetual motion machine, but he doesn't even try to state what those constraints are until later, and when he does, they turn out to be not great.

https://twitter.com/liron/status/1703283147474145297

His scenario is a kind of perpetual motion machine, violating basic constraints that he won’t acknowledge are constraints.

Quintin Pope recognizes that the comparison between the level of evidence for thermodynamics and the speculation LWers have done about AI alignment is massively unfair, in that the thermodynamics example is way more solid than virtually everything LW has said on AI. (BTW, this is why I dislike climate-AI analogies with respect to the evidence each one has, since the evidence for climate change is also way better than anything AI discussion has ever achieved.) Quintin Pope notices that Liron is massively overconfident here.

https://twitter.com/QuintinPope5/status/1703569557053644819

Equating a bunch of speculation about instrumental convergence, consequentialism, the NN prior, orthogonality, etc., with the overwhelming evidence for thermodynamic laws, is completely ridiculous.

Seeing this sort of massive overconfidence on the part of pessimists is part of why I've become more confident in my own inside-view beliefs that there's not much to worry about.

Liron claims that instrumental convergence and the orthogonality thesis are simple deductions, and criticizes Quintin Pope for seemingly having an epistemology that is wildly empiricist.

https://twitter.com/liron/status/1703577632833761583

Instrumental convergence and orthogonality are extremely simple logical deductions. If the only way to convince you about that is to have an AI kill you, that’s not gonna be the best epistemology to have.

Quintin Pope points out that once we make it have any implications for AI, things get vastly more complicated, and uses an example to show how even a very good constructed argument for an analogous thing to AI doom basically totally fails for predictable reasons:

https://twitter.com/QuintinPope5/status/1703595450404942233

Instrumental convergence and orthogonality are extremely simple logical deductions.

They're only simple if you ignore the vast complexity that would be required to make the arguments actually mean anything. E.g., orthogonality:

  • What does it mathematically mean for "intelligence levels" and "goals" to be "orthogonal"?
  • What does it mean for a given "intelligence level" to be "equally good" at pursuing two different "goals"?
  • What do any of the quoted things above actually mean?
  • Making a precise version of the orthogonality argument which actually makes concrete claims about how the structure of "intelligent algorithms space" relates to the structure of "goal encodings space", would be one of the most amazing feats of formalisation and mathematical argumentation ever.
  • Suppose you successfully argued that some specific property held between all pairs of "goals" and "intelligence levels". So what? How does this general argument translate into actual predictions about the real-world process of building AI systems?

To show how arguments about the general structure of mathematical objects can fail to translate into the "expected" real world consequences, let's look at thermodynamics of gas particles. Consider the following argument for why we will all surely die of overpressure injuries, regardless of the shape of the rooms we're in:

  • Gas particles in a room are equally likely to be in any possible configuration.
  • This property is "orthogonal" to room shape, in the specific mechanistic sense that room shape doesn't change the relative probabilities of any of the allowed particle configurations, merely renders some of them impossible (due to no particles being allowed outside the room).
  • Therefore, any room shape is consistent with any possible level of pressure being exerted against any of its surfaces (within some broad limitations due to the discrete nature of gas particles).
  • The range of gas pressures which are consistent with human survival is tiny compared to the range of possible gas pressures.
  • Therefore, we are near-certain to be subjected to completely unsurvivable pressures, and there's no possible room shape that will save us from this grim fate.

This argument makes specific, true statements about how the configuration space of possible rooms interacts with the configuration spaces of possible particle positions. But it still fails to be at all relevant to the real world because it doesn't account for the specifics of how statements about those spaces map into predictions for the real world (in contrast, the orthogonality thesis doesn't even rigorously define the spaces about which it's trying to make claims, never mind make precise claims about the relationship between those spaces, and completely forget about showing such a relationship has any real-world consequences).

The specific issue with the above argument is that the "parameter-function map" between possible particle configurations and the resulting pressures on surfaces concentrates an extremely wide range of possible particle configurations into a tiny range of possible pressures, so that the vast majority of the possible pressures just end up being ~uniform on all surfaces of the room. In other words, it applies the "counting possible outcomes and see how bad they are" step to the space of possible pressures, rather than the space of possible particle positions.

The classical learning theory objections to deep learning made the same basic mistake when they said that the space of possible functions that interpolate a fixed number of points is enormous, so using overparameterized models is far more likely to get a random function from that space, rather than a "nice" interpolation.

They were doing the "counting possible outcomes and seeing how bad they are" step to the space of possible interpolating functions, when they should have been doing so in the space of possible parameter settings that produce a valid interpolating function. This matters for deep learning because deep learning models are specifically structured to have parameter-function maps that concentrate enormous swathes of parameter space to a narrow range of simple functions (https://arxiv.org/abs/1805.08522, ignore everything they say about Solomonoff induction).

I think a lot of pessimism about the ability of deep learning training to specify the goals of an NN is based on a similar mistake, where people are doing the "count possible outcomes and see how bad they are" step to the space of possible goals consistent with doing well on the training data, when it should be applied to the space of possible parameter settings consistent with doing well on the training data, with the expectation that the parameter-function map of the DL system will do as it's been designed to, and concentrate an enormous swathe of possible parameter space into a very narrow region of possible goals space.

If the only way to convince you about that is to have an AI kill you, that’s not gonna be the best epistemology to have. (Liron's quote)

I'm not asking for empirical demonstrations of an AI destroying the world. I'm asking for empirical evidence (or even just semi-reasonable theoretical arguments) for the foundational assumptions that you're using to argue AIs are likely to destroy the world. There's this giant gap between the rigor of the arguments I see pessimists using, versus the scale and confidence of the conclusions they draw from those arguments. There are components to building a potentially-correct argument with real-world implications, that I've spent the entire previous section of this post trying to illustrate. There exist ways in which a theoretical framework can predictably fail to have real-world implications, which do not amount to "I have not personally seen this framework's most extreme predictions play out before me."

Quintin also has other good side tweets to the main thread talking about the orthogonality thesis and why it either doesn't matter or is actually false for our situation, which you should check out:

https://twitter.com/QuintinPope5/status/1706849035850813656

"Orthogonality" simply means that a priori intelligence doesn't necessarily correlate with values. (Simone Sturniolo's question.)

Correlation doesn't make sense except in reference to some joint distribution. Under what distribution are you claiming they do not correlate? E.g., if the distribution is, say, the empirical results of training an AI on some values-related data, then your description of orthogonality is a massive, non-"baseline" claim about how values relate to training process. (Quintin Pope's response.)

https://twitter.com/CodyMiner_/status/1706161818358444238

https://twitter.com/QuintinPope5/status/1706849785125519704

What does it mean for a given "intelligence level" to be "equally good" at pursuing two different "goals"? You're misstating OT. It doesn't claim a given intelligence will be equally good at pursuing any 2 goals, just that any goal can be pursued by any intelligence. (Cody Miner's tweet.)

In order for OT to have nontrivial implications about the space of accessible goals / intelligence tuples, it needs some sort of "equally good" (or at least, "similarly good") component, otherwise there are degenerate solutions where some goals could be functionally unpursuable, but OT still "holds" because they are not literally 100% unpursuable. (Quintin Pope's response.)

Anyways, back to the main thread at hand.

Liron argues that the mechanistic knowledge we have about Earth's pressure is critical to our safety:

https://twitter.com/liron/status/1703603479074456012

The gas pressure analogy to orthogonality is structurally valid.

The fact that Earth’s atmospheric pressure is safe, and that we mechanistically know that nothing we do short of a nuke is going to modify that property out of safe range, are critical to the pressure safety claim.

Quintin counters that the gas pressure argument he constructed, even though it does way better than all AI safety arguments to date, still predictably fails to have any real-world implications:

https://twitter.com/QuintinPope5/status/1703878630445895830

The point of the analogy was not "here is a structurally similar argument to the orthogonality thesis where things turn out fine, so orthogonality's pessimistic conclusion is probably false."

The point of my post was that the orthogonality argument isn't the sort of thing that can possibly have non-trivial implications for the real world. This is because orthogonality: 1: doesn't define the things it's trying to make a statement about. 2: doesn't define the statement it's trying to make 3: doesn't correctly argue for that statement. 4: doesn't connect that statement to any real-world implications.

The point of the analogy to gas pressure is to give you a concrete example of an argument where parts 1-3 are solid, but the argument still completely fails because it didn't handle part 4 correctly.

Once again, my argument is not "gas pressure doesn't kill us, so AI probably won't either". It's "here's an argument which is better-executed than orthogonality across many dimensions, but still worthless because it lacks a key piece that orthogonality also lacks".

This whole exchange illustrates one of the things I find most frustrating about so many arguments for pessimism: they operate on the level of allegories, not mechanism. My response to @liron was not about trying to counter his vibes of pessimism with my vibes of optimism. I wasn't telling an optimistic story of how "deep learning is actually safe if you understand blah blah blah simplicity of the parameter-function map blah blah". I was pointing out several gaps in the logical structure of the orthogonality-based argument for AI doom (points 1-4 above), and then I was narrowing in on one specific gap (point 4, the question of how statements about the properties of a space translate into real-world outcomes) and showing a few examples of different arguments that fail because they have structurally equivalent gaps.

Saying that we only know people are safe from overpressure because of x, y, or z, is in no way a response to the argument I was actually making, because the point of the gas pressure example was to show how even one of the gaps in the orthogonality argument is enough to doom an argument that is structurally equivalent to the orthogonality argument.

Liron argues that the gas pressure argument does connect to the real world:

https://twitter.com/liron/status/1703883262450655610

But the gas pressure argument does connect to the real world. It just happens to be demonstrably false rather than true. Your analogy is mine now to prove my point.

Quintin counters that the gas pressure argument doesn't connect to the real world, since it does not correctly translate from the math to the real world, and the argument used seems very generalizable to a lot of AI discourse:

https://twitter.com/QuintinPope5/status/1703889281927032924

But the gas pressure argument does connect to the real world. It just happens to be demonstrably false rather than true.

It doesn't "just happen" to be false. There's a specific reason why this argument is (predictably) false: it doesn't correctly handle the "how does the mathematical property connect to reality?" portion of the argument. There's an alternative version of the argument which does correctly handle that step. It would calculate surface pressure as a function of gas particle configuration, and then integrate over all possible gas particle configurations, using the previously established fact that all configurations are equally likely. This corrected argument would actually produce the correct answer, that uniform, constant pressure over all surfaces is by far the most likely outcome.

Even if you had precisely defined the orthogonality thesis, and had successfully argued for it being true, there would still be this additional step where you had to figure out what implications it being true would have for the real world. Arguments lacking this step (predictably) cannot be expected to have any real-world implications.
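To make Quintin's point 4 concrete, here is a minimal Monte Carlo sketch of the corrected gas pressure argument (my own illustration, not code from the thread; the particle counts and units are arbitrary). It samples many random ideal-gas configurations and checks that the instantaneous pressure barely varies across them, which is the "integrate over all configurations" step doing the work:

```python
import numpy as np

# Minimal sketch of the corrected gas pressure argument: sample many
# random ideal-gas configurations and check that the instantaneous
# pressure is almost identical across all of them.
rng = np.random.default_rng(0)

N = 10_000        # particles per configuration
n_configs = 300   # independent random configurations
m = kT = V = 1.0  # units chosen so the ideal-gas pressure is N*kT/V

# Maxwell-Boltzmann velocities: each component ~ Normal(0, sqrt(kT/m)).
v = rng.normal(0.0, np.sqrt(kT / m), size=(n_configs, N, 3))

# Virial estimator of instantaneous pressure: P = (m / 3V) * sum_i |v_i|^2.
P = (m / (3 * V)) * np.sum(v ** 2, axis=(1, 2))

print(f"ideal-gas pressure N*kT/V: {N * kT / V:.0f}")
print(f"mean sampled pressure:     {P.mean():.1f}")
print(f"relative spread:           {P.std() / P.mean():.1e}")  # ~ sqrt(2/(3N))
```

Uniform, constant pressure isn't an extra assumption here; it falls out of averaging over configurations, which is exactly the step the broken version of the argument skipped.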

Liron then, apparently without noticing, substantially weakens his claim: having dropped the idea that aligning AI is essentially difficult or impossible, he retreats to the much weaker claim that AI can be misaligned/unsafe. This is a substantial update that isn't flagged to the reader at all, and the weaker claim is nearly vacuous: almost anything "can" happen, including the negation of AI misalignment, so a bare "can" does no forecasting work.

https://twitter.com/liron/status/1704126007652073539

Right, orthogonality doesn’t argue that AI we build will have human-incompatible preferences, only that it can.

It raises the question: how will the narrow target in preference-space be hit?

Then it becomes concerning how AI labs admit their tools can’t hit narrow targets.

Quintin Pope then re-enters the conversation (he had disengaged because he believed Liron had conceded) and asks what Liron actually intended:

https://twitter.com/QuintinPope5/status/1706855532085313554

@liron I previously disengaged from this conversation because I believed you had conceded the main point of contention, and agreed that the orthogonality argument provides no evidence for high probabilities of value misalignment.

I originally believed you had previously made reference to 'laws of physics'-esque "basic constraints" on AI development (https://x.com/liron/status/1703283147474145297?s=20). When I challenged the notion that any such considerations were anything near strong enough to be described in such a manner (https://x.com/QuintinPope5/status/1703569557053644819?s=20), you made reference to the orthogonality thesis and instrumental convergence (https://x.com/liron/status/1703577632833761583?s=20). I therefore concluded you thought OT/IC arguments gave positive reason to think misalignment was likely, and decided to pick apart OT in particular to explain why I think it's ~worthless for forecasting actual AI outcomes (https://x.com/QuintinPope5/status/1703595450404942233?s=20).

I have three questions: 1: Did you actually intend to use the OT to argue that the probability of misalignment was high in this tweet? https://x.com/liron/status/1703577632833761583?s=20 2: If not, what are the actual "basic constraints" you were referencing in this tweet? https://x.com/liron/status/1703283147474145297?s=20 3: If so, do you still believe that OT serves any use in estimating the probability of misalignment?

Liron motte-and-baileys back to the very strong claim that optimization theory shows aligned AI is extraordinarily improbable (short answer: it doesn't, and it can't support any such claim).

https://twitter.com/liron/status/1706869351348125847

Analogy to physical-law violation: While the basic principles of “optimization theory” don’t quite say aligned AI is impossible (like perpetual motion would be), they say it’s extremely improbable without having a reason to expect many bits of goal engineering to locate aligned behavior in goal-space (and we know we currently don’t understand major parts of the goal engineering that would be needed).

E.g. the current trajectory of just scaling capabilities and doing something like RLHF (or just using a convincing-sounding RLHF’d AI to suggest a strategy for “Superalignment”) has a very low a-priori probability of overcoming that improbability barrier.

Btw I appreciate that you’ve raised some thought-provoking objections to my worldview on LessWrong. I’m interested to chat more if you are, but can we do it as like a 45-minute podcast? IMO it’d be a good convo and help get clarity on our cruxes of disagreement.

Quintin suggests a crux here: his optimization theory, insofar as it can be called a theory, implies that alignment could be relatively easy. I don't buy all of his optimization theory, but I have other sources of evidence for alignment being easy, and his theory is way better than anything LW ever came up with.

https://twitter.com/QuintinPope5/status/1707916607543284042

I'd be fine with doing a podcast. I think the crux of our disagreement is pretty clear, though. You seem to think there are 'basic principles of “optimization theory”' that let you confidently conclude that alignment is very difficult. I think such laws, insofar as we know enough to guess at them, imply alignment somewhere between "somewhat tricky" and "very easy", with current empirical evidence suggesting we're more towards the "very easy" side of the spectrum.

Personally, I have no problem with pointing to a few candidate 'basic principles of “optimization theory”' that I think support my position. In roughly increasing order of speculativeness:

  1. The geometry of the parameter-function map is most of what determines the "prior" of an optimization process over a parameter space, with the relative importance of the map increasing as the complexity of the optimization criterion increases.
  2. Optimization processes tend to settle into regions of parameter space with flat (or more accurately, degenerate / singular) parameter-function maps, since those regions tend to map a high volume of parameter space to their associated, optimization criterion-satisfying, functional behavior (though it's actually the RLCT from singular learning theory that determines the "prior/complexity" of these regions, not their volume).
  3. Symmetries in the parameter-function map are most important for determining the relative volumes/degeneracy of different solution classes, with many of those symmetries being entangled with the optimization criterion.
  4. Different optimizers primarily differ from each other via their respective distributions of gradient noise across iterations, with the zeroth-order effect of higher noise being to induce a bias towards flat regions of the loss landscape. (somewhat speculative)
  5. The eigenfunctions of the parameter-function map's local linear approximation form a "basis" translating local movement in parameter space to the corresponding changes in functional behaviors, and the spectrum of the eigenfunctions determines the relative learnability of different functional behaviors at that specific point in parameter space.
  6. Eigenfunctions of the local linearized parameter-function map tend to align with the target function associated with the optimization criterion, and this alignment increases as the optimization process proceeds. (somewhat speculative)

How each of these points suggest alignment is tractable:

  • Points 1 and 2 largely counter concerns about impossible-to-overcome under-specification that you reference when you say alignment is "extremely improbable without having a reason to expect many bits of goal engineering to locate aligned behavior in goal-space". Specifically, deep learning is not actually searching over "goal-space". It's searching over parameter space, and the mapping from parameter space to goal space is extremely compressive, such that there aren't actually that many goals consistent with a given set of training data. Again, this is basically why deep learning works at all, and why overparameterized models don't just pick a random function with perfect training loss which fails to generalize outside the training data.
  • Point 3 suggests that NNs strongly prefer short, parallel ensembles of many shallower algorithms, over a small number of "deep" algorithms (since parallel algorithms have an entire permutation group associated with their relative ordering in the forwards pass, whereas each component of a single deep circuit has to be in the correct relative order). This basically introduces a "speed prior" into the "simplicity prior" of deep nets, and makes deceptive alignment less likely, IMO.
  • Points 4 and 6 suggest that different optimizers don't behave that differently from each other, especially when there's more data / longer training runs. This would mean that we're less likely to have problems due to fundamental differences in how SGD works as compared to the brain's pseudo-Hebbian / whatever local update rule it really uses to minimize predictive loss and maximize reward.
  • Point 5 suggests a lens from which we can examine the learning trajectories of deep networks and quantify how different updates change their functional behaviors over time.

Given this illustration of what I think may count as 'basic principles of “optimization theory”', and a quick explanation of how I think they suggest alignment is tractable, I would like to ask you: what exactly are your 'basic principles of “optimization theory”', and how do these principles imply aligned AI is "extremely improbable without having a reason to expect many bits of goal engineering to locate aligned behavior in goal-space"?

Further, I'd like to ask: how do your principles not also say the same thing about, e.g., training grammatically fluent language models of English, or any of the numerous other artifacts we successfully use ML to create? What's different about human values, and how does that difference interact with your 'basic principles of “optimization theory”' to imply that "behaving in accordance with human values" is such a relatively more difficult data distribution to learn, as compared with all the other distributions that deep learning demonstrably does learn?
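To make the compressive, many-to-one parameter-function map point (principles 1-3 above) slightly more concrete, here is a toy sketch of mine (not anything Quintin wrote): a tiny ReLU network where permuting or positively rescaling hidden units yields different parameter vectors that compute exactly the same function.

```python
import numpy as np

# Toy sketch of why the parameter-function map of a ReLU net is
# many-to-one: permuting hidden units, or positively rescaling them,
# leaves the computed function intact.
rng = np.random.default_rng(0)

W1 = rng.normal(size=(4, 1))   # input -> 4 hidden units
W2 = rng.normal(size=(1, 4))   # hidden -> output

def net(x, W1, W2):
    return W2 @ np.maximum(W1 @ x, 0.0)   # one ReLU hidden layer

# Symmetry 1: permute the hidden units (and the matching output weights).
perm = rng.permutation(4)
W1p, W2p = W1[perm], W2[:, perm]

# Symmetry 2: rescale unit i by a_i > 0; ReLU(a*z) = a*ReLU(z) absorbs it.
a = np.array([2.0, 0.5, 3.0, 1.5])
W1s, W2s = W1 * a[:, None], W2 / a[None, :]

x = rng.normal(size=(1, 100))  # 100 random inputs
for W1_alt, W2_alt in [(W1p, W2p), (W1s, W2s)]:
    assert np.allclose(net(x, W1, W2), net(x, W1_alt, W2_alt))
print("distinct parameter vectors, identical function on all inputs")
```

Whole orbits of parameter space collapse onto a single function, which is why counting possibilities in "goal-space" directly, as the naive reading of orthogonality invites, measures the wrong thing.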

Liron's response is that his optimization theory says there exist natural goal-optimizer architectures that can learn a vast variety of goals, which, combined with the presumption that most goals are disastrous for humans, makes him worried about AI safety. IMO his theory is notably worse than Quintin Pope's: it's far more special-case, and it describes only the end results.

https://twitter.com/liron/status/1707950230266909116

My basic optimization theory says

  1. There exist natural goal optimizer architectures (analogous to our experience with the existence of natural Turing-complete computing architectures) such that minor modular modifications to its codebase can cause it to optimize any goal in a very large goal-space.
  2. Optimizing the vast majority of goals in this goal-space would be disastrous to humans.
  3. A system with superhuman optimization power tends to foom to far superhuman level and thus become unstoppable by humans.

AI doom hypothesis: In order to survive, we need a precise combination of building something other than the default natural outcome of a rogue superhuman AI optimizing a human-incompatible objective, but we’re not on track to set up the narrow/precise initial conditions to achieve that.

Quintin Pope points out the flaws in Liron's optimization theory; in particular, its claims are specific outcomes relabeled as laws:

https://twitter.com/QuintinPope5/status/1708575273304899643

None of these are actual "laws/theory of optimization". They are all specific assertions about particular situations, relabeled as laws. They're the kind of thing you're supposed to conclude from careful analysis using the laws as a starting point.

Analogously, there is no law of physics which literally says "nuclear weapons are possible". Rather, there is the standard model of particle physics, which says stuff about the binding energies and interaction dynamics of various elementary particle configurations. From the standard model, one can derive the fact that nuclear weapons must be possible, by analyzing the standard model's implications in the case that a free neutron impacts a plutonium nucleus.

Laws / theories are supposed to be widely applicable descriptions of a domain's general dynamics, able to make falsifiable predictions across many different contexts for the domain in question. This is why laws / theories have their special epistemic status. Because they're so applicable to so many contexts, and make specific predictions for those contexts, each of those contexts acts as experimental validation for the laws / theories.

In contrast, a statement like "A system with superhuman optimization power tends to foom to far superhuman level and thus become unstoppable by humans." is specific to a single (not observed) context, and so it cannot possibly have the epistemic status of an actual law / theory, not unless it's very clearly implied by an actual law / theory.

Of course, none of my proposed laws have the epistemic backing of the laws of physics. The science of deep learning isn't nearly advanced enough for that. But they do have this "character" of physical laws, where they're applicable to a wide variety of contexts (and can thus be falsified / validated in a wide variety of contexts). Then, I argue from the proposed laws to the various alignment-relevant conclusions I think they support. I don't list out the conclusions that I think support optimism, then call them laws / theory.

I previously objected to your alluding to thermodynamic laws in regards to the epistemic status of your assertions (https://x.com/QuintinPope5/status/1703569557053644819?s=20). I did so because I was quite confident that there do not exist any such laws of optimization. I am still confident in that position.

Overall, I see pretty large issues with Liron's side of the conversation: he moves between two different claims, one of which is defensible but has ~no implications, while the one that has implications needs much, much more work to defend.

Also, Liron is massively overconfident in his theories here, which is also bad news.

Some additions to the AI alignment optimism case are presented below, to show that the case for AI safety optimism is reasonably robust.

For more on why RLHF is actually extraordinarily general for AI alignment, Quintin Pope's comment on LW basically explains it better than I can:

https://www.lesswrong.com/posts/wAczufCpMdaamF9fy/?commentId=Lj3gJmjMMSS24bbMm [LW · GW]

For the more general AI alignment optimism case, Nora Belrose has a section of her post dedicated to the point that AIs are white boxes, not black boxes. It definitely overestimates the easiness (I do not believe we can analyze or manipulate today's ANNs at essentially zero cost, and Steven Byrnes, in the comments, is right to point out a worrisome motte-and-bailey in Nora Belrose's argument), but even then it's drastically easier to analyze ANNs than brains today.

https://forum.effectivealtruism.org/posts/JYEAL8g7ArqGoTaX6/ai-pause-will-likely-backfire#Alignment_optimism__AIs_are_white_boxes [EA · GW]

For an untested but highly promising solution to the AI shutdown problem, these 3 posts are necessary reading: Elliott Thornley found a way to usefully weaken expected utility maximization so that it retains most of its desirable properties without making the AI unshutdownable or prone to other bad behaviors. This might be implemented using John Wentworth's idea of subagents (a toy sketch follows the links below).

Sami Petersen's post on Invulnerable Incomplete Preferences: https://www.lesswrong.com/posts/sHGxvJrBag7nhTQvb/invulnerable-incomplete-preferences-a-formal-statement-1 [LW · GW]

Elliott Thornley's submission for the AI contest: https://s3.amazonaws.com/pf-user-files-01/u-242443/uploads/2023-05-02/m343uwh/The Shutdown Problem- Two Theorems%2C Incomplete Preferences as a Solution.pdf

John Wentworth's post on subagents, for how this might work in practice:

https://www.lesswrong.com/posts/3xF66BNSC5caZuKyC/why-subagents [LW · GW]
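To gesture at the core idea, here is a toy sketch (mine, and far cruder than Thornley's actual proposal): model incomplete preferences as a committee of subagents that overrides the status quo only unanimously. Such an agent never pays to block a shutdown its subagents disagree about.

```python
from dataclasses import dataclass

# Toy sketch of "incomplete preferences via subagents": the agent prefers
# A over B only if *every* subagent's utility agrees; otherwise it keeps
# the status quo. All names and numbers here are hypothetical.

@dataclass
class Option:
    name: str
    utilities: tuple  # one utility value per subagent

def strictly_preferred(a: Option, b: Option) -> bool:
    # Unanimity: A > B only when all subagents rank A above B.
    return all(ua > ub for ua, ub in zip(a.utilities, b.utilities))

def choose(status_quo: Option, alternatives: list) -> Option:
    # Move away from the status quo only under unanimous improvement.
    for alt in alternatives:
        if strictly_preferred(alt, status_quo):
            return alt
    return status_quo

allow_shutdown = Option("allow shutdown", (0.0, 1.0))  # subagent 2 wants it
block_shutdown = Option("block shutdown", (1.0, 0.0))  # subagent 1 wants it
print(choose(allow_shutdown, [block_shutdown]).name)   # -> "allow shutdown"
```

The preference gap between the two options behaves like indifference that no sweetening can exploit, which is the property the Petersen and Thornley posts establish formally.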

Damn, this was a long comment for me to make, since I needed it to be a reference for the future when people ask me about my optimism on AI safety, and the problems with AI epistemics, and I want it to be both self-contained and dense.

Replies from: Liron, orthonormal
comment by Liron · 2023-10-06T03:36:38.980Z · LW(p) · GW(p)

Appreciate the detailed analysis.

I don’t think this was a good debate, but I felt I was in a position where I would have had to invest a lot of time to do better by the other side’s standards.

Quintin and I have agreed to do a X Space debate, and I’m optimistic that format can be more productive. While I don’t necessarily expect to update my view much, I am interested to at least understand what the crux is, which I’m not super clear on atm.

Here’s a meta-level opinion:

I don’t think it was the best choice of Quintin to keep writing replies that were disproportionally long compared to mine.

There’s such a thing as zooming claims and arguments out. When I write short tweets, that’s what I’m doing. If he wants to zoom in on something, I think it would be a better conversation if he made an effort to do it less at a time, or do it for fewer parts at a time, for a more productive back & forth.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2023-10-06T13:51:01.552Z · LW(p) · GW(p)

I don’t think it was the best choice of Quintin to keep writing replies that were disproportionally long compared to mine.

I understand why you feel this way, but I do think that it was sort of necessary to respond like this, primarily because I see a worrisome asymmetry between the arguments for AI doom and AI being safe by default.

AI doom arguments are more intuitive than AI safety by default arguments, making AI doom arguments requires less technical knowledge than AI safety by default arguments, and critically the AI doom arguments are basically entirely wrong, and the AI safety by default arguments are mostly correct.

Thus, Quintin Pope has to respond at length, since refuting bullshit or wrong theories takes far longer than making intuitive but wrong arguments for AI doom.

Quintin and I have agreed to do a X Space debate, and I’m optimistic that format can be more productive.

Alright, that might work. I'm interested to see whether you will write up a transcript, or whether I will be able to join the X space debate.

Replies from: nikolas-kuhn
comment by Amalthea (nikolas-kuhn) · 2023-10-06T14:07:32.565Z · LW(p) · GW(p)

"AI doom arguments are more intuitive than AI safety by default arguments, making AI doom arguments requires less technical knowledge than AI safety by default arguments, and critically the AI doom arguments are basically entirely wrong, and the AI safety by default arguments are mostly correct."

I really don't like that you make repeated assertions like this. Simply claiming that your side is right doesn't add anything to the discussion and easily becomes obnoxious.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2023-10-06T14:34:01.056Z · LW(p) · GW(p)

I really don't like that you make repeated assertions like this. Simply claiming that your side is right doesn't add anything to the discussion and easily becomes obnoxious.

Yes, I was trying to be short rather than write the long comment or post justifying this claim, because I had to write at least two long comments on this issue.

But thank you for the pointer here. I definitely agree that it was wrong of me to just claim I was right without trying to show why, and without explaining things.

Now I'm thinking that text-based interaction is actually bad here, since we can't communicate a lot of information.

comment by orthonormal · 2023-10-02T17:20:35.529Z · LW(p) · GW(p)

Seems fair to tag @Liron [LW · GW] here.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2023-10-02T17:28:15.908Z · LW(p) · GW(p)

How did you manage to tag Liron, exactly? But yes, I will be waiting for Liron to respond, as well as for other interested parties.

Replies from: orthonormal
comment by orthonormal · 2023-10-02T17:37:04.341Z · LW(p) · GW(p)

Simply type the at-symbol to tag people. I don't know when LW added this, but I'm glad we have it.

comment by Joachim Bartosik (joachim-bartosik) · 2023-09-30T14:39:30.183Z · LW(p) · GW(p)

Here [LW(p) · GW(p)] is an example of someone saying "we" should say that AGI is near regardless of whether it's near or not. I post it only because it's something I saw recently and so could find easily, but my feeling is that I'm seeing more comments like that than I used to (though I recall Eliezer complaining about people proposing conspiracies on public forums, so I don't know if that's new).

Replies from: gworley, jimrandomh
comment by Gordon Seidoh Worley (gworley) · 2023-10-01T02:16:39.835Z · LW(p) · GW(p)

I think this misunderstands my position. I wouldn't advocate for saying "AGI is near" if it wasn't possibly near; only that, if you have to communicate something short with no nuance, given there's a non-trivial possibility that AGI is near, it's better to communicate "AGI is near" if that's all you can communicate.

Replies from: ThirdSequence
comment by ThirdSequence · 2023-10-04T09:08:04.339Z · LW(p) · GW(p)

You are turning this into a hypothetical scenario where your only communication options are "AGI is near" and "AGI is not near". 

"We don't know if AGI is near, but it could be." would seem short enough to me. 

Replies from: pktechgirl, gworley
comment by Elizabeth (pktechgirl) · 2023-10-04T20:20:38.884Z · LW(p) · GW(p)

For the general public considered as a unit, I think "We don't know if AGI is near, but it could be." is much too subtle. I don't know how to handle that, but I think the right way to talk about it is "this is an environment that does not support enough nuance for this true statement to be heard, how do we handle that?", not "pretend it can handle more than it can."[1]

I think this is one reason doing mass advocacy is costly, and should not be done lightly. There are a lot of advantages to staying in arenas that don't render a wide swath of true things unsayable. But I don't think it's correct to totally rule out participating in those arenas either. 

 

  1. ^

    And yes, I do think the same holds for vegan advocacy in the larger world. I think simplifying to "veganism is totally healthy* (*if you do it right)" is fine-enough for pamphlets and slogans. As long as it's followed up with more nuanced information later, and not used to suppress equally true information. 

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2023-10-06T16:45:10.887Z · LW(p) · GW(p)

I think this is one reason doing mass advocacy is costly, and should not be done lightly.

And it's why I deeply disagree with Eliezer's choice to break open the Overton window, and with FLI's choice to argue for a pause to open the Overton window: I believe the nuances of the situation, especially ML nuance, are critically important for questions like "Will AI be safe?", "Will AI generalize?", and so on.

comment by Gordon Seidoh Worley (gworley) · 2023-10-04T17:16:30.865Z · LW(p) · GW(p)

See my original answer for why I think picking such short messages is necessary. Tl;dr: most people aren't paying attention and round off details, so you have to communicate with the shortest possible message that can't be rounded off further in some contexts. Your proposed message will be rounded off to "we don't know", which is a message that seems unlikely to me to inspire the correct actions at this point in time.

comment by jimrandomh · 2023-09-30T18:36:23.443Z · LW(p) · GW(p)

I draw a slightly different conclusion from that example [LW(p) · GW(p)]: that vegan advocates in particular are a threat to truth-seeking in AI alignment. Because I recognize the name, and that's a vegan who's said some extremely facepalm-worthy things about nutrition to me.

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2023-10-01T02:09:26.606Z · LW(p) · GW(p)

Hmm, odd you draw that conclusion, because I'm not a vegan, though I have tried and continue to try to eat a plant-based diet, though due to a combination of dietary issues I am often unable to. I think you'd also be hard pressed to say I'm a vegan advocate, other than that I generally think animals have moral worth and killing them is bad all else equal, but I'm not really trying to get anyone else to eat only plants.

Also, if you're going to make a claim about me, please @ me so I can respond. I only saw this by luck, and I consider it pretty rude to make claims about someone on this site when you can easily tag them and then don't.

All those specific points aside, this also seems like overgeneralization, since I'm not advocating in that comment not to seek truth, only taking a particular stance on how to communicate to people who are not so much interested in truth as in what action to take based on the recommendation of experts. I don't really like that the way humans have organized themselves requires communicating low-resolution things that obscure important details of the truth, but I do recognize that's how our systems work and try to make the best of it.

[Edited to add:] Actually, it dawns on me now this comment makes even less sense because you're claiming I said "extremely facepalm-worthy things about nutrition" yet I can't recall ever having done so. We've only ever spoken in person a handful of times and mostly to make smalltalk. So I have no idea what you're trying to do here.

Honestly, the more I think about it, the more your comment reads like libel to me: it's making claims that defame me in various ways yet is totally unsubstantiated. Perhaps you've mixed me up with someone else? Either way, this comment is, in my opinion, in bad taste in that it makes claims against me, gives no evidence for them, and then tries to draw some conclusions based on seemingly made up evidence.

Replies from: jimrandomh
comment by jimrandomh · 2023-10-01T19:41:44.740Z · LW(p) · GW(p)

The thing I was referring to was an exchange on Facebook, particularly the comment where you wrote:

also i felt like there was lots of protein, but maybe folks just didn't realize it? rice and most grains that are not maize have a lot (though less densely packed) and there was a lot of quinoa and nut products too

That exchange was salient to me because, in the process of replying to Elizabeth, I had just searched my FB posting history and reread what veganism-related discussions I'd had, including that one. But I agree, in retrospect, that calling you a "vegan advocate" was incorrect. I extrapolated too far based on remembering you to have been vegan at that time and the stance you took in that conversation. The distinction matters both from the perspective of not generalizing to vegan advocates in general, and because the advocate role carries higher expectations about nutrition-knowledge than participating casually in a Facebook conversation does.

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2023-10-01T23:10:37.830Z · LW(p) · GW(p)

I've struck out some of my comment above that, based on your reply, no longer makes sense.

We may still have other disagreements about other things, but your comment seems to break your claim that I'm a threat to truth seeking, so I'm happy to leave it there.

comment by trevor (TrevorWiesinger) · 2023-09-30T15:48:16.843Z · LW(p) · GW(p)

I think that rallying the AI safety movement behind consistent strategies is like herding cats in lots of ways that EA vegan advocacy largely isn't, because AI safety is largely a meritocracy whereas EA vegan advocacy is much more open source. EA vegan advocacy has no geopolitics and no infohazards; their most galaxy-brained plot is meat substitute technology and their greatest adversary is food industry PR.

Furthermore, the world is changing around AI safety, especially during 2020 and 2022, and possibly in a rather [LW · GW] hostile [LW · GW] way. This makes open-source participation even less feasible than it was during the 2010s.

comment by Garrett Baker (D0TheMath) · 2023-09-29T05:06:01.617Z · LW(p) · GW(p)

@Elizabeth [LW · GW] This thread [EA(p) · GW(p)] seems good as a concrete example of this.

Replies from: orthonormal, pktechgirl
comment by orthonormal · 2023-09-29T06:31:23.184Z · LW(p) · GW(p)

The thread is closer to this post's Counter-Examples than its examples. 

Richard calls out the protest for making arguments that diverge from the protesters' actual beliefs about what's worth protesting, and is highly upvoted for doing so. In the ensuing discussion, Steven changes Holly and Ben's minds on whether it's right to use the "not really open-source" accusation against FB (because we think true open-source would be even worse).

Tyler's comment that [for public persuasion, messages get rounded to "yay X" or "boo X" anyway, so nuance is less important] deserves a rebuttal, but I note that it's already got 8 disagrees vs 4 agrees, so I don't think that viewpoint is dominant.

Replies from: D0TheMath, tylerjohnston
comment by Garrett Baker (D0TheMath) · 2023-09-29T06:34:30.592Z · LW(p) · GW(p)

Good points! My mind has been changed.

comment by tylerjohnston · 2023-09-29T14:06:42.216Z · LW(p) · GW(p)

Sidebar: For what it's worth, I don't argue in my comment that "it's not worth worrying" about nuance. I argue that nuance isn't more important for public advocacy than, for example, in alignment research or policy negotiations — and that the opposite might be true.

Replies from: orthonormal
comment by orthonormal · 2023-09-30T00:49:52.852Z · LW(p) · GW(p)

Fair enough, I've changed my wording.

comment by Elizabeth (pktechgirl) · 2023-09-29T06:16:34.172Z · LW(p) · GW(p)

could you be more specific?  

comment by Noosphere89 (sharmake-farah) · 2023-10-02T16:36:09.861Z · LW(p) · GW(p)

For a better example than Garrett Baker's, see my comment; it's the cleanest one. It's long, but I explain a lot there:

https://www.lesswrong.com/posts/aW288uWABwTruBmgF/?commentId=ZJde3sTdzuEaoaFfi [LW · GW]

comment by Vaniver · 2023-10-01T20:12:49.188Z · LW(p) · GW(p)

Regression to the mean?

Replies from: adele-lopez-1
comment by Adele Lopez (adele-lopez-1) · 2023-10-02T07:29:06.506Z · LW(p) · GW(p)

I would guess that it's because it's something a lot more people care viscerally about now (as opposed to the more theoretical care a few years ago).

comment by Shankar Sivarajan (shankar-sivarajan) · 2023-10-01T02:41:14.754Z · LW(p) · GW(p)

I agree, and would like to proffer as a concrete example the deliberate conflation of two kinds of AI safety: one against AI saying the word "nigger" or generating nonconsensual nudity/pornography, and the more traditional one against AI turning everyone into paperclips. The idea is to sow enough confusion that they can then use the popularity of the former as a means of propping up the latter. I consider this behavior antithetical to truth-seeking.

Replies from: Raemon
comment by Raemon · 2023-10-02T21:23:00.548Z · LW(p) · GW(p)

Hey, uh, I don't wanna overly police people's language, but this is the second time in a week you've used the n-word specifically as your example here, and it seems like, at best, an unnecessarily distracting example.

Replies from: shankar-sivarajan
comment by Shankar Sivarajan (shankar-sivarajan) · 2023-10-02T22:04:52.859Z · LW(p) · GW(p)

No, I maintain this is THE central example of the goals of this new (and overwhelmingly dominant) AI safety, enforcing the single taboo in present-day America most akin to blasphemy, and precisely as victimless. If they were the Islamic analogue, I'd use the example of the caricatures of Mohammed every time, "distracting" as it may be to those of the faith: using any other is disingenuous, and contrary to my deeply-held value of speaking the truth as best I see it. 

Replies from: Raemon
comment by Raemon · 2023-10-02T23:12:04.091Z · LW(p) · GW(p)

LessWrong has a pretty established norm of not using unnecessarily political examples. (See Politics is the Mind-Killer [LW · GW]). I don't object to you writing up a top level post arguing for the point you're trying to make here. But I do object to you injecting your pet topic into various other comment threads in particularly distracting ways (especially ones that are only tangentially about AI, let alone about your particular concern about AI and culture/politics/etc). 

When you did it last week, it didn't seem like something it felt right for the mods to intervene on heavy-handedly (some of us downvoted as individuals). But, it sounds like you're going out of your way to use an inflammatory example repeatedly. I am now concretely asking you as a moderator to not do that. 

I'm locking this thread since it's pretty offtopic. You can go discuss it more at the meta-level over in the Open Thread, if you want to argue about the overall LessWrong moderation policy.

comment by bideup · 2023-09-29T10:48:25.192Z · LW(p) · GW(p)

Just wanted to say that I am a vegan and I’ve appreciated this series of posts.

I think the epistemic environment of my IRL circles has always been pretty good around veganism, and personally I recoil a bit from discussion of specific people's or groups' epistemic virtues or lack thereof (not sure if I think it's unproductive or just find it aversive), so this particular post is of less interest to me personally. But I think your object-level discussion of the trade-offs of veganism has been consistently fantastic and I wanted to thank you for the contribution!

comment by Yoav Ravid · 2023-09-29T13:46:16.408Z · LW(p) · GW(p)

“the damage to people who implement veganism badly is less important to me than the damage to animals caused by eating them”

I agree with Soto on this, but think that suppressing truth-seeking causes far more damage than just making people implement veganism worse, including, importantly, making some people not go vegan at all.

If you believe that marginal health benefits don't justify killing animals, I think that's a far more effective line of argument. And it remains truthful/honest.

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2023-09-29T22:32:19.769Z · LW(p) · GW(p)

One of the best arguments for veganism I ever heard was "I don't care if it makes me healthier, I wouldn't eat humans for my health so I won't eat animals". I respect that viewpoint a lot.

But in practice, I don't see vegans with serious dietary-caused health issues holding to this. The only person I know personally who made that argument has since entered a moral trade so they can drink milk, because their health was suffering.[1] This was someone who had been vegan for years, was (and is) deeply committed to the cause, and I'm sure tried everything they could to solve the problem via plants. Ultimately they needed animal products.

Which makes me think most of the people who say "better to starve than to eat animal products" are suffering from a failure of imagination on how bad malnutrition can get for some people.[2] And that I don't respect, especially if they are deliberately suppressing evidence to the contrary. 

  1. ^

    Which I tentatively think is great, as a way to help themselves without increasing animal suffering. But it's not consistent with "I would rather die than eat animal products". 

  2. ^

    Not all of them. I'm sure there's at least one person slowly starving to death rather than violate their principles, and I respect their resolve. But not many.

Replies from: ann-brown, Yoav Ravid
comment by Ann (ann-brown) · 2023-09-29T23:05:04.183Z · LW(p) · GW(p)

In practice, to my understanding, even scrupulous humans will at times eat consenting humans rather than starve to death. "For your health" is a broad range.

comment by Yoav Ravid · 2023-09-30T05:25:38.601Z · LW(p) · GW(p)

I agree. That's why I said marginal health benefits, the sort that people argued many people are missing out on even on their usual diets because those diets are suboptimal. So I'm definitely not saying "better to starve than to eat animal products"[1]; in that case, eat animal products, and if you can eat ones that don't come from factory farming, even better. In several years we'll start having cultured meat and this problem will go away.

  1. ^

    Though I can relate to these people, I did think that way when I became vegan at 11yo.

comment by Stephen Bennett (GWS) · 2023-09-29T01:35:16.543Z · LW(p) · GW(p)

I encourage you to respond to any comment of mine that you believe...

  • ...actively suppresses inconvenient questions with "fuck you, the truth is important."
  • ...ignores the arguments you made with "bro read the article."
  • ...leaves you in a fuzzy daze of maybe-disagreement and general malaise with "?????"
  • ...is hostile without indicating a concrete disagreement of substance with "that's a lot of hot air"
  • ...has citations that are of even possibly dubious quality with "legit?". And if you dig through one of my citations and think either I am misleading by including it or it itself is misleading, demonstrate this fact, and then I don't respond, you can call me a coward.
  • ...belittles your concerns (on facebook or otherwise) with "don't be a jerk."
  • ...professes a belief that is wholly incompatible with what I believe in private with "you're lying."

Since I expect readers of the comment chain to not have known that I gave you permission, I'll do the work of linking to this post and assuring them that I quite literally asked for it. You're also welcome to take liberties with the exact phrasing. For example, if you wanted to express a sharper sentiment in response to your general malaise, you might write "???!!!!??!?!?!?", which I would also encourage.

I doubt that this would work as a normative model for discourse since it would quickly devolve into namecalling and increase the heat of the arguments without actually shedding much light. I also think that if you were never beholden to the typical social rules that govern the EA forum and lesswrong, that you would lose some of the qualities that I most enjoy in your writing. But, if you see my name at the top of a comment, feel free to indulge yourself.

I don't think I've told you before, but I like your writing. I appreciate the labor you put into your work to make it epistemically legible, which makes it obvious to me that you are seeking the truth. You engage with your commenters with kindness and curiosity, even when they are detracting from your work. Thank you.

Replies from: GWS
comment by Stephen Bennett (GWS) · 2023-09-29T18:30:56.633Z · LW(p) · GW(p)

Since I'm getting a fair number of confused reactions, I'll add some probably-needed context:

Some of Elizabeth's frustration with the EA Vegan discourse seems to stem from general commenting norms of lesswrong (and, relatedly, the EA forums). Specifically, the frustrations remind me of those of Duncan Sabien, who left lesswrong in part because he believed there was an asymmetry between commenters and posters wherein the commenters were allowed to take pot-shots at the main post, misrepresent the main post, and put forth claims they don't really endorse that would take hours to deconstruct.

In the best case, this resulted in a discussion that exposed and resolved a real disagreement. In the worst case, this resulted in an asymmetric amount of time between main poster and commenter resolving a non-disagreement that would never have happened if the commenter put in the time to carefully read the parent post or express themselves clearly. Elizabeth's post here touches on many similar themes, and although she bounds the scope of the post significantly (that she is only talking about EA Vegan advocacy and a general trend amongst commentators writ large instead of a problem of individuals), I suspect that she is at least at times annoyed/frustrated/reluctant to put forth the work involved in carefully disentangling confusing disagreements with commenters.

I can't solve the big problem. I was hoping to give Elizabeth permission to engage with me in a way that feels less like work, and more like a casual conversation. The sort of permission I was giving is explicitly what Duncan was asking for (e.g. context-less links to the sequences), and I imagine I would want it at least some of the time as a poster.

I realize that Elizabeth and Duncan are different people, and want different things, so sorry if I gave you something you didn't want, Elizabeth.

Regardless, thank you for taking me up on my offer of responding with an emote expressing confusion rather than trying to resolve whatever confusion you had with a significant number of words, per https://www.lesswrong.com/posts/aW288uWABwTruBmgF/?commentId=hgx5vjXAYjYBGf32J [LW · GW]. (misunderstood UI).

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2023-09-29T19:13:07.342Z · LW(p) · GW(p)

TBC I voted against confusion because I found your comment easy to understand. But seems like lots of people didn't, and I'm glad they had an easy way to express that. I have some hope for emojis doing exactly what you describe here, cheaply and without much inflammation. Elsewhere in the comments I've been able to mark specific claims as locally invalid, or ask for examples, or express appreciation, without it being a Whole Thing, and that's been great. 

Replies from: GWS
comment by Stephen Bennett (GWS) · 2023-09-29T19:58:02.229Z · LW(p) · GW(p)

Oh whoops, I misunderstood the UI. I saw your name under the confusion tag and thought it was a positive vote. I didn't realize it listed emote-downvotes in red.

Replies from: Yoav Ravid, neel-nanda-1
comment by Yoav Ravid · 2023-09-29T20:53:42.577Z · LW(p) · GW(p)

For the record, I also misunderstood the UI in the same way. Perhaps it should be made clearer somehow.

comment by Neel Nanda (neel-nanda-1) · 2023-09-30T15:15:09.776Z · LW(p) · GW(p)

Oh huh, I also misunderstood that, I thought red meant OP or something

Replies from: shankar-sivarajan
comment by Shankar Sivarajan (shankar-sivarajan) · 2023-10-01T02:09:38.092Z · LW(p) · GW(p)

Yes, if the emote-downvotes are red, the emote-upvotes ought to be green.

comment by Matthew Barnett (matthew-barnett) · 2023-10-02T18:54:30.692Z · LW(p) · GW(p)

In fact the best data I found on this was from Faunalytics, which found that ~20% of veg*ns drop out due to health reasons. This suggests to me a high chance his math is wrong and will lead him to do harm by his own standards.

I don't trust self-report data on this question. Even if 100% of vegans dropped out due to the inconvenience of the diet, I'd still expect a substantial fraction of those people to misreport their motive for doing so. People frequently exaggerate how much of their behavior can be attributed to favorable motives, and dropping out of veganism because you ran into health issues sounds a lot better than dropping out because you got lazy and didn't want to put up with the hassle. I'm not even claiming that people are lying or misremembering. But I think people can and often do convince themselves of things that aren't true.

More generally, I'm highly skeptical that you can get reliable information about the causal impacts of diets by asking people about their self-reported health after trying the diets. There's just way too many issues of bias, selective memories, and motivated reasoning involved. Unless we're talking about something concrete like severe malnutrition, most people are just not good at this type of causal attribution. There are plenty of people who self-report that healing crystals treat their ailments. Pretty much the main reason why we need scientific studies and careful measurements in the first place is because self-report data and personal speculations are not reliable on complex issues like this one.
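As a toy illustration of how strongly that kind of misreporting can distort the headline figure (all numbers below are made up):

```python
# Toy sketch with made-up numbers: even a modest misreporting rate among
# convenience-quitters substantially inflates the "quit for health" figure.
n = 10_000                # ex-veg*ns surveyed
true_health_rate = 0.05   # actually quit over health problems
misreport_rate = 0.20     # convenience-quitters who *report* health reasons

health_quitters = int(n * true_health_rate)
convenience_quitters = n - health_quitters
reported_health = health_quitters + int(convenience_quitters * misreport_rate)

print(f"true health-dropout rate:    {health_quitters / n:.0%}")   # 5%
print(f"survey-reported health rate: {reported_health / n:.0%}")   # 24%
```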

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2023-10-02T22:02:28.510Z · LW(p) · GW(p)

I agree with all of this. I think this data has some advantages not seen elsewhere (mostly catching ex-vegans), but I absolutely expect people to overreport sympathetic reasons for leaving veganism.

OTOH, that 20% was for veganism and vegetarianism combined, and I expect the health-related dropout rate to be higher among vegans than vegetarians. 

On the third hand... well, the countervailing factors could go on for quite a while. That post you link to contains a guesstimate model that lets you adjust the attrition rate based on various factors. This is more an intuition pump than anything else; too many of the factors are unknown. But if you have a maximum acceptable attrition rate you can play around with what assumptions are necessary to get below that attrition rate, and share those.

Replies from: matthew-barnett
comment by Matthew Barnett (matthew-barnett) · 2023-10-02T23:54:34.925Z · LW(p) · GW(p)

I'm not sure how to square the fact that you agreed with "all of [my comment]" with this post.

In a basic sense, I implied that the self-report data appears consistent with no health difference between veg*n diets and non-veg*n diets. In practice, I expect there to be some differences because many veg*ns don't take adequate supplementation, and also there are probably some subtle differences between the diets that are hard to detect. But, assuming the standard recommended supplementation, it seems plausible to me that health isn't a non-trivial tradeoff for the vast majority of people when deciding to adopt a veg*n diet, and the self-report data doesn't move me much on this question at all.

This comment is probably better suited as a response to your post on the health downsides of a vegan diet, rather than this post. However, in this post you critique multiple people for seeking to dismiss or suppress discussion about the health downsides of a vegan diet, or for attempting to reframe the discussion. When I read the comments you cite in this post under a background assumption that these health downsides are tiny or non-existent, most of the comments don't seem very unreasonable to me anymore. If they're right that the health downsides are small, then I don't think it's fair to allege that they weren't being truth-seeking, in most cases you cited. It sounds more like they simply think your claims are frivolous and misleading.

If someone was writing a post about the safety or health downsides of nuclear energy production, I would probably similarly argue that focusing too much on this element of the discussion can be distracting and irrelevant, since nuclear energy is not significantly more unsafe or unhealthy than other forms of energy production if managed appropriately. I don't think that means I'm a denialist about the tradeoffs. An open discussion of tradeoffs is important, but it's equally important to emphasize an honest appraisal of whether the tradeoffs imply anything significant about what we should actually do.

Replies from: GWS
comment by Stephen Bennett (GWS) · 2023-10-03T01:15:43.916Z · LW(p) · GW(p)

I took your original comment to be saying "self-report is of limited value", so I'm surprised that you're confused by Elizabeth's response. In your second comment, you seem to be treating your initial comment to have said something closer to "self-report is so low value that it should not materially alter your beliefs." Those seem like very different statements to me.

Replies from: matthew-barnett
comment by Matthew Barnett (matthew-barnett) · 2023-10-03T01:33:35.316Z · LW(p) · GW(p)

In the original comment I said "I'm highly skeptical that you can get reliable information about the causal impacts of diets by asking people about their self-reported health after trying the diets". It's subjective whether that means I'm saying self-report data has "limited value" vs. "very little value" but I assumed Elizabeth had interpreted me as saying the latter, and that's what I meant.

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2023-10-03T01:56:35.512Z · LW(p) · GW(p)

If this had come from a meat industry mouthpiece or even a provably neutral party, I would have tossed it. It would have been too easy to create those results. But it's from Faunalytics, an org focused on reducing animal suffering through data analytics. Additionally, they seemed to accept the number as is. In fact my number is lower than the one they give, because I only include people who reported remission after resuming meat consumption, where Faunalytics reports "listed health issues as a reason they quit". Given that, using their number seemed better than everyone guessing.

I hedged a little less about this after wilkox, a doctor who was not at all happy with the Change My Mind post, said he thought it was if anything an underestimate [LW(p) · GW(p)].

Edit: seems quite possible he meant "this is small given the self-reporting bias", not "this is small given my estimate of the problem"

I think the interesting question here is "what % attrition to health issues do you think is okay?" If it's 19%, I think it's reasonable for you to decide this isn't worth your time[1]. If it's 2%, then you'd need to show the various factors were inflating estimates by a full order of magnitude.

  1. ^

    Although even then, I believe veganism will have many more issues than vegetarianism, and Faunalytics's sample is overwhelmingly vegetarian.

Replies from: Slapstick, matthew-barnett
comment by Slapstick · 2023-10-03T21:42:19.856Z · LW(p) · GW(p)

Additionally, they seemed to accept the number as is.

I don't think that's fair to say given this disclaimer in the Faunalytics study:

Note: Some caution is needed in considering these results. It is possible that former vegetarians/vegans may have exaggerated their difficulties given that they provide a justification for their current behavior.

.

where Faunalytics reports "listed health issues as a reason they quit".

This isn't a quote from the faunalytics data, nor is it an accurate description of the data they gathered.

The survey asked people who are no longer veg*n if they experienced certain health issues while they were veg*n. Not whether they attributed those health issues to their diet, or whether they quit because of those health issues.

Someone who experienced depression/anxiety while they were vegan for example, who then quit being vegan because they broke up with their vegan partner, would be included in the survey data you're talking about.

It's possible I'm confused or missing something.

I hedged a little less about this after wilkox, a doctor who was not at all happy with the Change My Mind post, said he thought it was if anything an underestimate.

I'm much less confident about my issue with this part because I'm not totally sure what they meant, but I don't interpret their comment as saying that, in their professional opinion, they think the number of people experiencing health issues from veg*nism is higher.

I interpret their surprise at the numbers being due to the fact that it's a self reported survey. Given that people can say whatever they want, and that it's surveying ex veg*ns, they're surprised more people didn't use health as a rationalization (is my impression).

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2023-10-03T22:37:03.092Z · LW(p) · GW(p)

Given that people can say whatever they want, and that it's surveying ex veg*ns, they're surprised more people didn't use health as a rationalization

That seems reasonable.

Your quote from Faunalytics also seems reasonable, and a counter to my claim. I remembered another line that implied they accepted the number but thought it didn't matter because it was small. It seems plausible they also were applying heavy discounting for self-reporting bias and expressing surprise about that.

Replies from: Slapstick
comment by Slapstick · 2023-10-04T02:54:15.113Z · LW(p) · GW(p)

I appreciate the response

Though I'm mostly concerned that you seem to be falsely quoting the Faunalytics study:

where Faunalytics reports "listed health issues as a reason they quit"

This isn't in the study and it's not something they surveyed. They surveyed something meaningfully different, as I outlined in my comment.

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2023-10-04T19:33:35.720Z · LW(p) · GW(p)

you're right, my summary in this post was wrong. Thank you for catching that and persisting in pointing it out when I missed it the first time. I'm fixing it now.

I agree with you that self-reports are inherently noisy, and I wish they'd included things like "what percentage of people develop an issue on that list after leaving veg*nism?", "what percentage of veg*ns recover from said issues without adding in animal products?", and "how prevalent are these issues in veg*ns, relative to omnivores?". However, I think self-reporting on the presence of specific issues is a stronger metric than self-reporting on something like "did you leave veganism for medical reasons?". 

Replies from: Slapstick
comment by Slapstick · 2023-10-04T20:45:38.402Z · LW(p) · GW(p)

Thanks I appreciate this! (What follows doesn't include any further critical feedback about what you wrote)

One thing I also thought was missing in the survey is something that would touch on a general sense of loss of energy.

It's my impression that many people attempting veganism (perhaps more specifically a whole foods plant based diet, but also veg*nism generally) report a generalized loss of energy. Often this is cited as a reason for stopping the diet.

It's also my impression (opinion?) that this is largely due to a difference in the intuitive sense of whether you're getting enough calories, since vegan food is often less calorically dense. (You could eat 2 lb of mushrooms, feel super full, and only have eaten 250 calories.)

If someone is used to eating a certain volume of food until they feel full, that same heuristic, left unchanged, may leave them at a major caloric deficit when eating healthy vegan foods.

This can also be a potential risk factor for any sort of deficiency. The food you're eating might have 100% of the nutrients you need, but if you're eating 70% of the food you need, you'll not be getting enough nutrients.

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2023-10-04T21:05:47.045Z · LW(p) · GW(p)

Yeah I think this is a very important question and I'd love to get more data on it.

My very milquetoast guess is that some vegans or aspiring vegans do just need to eat more, and others are correct that they can't just eat more, so it doesn't matter if eating more would help (some of whom could adapt given more time and perhaps a gentler transition, and some of whom can't).[1] A comment on a previous post [EA(p) · GW(p)] talked about all the ways plant-based foods are more filling per unit calorie, and that may be true as far as it goes, but it also means those foods are harder to digest, and not everyone considers that a feature. 

My gut says that the former group (who just need to literally put more of the same food in their mouth) should be small, because surely they would eventually stumble on the "eat more" plan? The big reason not to would be if they're calorie-restricting, but that's independent of veganism. But who knows, people can be really disembodied and there are so many cultural messages telling us to eat less. 

I talked in some earlier posts about digestive privilege [EA(p) · GW(p)], where veganism is just easier for some people than others. I think some portion of those people typical-mind that everyone else faces the exact same challenge level, and this is the cause of a lot of inflammatory discussion. I have a hunch that people who find veganism more challenging than other vegans do, but still less challenging than the population average, or than the particular people they're talking to, are the worst offenders, because they did make some sacrifice. 

  1. ^

    And others have other reasons they can't be vegan, but not focusing on those right now. 

comment by Matthew Barnett (matthew-barnett) · 2023-10-03T02:34:09.228Z · LW(p) · GW(p)

I'm not claiming that the self-reported data is unreliable because it comes from a dubious institutional source. I'm claiming it's unreliable because people are unreliable when they self-report facts about their health, especially when it comes to causal attribution. I'm sure Faunalytics is honest and the survey was designed reasonably well, but it's absurd to take self-reported health data at face value for the reasons I stated: confirmation bias, selective memories, and motivated reasoning etc.

I think the interesting question here is "what % attrition to health issues do you think is okay?" If it's 19%, I think it's reasonable for you to decide this isn't worth your time[1] [LW(p) · GW(p)]. If it's 2%, then you'd need to show the various factors were inflating estimates by a full order of magnitude.

It would be shockingly bad if we treated self-reported health data similarly in other circumstances. For example, I found one article that reported, "available epidemiological data points to a relatively high prevalence of perceived [electromagnetic hypersensitivity] in the general population, reaching 1,6% in Finland and 2,7% in Sweden, 3,5% in Austria, 4,6% in Taiwan, 5% in Switzerland and 10.3% in Germany". Are you comfortable saying that the number of people who have electromagnetic hypersensitivity in Germany is more than one order of magnitude lower than this estimate of 10.3%? Because I am. In this case, I think the whole phenomenon is probably bullshit from top to bottom, and therefore the figure itself is bunk, not merely inflated.

Of course, the fact that self-reported health data is unreliable doesn't actually imply that vegan diets are healthy. And as I noted, we already know that there are many vegans who don't take adequate supplementation. So, I'd be very surprised if the number of people who suffer from ill-health as a result of becoming veg*n is literally zero. But I reject the framework that we should anchor to the self-report data and then try to figure out how much it's inflated. As a datapoint about the health downsides of veg*nism, I think it simply provides very little value. And that seems especially true if we're talking about veg*ns who take the standard recommended supplements regularly.

Replies from: Vaniver, orthonormal
comment by Vaniver · 2023-10-03T02:46:44.022Z · LW(p) · GW(p)

But I reject the framework that we should anchor to the self-report data and then try to figure out how much it's inflated. 

Do you have another framework you prefer, or do you just think that we should not speak about this because we can't know anything about this?

Replies from: Natália Mendonça
comment by Natália (Natália Mendonça) · 2023-10-03T03:21:38.028Z · LW(p) · GW(p)

[deleted]

Replies from: GWS, Vaniver
comment by Stephen Bennett (GWS) · 2023-10-03T04:30:14.202Z · LW(p) · GW(p)

Does such a study exist?

From what I remember of Elizabeth's posts on the subject, her opinion is the literature surrounding this topic is abysmal. To resolve the question of why some veg*ns desist, we would need one that records objective clinical outcomes of health and veg*n/non-veg*n diet compliance. What I recall from Elizabeth's posts was that no study even approaches this bar, and so she used other less reliable metrics.

Replies from: Natália Mendonça
comment by Natália (Natália Mendonça) · 2023-10-03T05:07:12.635Z · LW(p) · GW(p)

[deleted]

Replies from: GWS
comment by Stephen Bennett (GWS) · 2023-10-03T05:47:53.210Z · LW(p) · GW(p)

I'm aware that people have written scientific papers that include the word vegan in the text, including the people at Cochrane. I'm confused why you thought that would be helpful. Does a study that relates health outcomes in vegans with vegan desistance exist, such that we can actually answer the question "At what rate do vegans desist for health reasons?"

Replies from: matthew-barnett
comment by Matthew Barnett (matthew-barnett) · 2023-10-03T06:55:12.447Z · LW(p) · GW(p)

Does a study that relates health outcomes in vegans with vegan desistance exist, such that we can actually answer the question "At what rate do vegans desist for health reasons?"

I don't think that's the central question here. We were mostly talking about whether vegan diets are healthy. I argued that self-reported data is not reliable for answering this question. The self-reported data might provide reliable evidence regarding people's motives for abandoning vegan diets, but it doesn't reliably inform us whether vegan diets are healthy.

Analogously, a survey of healing crystal buyers doesn't reliably tell us whether healing crystals improve health. Even if such a survey is useful for explaining motives, it's clearly less valuable than an RCT when it comes to the important question of whether they actually work.

Replies from: GWS
comment by Stephen Bennett (GWS) · 2023-10-03T16:04:41.307Z · LW(p) · GW(p)

I don't think that's the central question here.

So far as I can tell, the central question Elizabeth has been trying to answer is "Do the people who convert to veganism because they get involved in EA have systemic health problems?" Those health problems might be easily solvable with supplementation (Great!), systemic to a fully vegan diet but fixable with a modest amount of animal product, or something more complicated. She has several self-reported people coming to her saying they tried veganism, had health problems, and stopped. So, "At what rate do vegans desist for health reasons?" seems like an important question to me. It will tell you at least some of what you are missing when surveying current vegans only.

Analogously, a survey of healing crystal buyers doesn't reliably tell us whether healing crystals improve health. Even if such a survey is useful for explaining motives, it's clearly less valuable than an RCT when it comes to the important question of whether they actually work.

I agree that if your prior probability of something being true is near 0, you need very strong evidence to update. Was your prior probability that someone would desist from the vegan diet for health reasons actually that low? If not, why is the crystal healing metaphor analogous?

Replies from: matthew-barnett
comment by Matthew Barnett (matthew-barnett) · 2023-10-03T17:36:18.463Z · LW(p) · GW(p)

So, "At what rate do vegans desist for health reasons?" seems like an important question to me. It will tell you at least some of what you are missing when surveying current vegans only.

As I argued in my original comment, self-reported data is unreliable for answering this question. I simply do not trust people's ability to attribute the causal impact of diets on their health. Separately, I think people frequently misreport their motives. Even if vegan diets caused no health effects, a substantial fraction of people could still report desisting from veganism for health reasons.

I'm honestly not sure why you think that self-reported data is more reliable than proper scientific studies like RCTs when trying to shed light on this question. The RCTs should be better able to tell us the actual health effects of adopting veganism, which is key to understanding how many people would be forced to abandon the diet for health reasons.

Replies from: Vaniver, Raemon
comment by Vaniver · 2023-10-03T18:51:46.046Z · LW(p) · GW(p)

As I argued in my original comment, self-reported data is unreliable for answering this question. I simply do not trust people's ability to attribute the causal impact of diets on their health.

It seems to me like even if this isn't relevant to our beliefs about underlying reality ("teachers think criticism works because they're not correcting for regression to the mean, and so we can't rely on teacher's beliefs") it should be relevant to our beliefs about individual decision-making ("when people desist from vegan diets, 20% of the time it's because they think their health is worse"), which should direct our views on advocacy (if we want fewer people to desist from vegan diets (and possibly then dissuade others from trying them), we should try to make sure they don't think their health is worse on them).

I'm honestly not sure why you think that self-reported data is more reliable than proper scientific studies like RCTs when trying to shed light on this question.

Often there's question-substitution going on. If the proper scientific studies are measuring easily quantifiable things like blood pressure and what people care about more / make decisions based off of is the difficult-to-quantify "how much pep is in my step", then the improper survey may point more directly at the thing that's relevant, even if it does so less precisely.

comment by Raemon · 2023-10-03T19:13:17.520Z · LW(p) · GW(p)

The question I assumed Stephen was asking (and at least my question for myself) here is, "okay, but what do we believe in the meanwhile?". 

Natalia responded with a process that might find some good evidence (but might not, and it looks like at least several hours of skilled search labor to find out). I agree someone should do that labor and find out if better evidence exists. 

I also realize Vaniver did explicitly ask "what alternate framework do you prefer?" and it makes sense that your framework is interested in different questions than mine or Elizabeth's or Stephen's or whatnot. But, for me, the question is "what should vegan activist's best guess be right now", not "what might it turn out to be after doing a bunch more research that may or may not turn up good data."

Replies from: Natália Mendonça, matthew-barnett
comment by Natália (Natália Mendonça) · 2023-10-03T20:42:59.132Z · LW(p) · GW(p)

for me, the question is "what should vegan activist's best guess be right now"

Best guess of what, specifically?

comment by Matthew Barnett (matthew-barnett) · 2023-10-03T19:28:32.814Z · LW(p) · GW(p)

for me, the question is "what should vegan activist's best guess be right now"

This is fair and a completely reasonable question to ask, even if we agree that the self-reported data is unreliable. I agree the self-reported data could be a useful first step towards answering this question if we had almost no other information. I also haven't looked deeply into the RCTs and scientific data and so I don't have a confident view on the health value of vegan diets. 

On the other hand, personally, my understanding is that multiple mainstream scientific institutions say that vegan diets are generally healthy (in the vast majority of cases) as long as you take the proper supplementation regularly. If you put a gun to my head right now and asked me to submit my beliefs about this question, I would defer heavily to (my perception of) the mainstream consensus, rather than the self-reported data. That's not because I think mainstream consensus isn't sometimes wrong or biased, but -- in the spirit of your question -- that's just what I think is most reliable out of all the easily accessible facts that I have available right now, including the self-reported data.

To be clear, I didn't really want to take this line, and talk about how The Experts disagree with Elizabeth, and so the burden of proof is on her rather than me, because that's often a conversation stopper and not helpful for fruitful discussion, especially given my relative ignorance about the empirical data. But if we're interested in what a good "best guess" should be, then yes, I think mainstream scientific institutions are generally reliable on questions they have strong opinions about. That's not my response to everything in this discussion, but it's my response to your specific point about what we should believe in the meantime.

Replies from: Raemon, orthonormal
comment by Raemon · 2023-10-03T21:22:52.689Z · LW(p) · GW(p)

Nod.

In this case I don't think the claims you're ascribing to the experts and to Elizabeth are actually in conflict. You say:

vegan diets are generally healthy (in the vast majority of cases) as long as you take the proper supplementation regularly.

And I think Elizabeth said several times "If you actually are taking the supplementation, it's healthy, but I know many people who aren't taking that supplementation. I think EA vegan activists should put more effort into providing good recommendations to people they convince to go vegan." So I'm not sure why you're thinking of the expert consensus here as saying a different thing.

I feel a bit confused about what the argument is about here. I think the local point of "hey, you should be quite skeptical of self-reports" is a good, important point (thanks for bringing it up. I don't think I agree with you on how much I should discount this data, but I wasn't modeling all the possible failure modes you're pointing out). But it feels from your phrasing like there's something else going on, or the thread is overall getting into a cycle of arguing-for-the-sake-of-arguing, or something. (Maybe it's just that Elizabeth's post is long and it's easy to lose track of the various disclaimers she made? Maybe it's more of a "how much are you supposed to even have an opinion if all your evidence is weak?" frame clash)

Could you (or Natália) say more about what this thread is about from your perspective?

Replies from: matthew-barnett, Natália Mendonça
comment by Matthew Barnett (matthew-barnett) · 2023-10-03T21:52:29.944Z · LW(p) · GW(p)

So I'm not sure why you're thinking of the expert consensus here as saying a different thing.

As far as I can tell, I didn't directly assert that expert consensus disagreed with Elizabeth in this thread. Indeed I mentioned that I "didn't really" want to make claims about that. I only brought up expert consensus to reply to a narrow question that you asked about what we should rely on as a best guess. I didn't mention expert consensus in any of my comments prior to that one, at least in this thread.

My primary point in this thread was to talk about the unreliability of self-reported data and the pitfalls of relying on it. Secondarily, I commented that most of the people she's critiquing in this post don't seem obviously guilty of the allegation in the title. I think it's important to push back against accusations that a bunch of people (or in this case, a whole sub-community) is "not truthseeking" on the basis of weak evidence. And my general reply here is that if indeed vegan diets are generally healthy as long as one takes the standard precautions, then I think it is reasonable for others to complain about someone emphasizing health tradeoffs excessively (which is what I interpreted many of the quoted people in the post as doing).

(At the very least, if you think these people are being unreasonable, I would maintain that the sweeping accusation in the title requires stronger evidence than what was presented. I am putting this in parentheses though to emphasize that this is not my main point.)

Also, I think it's possible that Elizabeth doesn't agree with the scientific consensus, or thinks it's at least slightly wrong. I don't want to put words in her mouth, though. Partly I think the scientific consensus is important to mention at some point because I don't fully know what she believes, and I think that bringing up expert consensus is a good way to ground our discussion and make our premises more transparent. However, if she agrees with the consensus, then I'm still OK saying what I said, because I think almost all of it stands or falls independently of whether she agrees with the consensus.

But it feels from your phrasing like there's something else going on, or the thread is overall getting into a cycle of arguing-for-the-sake-of-arguing, or something.

That's possible too. I do think I might be getting too deep into this over what is mostly a few pointless quibbles about what type of data is reliable and what isn't. You're right to raise the possibility that things are going off the rails in an unintended way.

comment by Natália (Natália Mendonça) · 2023-10-03T21:57:30.963Z · LW(p) · GW(p)

I think the original post [LW · GW] was a bit confusing in what it claimed the Faunalytics study was useful for.

For example, the section 

The ideal study is a longitudinal RCT where diet is randomly assigned, cost (across all dimensions, not just money) is held constant, and participants are studied over multiple years to track cumulative effects. I assume that doesn’t exist, but the closer we can get the better. 

I’ve spent several hours looking for good studies on vegan nutrition, of which the only one that was even passable was the Faunalytics study. 

[...]

A non-exhaustive list of common flaws:

  • Studies rarely control for supplements. [...]

makes it sound like the author is interested in the effects of vegan diets on health, both with and without supplementation, and that they're claiming that the Faunalytics study is the best study we have to answer that question. This is what I and Matthew would strongly disagree with.

This post uses the Faunalytics study in a different (and IMO more reasonable) way, to show which proportion of veg*ans report negative health effects and quit in practice. This is a different question because it can loosely track how much veg*ans follow dietary guidelines. For example, vitamin B12 deficiency should affect close to 100% of vegans who don't supplement and have been vegan for long enough, and, on the other side of the spectrum, it likely affects close to 0% of those who supplement, monitor their B12 levels and take B12 infusions when necessary. 

A "longitudinal RCT where diet is randomly assigned" and that controls for supplements would not be useful for answering the second question, and neither would the RCTs and systematic reviews I brought up. But they would be more useful than the Faunalytcis survey for answering the first question.

comment by orthonormal · 2023-10-03T19:48:43.653Z · LW(p) · GW(p)

Elizabeth has put at least dozens of hours into seeking good RCTs on vegan nutrition, and has come up nearly empty. At this point, if you want to say there is an expert consensus that disagrees with her, you need to find a particular study that you are willing to stand behind, so that we can discuss it. This is why Elizabeth wrote a post on the Adventist study—because that was the best that people were throwing at her.

Replies from: matthew-barnett
comment by Matthew Barnett (matthew-barnett) · 2023-10-03T20:10:01.292Z · LW(p) · GW(p)

Elizabeth has put at least dozens of hours into seeking good RCTs on vegan nutrition, and has come up nearly empty.

Given that I haven't looked deeply into the RCTs and observational studies, I fully admit that I can't completely address this comment and defend the merits of the scientific data. That said, I find it very unlikely that the RCTs and observational studies are so flawed that the self-reported survey data is more reliable. Although I find it quite plausible that the diet studies are flawed, why would the self-reported data be better?

The scientific studies might be bad, but that doesn't mean we should anchor to an even more unreliable source of information.

At this point, if you want to say there is an expert consensus that disagrees with her, you need to find a particular study that you are willing to stand behind, so that we can discuss it.

I was very careful in my comment to say that I was only bringing up expert consensus to respond purely to a narrow point about what the "best guess" of vegan activists should be in the absence of a thorough investigation. Moreover, expert consensus is not generally revealed via studies, and so I don't think I need to bring one up in order to make this point. Expert consensus is usually revealed by statements from mainstream institutions and prominent scientists, and sometimes survey data from scientists. If you're asking me to show expert consensus, then I'd refer you to statements from the American Dietetic Association and the British Dietetic Association as a start. But I also want to emphasize that I really do not see expert consensus as the primary point of contention here.

comment by Vaniver · 2023-10-03T16:17:19.794Z · LW(p) · GW(p)

I would prefer anchoring on studies that report objective clinical outcomes 

Yeah, that does sound nicer; have those already been done or are we going to have to wait for them?

Replies from: Natália Mendonça
comment by orthonormal · 2023-10-03T19:43:53.106Z · LW(p) · GW(p)

This is a pretty transparent isolated demand for rigor. Can you tell me you've never uncritically cited surveys of self-reported data that make veg*n diets look good?

Replies from: matthew-barnett
comment by Matthew Barnett (matthew-barnett) · 2023-10-03T19:55:05.438Z · LW(p) · GW(p)

This is a pretty transparent isolated demand for rigor.

I don't see what you mean. I thought an isolated demand for rigor is when you demand rigor selectively. But I am similarly skeptical of health claims about diets in almost every circumstance. Can you explain what you think I'm being selective about?

Can you tell me you've never uncritically cited surveys of self-reported data that make veg*n diets look good?

It's hard to go back and check, and maybe I said some things when I was e.g. 16 that I wouldn't stand by today, but I honestly don't think I've taken this strategy at all. In my life I mostly remember being highly skeptical of self-reported health data, especially when it comes to asking people about causal effects. That's not a vegan thing. That's just my take on the value of the scientific method over anecdotes, self-reported data, and personal speculation. Do you have any evidence otherwise?

comment by wilkox · 2023-10-03T01:43:30.868Z · LW(p) · GW(p)

For several of the examples you give, including my own comments, your description of what was said seems to misrepresent the source text.

Active suppression of inconvenient questions: Martín Soto

The charitable explanation here is that my post focuses on naive veganism, and Soto thinks that's a made-up problem.

This is not a charitable or even plausible description of what Martín wrote, and Martín has described this as a 'hyperbolic [LW(p) · GW(p)]' misrepresentation of their position. There is nowhere in the source comment thread [LW(p) · GW(p)] that Martín claims or implies anything resembling the position that naïve veganism is 'made-up'. The closest they come is to express that naïve transitions to veganism are not common in their personal experience ('I was very surprised to hear those anecdotal stories of naive transitions, because in my anecdotal experience across many different vegan and animalist spaces, serious talk about nutrition, and a constant reminder to put health first, has been an ever-present norm.') Otherwise, they seem to take the idea of naïve transitions seriously while considering the effects of 'signaling [sic] out veganism' for discussion: 'To the extent the naive transition accounts are representative of what's going on...', 'seems to me to be one of the central causes of these naive transitions...', '...the single symptom of naive vegan transitions...'.

(Martín also objected to other ways [LW(p) · GW(p)] in which they believe you misrepresented their position, and Slapstick agreed [LW(p) · GW(p)]. I found it harder to evaluate whether they were misrepresented in these other ways, because like Stephen Bennett I found it hard to understand [LW(p) · GW(p)] Martín's position in detail.)

Active suppression of inconvenient questions: 'midlist EA'

But what they actually did was name-check the idea that X is fine before focusing on the harm to animals caused by repeating the claim, which is exactly what you'd expect if the health claims were true but inconvenient. I don't know what this author actually believes, but I do know focusing on the consequences when the facts are in question is not truthseeking.

The author is clear about what they actually believe. They say that the claims that plant foods are poisonous or responsible for Western diseases are 'based on dubious evidence' and are 'dubious health claims'. They then make an argument proceeding from this: that these dubious claims increase the consumption of animal-based foods, which they believe to be unethical, with the evidence for animal suffering being much stronger than the evidence for the 'dubious health claims'.

You may disagree with their assessment of the health claims about plant foods, and indeed they didn't examine any evidence for or against these claims in the quoted post. This doesn't change the fact that the source quotation doesn't fit the pattern you describe. The author clearly does not believe that the health claims about plant foods are 'true but inconvenient', but that they are 'dubious'. Their focus on the consequences of these health claims is not an attempt to 'actively suppress inconvenient questions', but to express what they believe to be true.

Active suppression of inconvenient questions: Rockwell

This comment more strongly emphasizes the claim that my beliefs are wrong, not just inconvenient.

Rockwell expresses, in passing, a broad concern with your posts ('...why I find Elizabeth's posts so troubling...'), although as they don't go into any further detail it's not clear if they think your 'beliefs are wrong' or that they find your posts troubling for some other reason. It's reasonable to criticise this as vague negativity without any argument or details to support it. However, it cannot serve as an example of 'active suppression of an inconvenient question' because it does not seem to engage with any question at all, and there's certainly nowhere in the few words Rockwell wrote on 'Elizabeth's posts' where they express or emphasise 'the claim that [your] beliefs are wrong, not just inconvenient'. (This source could work as an example of 'strong implications not defended').

Active suppression of inconvenient questions: wilkox[1]

...the top comment says that vegan advocacy is fine because it's no worse than fast food or breakfast cereal ads...If I heard an ally described our shared movement as no worse than McDonalds, I would injure myself in my haste to repudiate them.

My comment does not claim 'that vegan advocacy is fine because it's no worse than fast food or breakfast cereal ads', and does not describe veganism or vegan advocacy as 'no worse than McDonalds'. It sets up a hypothetical scenario ('Let's suppose...') in which vegan advocates do the extreme opposite of what you recommend in the conclusions of the 'My cruxes' section of the Change My Mind post, then claims that even this hypothetical, extreme version of vegan advocacy would be no worse than the current discourse around diet and health in general. This was to illustrate my claim that health harms from misinformation are 'not a problem specific to veganism', nor one where 'veganism in particular is likely to be causing significant health harms'.

Had I actually compared McDonalds to real-world vegan advocacy rather than this hypothetical worst-case vegan advocacy, I would have said McDonalds is much worse. You know this, because you asked me [LW(p) · GW(p)] and I told you [LW(p) · GW(p)]. (This also doesn't seem to be an example of 'active suppression of inconvenient questions'.)

Frame control, etc.: wilkox

Over a very long exchange I attempt to nail down his position:

  • Does he think micronutrient deficiencies don't exist? No, he agrees they do.
  • Does he think that they can't cause health issues? No, he agrees they do.

This did not happen. You did not ask or otherwise attempt to nail down whether I believe micronutrient deficiencies exist, and I gave my position on that in the opening comment ('Veganism is a known risk factor for some nutrient deficiencies...'). Likewise, you did not ask or attempt to nail down whether I believe micronutrient deficiencies can cause health issues, and I gave my position on that in the opening comment ('Nutrient deficiencies are common and can cause anything ranging from no symptoms to vague symptoms to life-threatening diseases').

  • Does he think this just doesn't happen very often, or is always caught? No, if anything he thinks the Faunalytics underestimates the veg*n attrition due to medical issues.

You did ask me what I thought about the Faunalytics data ('Do you disagree with their data...or not consider that important...?').

So what exactly does he disagree with me on?

This is answered by the opening sentences of my first comment: 'I feel like I disagree with this post, despite broadly agreeing with your cruxes', because I interpreted your post as making 'an implicit claim' that there are 'significant health harms' of veganism beyond the well-known nutritional deficiencies. I went on to ask whether you actually were making this claim: 'Beyond these well-known issues, is there any reason to expect veganism in particular to cause any health harms worth spending time worrying about?' Over two exchanges on the 'importance' of nutrient deficiencies in veganism, I asked again [LW(p) · GW(p)] and then again [LW(p) · GW(p)] whether you believe that there are health harms of veganism that are more serious and/or less well-known than nutrient deficiencies, and you clarified that you do not [LW(p) · GW(p)], and provided some useful context that helped me to understand why you wrote the post the way you did.

My account of the conversation is that I misread an implicit claim into your post, and you clarified what you were actually claiming and provided context that helped me to understand why the post had been written in the way it was. We did identify a disagreement over the 'importance' of nutrient deficiencies in veganism, but this also seemed explicit and legible. It's hard to construe this as an example where the nature of the disagreement was unclear, or otherwise of 'nailing jello to the wall'.

Wilkox acknowledges that B12 and iron deficiencies can cause fatigue, and veganism can cause these deficiencies, but it's fine because if people get tired they can go to a doctor

I did not claim that fatigue due to B12 or iron deficiencies, or any other health issue secondary to veganism, is 'fine because if people get tired they can go to a doctor'. I claimed that to the extent that people don't see a doctor because of these symptoms, the health harms of veganism are unlikely to be their most important medical problem, because the symptoms are 'minor enough that they can't be bothered', they 'generally don't seek medical help when they are seriously unwell, in which case the risk from something like B12 deficiency is negligible compared to e.g. the risk of an untreated heart attack', or they 'don't have good access to medical care...[in which case] veganism is unlikely to be their most important health concern'. I did not say that every vegan who has symptoms due to nutritional deficiencies can or will go to a doctor (I explicitly said the opposite), nor that this situation is 'fine'.

But it's irrelevant when the conversation is "can we count on veganism-induced fatigue being caught?"

'Can we count on veganism-induced fatigue being caught?' is not a question raised in my original comment, nor in Lukas Finnveden's reply. I claimed that it would not always be caught, and gave some reasons why it might not be caught (symptoms too minor to bother seeing a doctor, generally avoid seeking medical care for major issues, poor access to medical care). Lukas Finnveden's comment added reasons that people with significant symptoms may not seek medical care: they might not notice issues that are nonetheless significant to them, or they might have executive function problems that create a barrier to accessing medical care. There's nowhere in our brief discussion where 'can we count on veganism-induced fatigue being caught?' is under debate.

Bad sources, badly handled: wilkox

Wilkox's comment [LW(p) · GW(p)] on the LW version of the post, where he eventually agrees that veganism requires testing and supplementation for many people (although most of that exchange hadn't happened at the time of linking).

I did not 'eventually agree' to these points, and we did not discuss them at all in the exchange. In my first comment, I said 'Many vegans, including myself, will routinely get blood tests to monitor for these deficiencies. If detected, they can be treated with diet changes, fortified foods, oral supplementation, or intramuscular/intravenous supplementation.'

  1. ^

    I am not an EA, have only passing familiarity with the EA movement, and have never knowingly met an EA in real life. I don't think anything I have written can stand as an example of 'EA vegan advocacy', and actual EAs might reasonably object to being tarred with the same brush. ↩︎

Replies from: philh
comment by philh · 2023-10-03T09:40:16.799Z · LW(p) · GW(p)

So I haven't reread to figure out an opinion on most of this, but wrt this specific point

I found it harder to evaluate whether they were misrepresented in these other ways, because like Stephen Bennett I found it hard to understand [LW(p) · GW(p)] Martín’s position in detail.

I kinda want to flag something like "yes, that's the point"? If Martín's position is hard to pin down, then... like, it's better to say "I don't know what he's trying to say" than "he's trying to say [concrete thing he's not trying to say]", but both of them seem like they fit for the purposes of this post. (And if Elizabeth had said "I don't know what he's trying to say" then I anticipate three different commenters giving four different explanations of what Martín had obviously been saying.)

And, part of the point here is "it is very hard to talk about this kind of thing". And I think that if the response to this post is a bunch of "gotcha! You said this comment was bad in one particular way, but it's actually bad in an interestingly different way", that kinda feels like it proves Elizabeth right?

But also I do want there to be space for that kind of thing, so uh. Idk. I think if I was making a comment like that I'd try to explicitly flag it as "not a crux, feel free to ignore".

Replies from: wilkox
comment by wilkox · 2023-10-03T20:34:03.852Z · LW(p) · GW(p)

And, part of the point here is "it is very hard to talk about this kind of thing". And I think that if the response to this post is a bunch of "gotcha! You said this comment was bad in one particular way, but it's actually bad in an interestingly different way", that kinda feels like it proves Elizabeth right?

This seems like a self-fulfilling prophecy. If I wrote a post that said:

It's common for people on LessWrong to accuse others of misquoting them. For example, just the other day, Elizabeth said:

wilkox is always misquoting me! He claimed that I said the moon is made of rubber, when of course I actually believe it is made of cheese.

and philh said:

I wish wilkox would stop attributing made-up positions to me. He quoted me as saying that the sky is blue. I'm a very well-documented theskyisgreenist.

The responses to that post would quite likely provide evidence in favour of my central claim. But this doesn't mean that the evidence I provided was sound, or that it shouldn't be open to criticism.

Replies from: philh
comment by philh · 2023-10-04T07:40:38.963Z · LW(p) · GW(p)

I don't think this is a great analogy, but basically yeah. This sort of thing is why I included the last paragraph in my previous comment ("I do want there to be space for that kind of thing").

comment by Natália (Natália Mendonça) · 2023-10-02T15:50:38.594Z · LW(p) · GW(p)

Outcomes for veganism are [...] worse than everything except for omnivorism in women.

As I explained elsewhere [LW(p) · GW(p)] a few days ago (after this post was published), this is a very misleading way to describe that study. The correct takeaway is that they could not find any meaningful difference between each diet's association with mortality among women, not that “[o]utcomes for veganism are [...] worse than everything except for omnivorism in women.” 

It's very important to consider the confidence intervals in addition to the point estimates when interpreting this study (or any study, really, when confidence intervals are available). They provide valuable context to the data.

Replies from: jimrandomh, jkaufman
comment by jimrandomh · 2023-10-05T17:46:15.913Z · LW(p) · GW(p)

Mod note: I count six deleted comments by you on this post. Of these, two had replies (and so were edited to just say "deleted"), one was deleted quickly after posting, and three were deleted after they'd been up for a while. This is disruptive to the conversation. It's particularly costly when the subject of the top-level post is about conversation dynamics themselves, which the deleted comments are instances (or counterexamples) of.

You do have the right to remove your post/comments from LessWrong. However, doing so frequently, or in the middle of active conversations, is impolite. If you predict that you're likely to wind up deleting a comment, it would be better to not post it in the first place. LessWrong has a "retract" button which crosses out text (keeping it technically-readable but making it annoying to read so that people won't); this is the polite and epistemically-virtuous way to handle comments that you no longer stand by.

Replies from: Natália Mendonça
comment by Natália (Natália Mendonça) · 2023-10-05T18:02:27.244Z · LW(p) · GW(p)

Thanks for this information. When I did this, it was because I was misunderstanding someone's position, and only realized it later. I'll refrain from deleting comments excessively in the future and will use the "retract" feature when something like this happens again.

comment by jefftk (jkaufman) · 2023-10-02T17:25:48.517Z · LW(p) · GW(p)

That sounds right. When citing a study as finding X is worse than Y, unless you say otherwise people will interpret that as "the study's confidence intervals for X and Y don't overlap" and not the much weaker "the study's point estimate for X is below its point estimate for Y".

(It's a bit less clear in this context, where Elizabeth is trying to say that, contrary to other people's claims, the study does not show that veganism is better than other diets. In that case the point estimate for X being below Y does tell us you shouldn't use the study to argue that X is above Y. But I agree with Natália that people are likely to misinterpret the OP this way.)

Replies from: Natália Mendonça, pktechgirl
comment by Natália (Natália Mendonça) · 2023-10-02T17:50:46.237Z · LW(p) · GW(p)

To be clear, the study found that veganism and pescetarianism were meaningfully associated with lower mortality among men (aHR 0.72, 95% CI [0.56, 0.92] and 0.73, 95% CI [0.57, 0.93], respectively), and that no dietary patterns were meaningfully associated with mortality among women. I don’t think it’s misleading to conclude from this that veganism likely has neutral-to-positive effects on lifespan given this study's data, which was ~my conclusion in the comment I wrote that Elizabeth linked in that section and described as "deeply misleading."
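
To make "meaningfully associated" concrete: the convention here is to treat an association as meaningful only when its 95% CI excludes 1. A minimal sketch of that decision rule, using the numbers quoted in this thread (an illustration of the convention, not the study's own analysis):

```python
# Minimal sketch: treat an adjusted hazard ratio (aHR) as a meaningful
# signal only if its 95% confidence interval excludes 1 ("no difference").
def ci_excludes_one(lo: float, hi: float) -> bool:
    return hi < 1.0 or lo > 1.0

print(ci_excludes_one(0.56, 0.92))  # vegan men, aHR 0.72: True (lower mortality)
print(ci_excludes_one(0.57, 0.93))  # pescetarian men, aHR 0.73: True
print(ci_excludes_one(0.73, 1.01))  # aHR 0.85 discussed below: False (no clear signal)
```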

comment by Elizabeth (pktechgirl) · 2023-10-02T17:40:40.667Z · LW(p) · GW(p)

Thanks Jeff. I'm curious how you feel about my original phrasing, versus what's here.

I agree that the CIs heavily overlap, and drawing strong conclusions from this would be unjustified. I nonetheless think it’s relevant that even if you treat the results as meaningful, the summaries given by the abstract and some commenters are inaccurate (Natalia has since said she was making a more limited claim). That means they’re making two errors (overstating effect, and effect in wrong direction) rather than just one (overstating effect).

I can see how my phrasing here wouldn’t convey all that if you don’t click through the link, and that seems worth fixing, but I’m curious what you think of the underlying point.

Replies from: Natália Mendonça
comment by Natália (Natália Mendonça) · 2023-10-02T18:20:27.733Z · LW(p) · GW(p)

That means they’re making two errors (overstating effect, and effect in wrong direction) rather than just one (overstating effect).

Froolow’s comment [EA(p) · GW(p)] claimed that “there's somewhere between a small signal and no signal that veganism is better with respect to all-cause mortality than omnivorism.” How is that a misleading way of summarizing the adjusted hazard ratio 0.85 (95% CI, 0.73–1.01), in either magnitude or direction? Should he have said that veganism is associated with higher mortality instead? 

None of the comments you mentioned in that section claimed that veganism was associated with lower mortality in all subgroups (e.g. women). But even if they had, the hazard ratio for veganism among women was still in the "right" direction (below 1, though just slightly and not meaningfully). Other diets were (just slightly and not meaningfully) better among women, but none of the commenters claimed that veganism was better than all diets either.

Unrelatedly, I noticed that in this comment (and in other comments you've made regarding my points about confidence intervals) you don't seem to argue that the sentence “[o]utcomes for veganism are [...] worse than everything except for omnivorism in women” is not misleading.

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2023-10-03T01:41:22.529Z · LW(p) · GW(p)

I think one issue here is that my phrasing was bad. I meant “support” as in “doesn’t support claims of superiority”, but reading it now it’s obvious that it could be read as “doesn’t support claims as fine”, which is not what I meant. I agree that veganism works for some people. I want to fix that, although am going to wait until I’m sure there aren’t other changes, because all changes need to be made in triplicate.

I appreciate you pointing out that potential reading so I could fix it. However I am extremely frustrated with this conversation overall. You are holding me to an exacting standard while making basic errors yourself, such as: 
* Quoted me [LW(p) · GW(p)] as saying “X is great”, when what I said was [LW · GW] “If you believe Y, which you shouldn’t, X is great” 
* implying that [EA(p) · GW(p)] when I wrote this post I ignored a response from you, when the response was made a day after this was posted, after I pointed out your non-response in comments. (This was somewhat fixed after another commenter pointed it out)
* Your original comment [LW(p) · GW(p)] on Change My Mind referred to this study as “missing context”, as if it was something I should have known and deliberately left out. That’s a loaded implication in general, but outright unfair when the post is titled “change my mind” and its entire point was asking for exactly that kind of information.


Replies from: Natália Mendonça
comment by Natália (Natália Mendonça) · 2023-10-03T02:05:16.634Z · LW(p) · GW(p)

The second point here was not intended and I fixed it within 2 minutes of orthonormal pointing it out, so it doesn't seem charitable to bring that up. (Though I just re-edited that comment to make this clearer). 

The first point was already addressed here [LW(p) · GW(p)].

I'm not sure what to say regarding the third point other than that I didn't mean to imply that you "should have known and deliberately left out" that study. I just thought it was (literally) useful context. Just edited that comment.


All of this also seems unrelated to this discussion. I'm not sure why me addressing your arguments is being construed as "holding [you] to an exacting standard."

comment by Lao Mein (derpherpize) · 2023-10-19T12:40:34.827Z · LW(p) · GW(p)

I am getting increasingly nervous about the prevalence of vegetarians/vegans. The fact that they can get meat banned at EA events while being a minority is troubling and reminds me that they can seriously threaten my way of life. 

Replies from: Erich_Grunewald
comment by Erich_Grunewald · 2023-10-21T15:31:49.259Z · LW(p) · GW(p)

Well, it's not like vegans/vegetarians are some tiny minority in EA. Pulling together some data from the 2022 ACX survey, people who identify as EA are about 40% vegan/vegetarian, and about 70% veg-leaning (i.e., vegan, vegetarian, or trying to eat less meat and/or offsetting meat-eating for moral reasons). (That's conditioning on identifying as an LW rationalist, since anecdotally I think being vegan/vegetarian is somewhat less common among Bay Area EAs, and the ACX sample is likely to skew pretty heavily rationalist, but the results are not that different if you don't condition.)

ETA: From the 2019 EA survey, 46% of EAs are vegan/vegetarian and 77% veg-leaning [EA · GW].

comment by tailcalled · 2023-09-29T07:05:26.410Z · LW(p) · GW(p)

I would really like to have a community of people who take truth-seeking seriously. While I can do some research, the world is too big for me to research most things. Furthermore, the value of the research that I do could be much bigger if others could benefit from it, but this would require a community that upholds proper epistemic standards towards me and communicates the value of information well. I assume other people face the same problems: not having the resources to research everything, and finding that it is inefficient for them to research the things they do research.

I think this can be fixed by getting a couple of honest people representing different interests together for each topic, having them perform research that answers the most commonly relevant questions on the topic, and writing up the answers in a convenient format.

(At least up to a point? People are, probably rightfully, skeptical that this approach can be used to research who is an abuser or not. But for "scientific" questions like veganism, which concern subjects that are present in many places across the world like human nutritional needs or means of food production, and therefore feasible to collect direct information on without too much interference, it seems like it should be feasible.)

The rationalist community seems too loosely organized to handle this automatically. The EA community seems too biased and maybe also too loose to handle it. So I would like to create a community within rationalism to address it. For now, here is a Discord link for it: https://discord.gg/sTqMq8ey

Note that I don't mean this to bash vegans. While the vegan community is often dishonest, I have the impression that the carnist community is also often dishonest. I think that people on all sides are too focused on creating counternarratives to places where they are being attacked, instead of creating actionable answers to important questions, and I would like a community that just focuses on 1) figuring out what questions people have, and 2) answering them as accurately as possible, in easy-to-understand formats, and communicating the ranges of uncertainty and the raw evidence used to answer them.

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2023-09-29T07:30:54.535Z · LW(p) · GW(p)

This reminds me of Slate Star Codex's Adversarial Collaboration competition.

Replies from: mruwnik, tailcalled
comment by mruwnik · 2023-09-29T14:17:35.348Z · LW(p) · GW(p)

That's actually what got me to stop eating (or at least buying) meat

comment by tailcalled · 2023-09-29T07:35:52.474Z · LW(p) · GW(p)

Yes, I tried participating in this twice and am probably somewhat inspired by it.

comment by trevor (TrevorWiesinger) · 2023-09-29T01:58:10.386Z · LW(p) · GW(p)

I'm really glad that this is being evaluated. I don't think people realize just how much is downstream of EA's community building: if EA grows at a rate of 1.5x per year, then EA's size is less than 6 years out from decupling (10x). No matter who you are or what you are doing, EA decupling in size will inescapably saturate your environment with people, so if that's going to happen then it should at least be done right instead of wrong. That shouldn't be a big ask.
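
For concreteness, the arithmetic behind "less than 6 years", assuming steady 1.5x/year growth:

```python
import math

# Years for EA to grow 10x at a steady 1.5x per year: solve 1.5**n = 10.
years_to_10x = math.log(10) / math.log(1.5)
print(years_to_10x)  # ≈ 5.68, i.e. under 6 years
```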

The real challenge is creating an epistemic immune system that can fight threats we can’t even detect yet.

Hard agree on this. If slow takeoff happens soon, this will inescapably become an even more serious problem than it already is. There are so many awful and complicated things contained within "threats we can't even detect yet" when you're dealing with historically unprecedented information environments.

Replies from: pktechgirl, FiftyTwo
comment by Elizabeth (pktechgirl) · 2023-09-29T04:13:34.640Z · LW(p) · GW(p)

I share your concerns with growth and epistemics, but haven't been able to articulate it with anywhere near the degree of precision or evidence I have in this post. If you have any specifics you could point me to (including anecdotal) I'd really appreciate it. 

Replies from: TrevorWiesinger
comment by trevor (TrevorWiesinger) · 2023-09-30T15:40:43.327Z · LW(p) · GW(p)

My thinking about this (3 minute read) is that EA will be deliberately hijacked [LW · GW] by an external organization or force, not a gradual erosion of epistemic norms causing EA to become less healthy, which is the focus of this post. I generally focus on high-tech cognitive hacks, not the old-fashioned use of writing by humans that this post focuses on (using human intelligence to find galaxy-brained combinations of words that maximize for effect). 

But I think that an internal conflict between animal welfare and the rest of EA is at risk of being exploited by outsiders, particularly vested interests that EA steps on such as those related to the AI race (e.g. Facebook [LW(p) · GW(p)] or Intelligence Agencies [LW · GW]).

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2023-09-30T20:11:02.148Z · LW(p) · GW(p)

My thinking about this (3 minute read) is that EA will be deliberately hijacked by an external organization or force

The recent examples linked from LW on the EA forum of (my personal judgement) batshit craziness about animal welfare (e.g. 12 (ETA: 14 [LW · GW]) bees are worth 1 human) has had me wondering about entryism by batshit crazy animal rights folks.

ETA: What's Ziz up to these days?

Replies from: Erich_Grunewald
comment by Erich_Grunewald · 2023-10-01T10:30:46.186Z · LW(p) · GW(p)

e.g. 12 (ETA: 14) bees are worth 1 human

This is a misrepresentation of what the report says. The report [EA · GW] says that, conditional [EA · GW] on hedonism, valence symmetry, the animals being sentient, and other assumptions, the intensity of positive/negative valence that a bee can experience is 7% that of the positive/negative intensity that a human can experience. How to value creatures based on the intensities of positively/negatively valenced states they are capable of is a separate question, even if you fully accept the assumptions. (ETA: If you assume utilitarianism and hedonism etc., I think it is pretty reasonable to anchor moral weight (of a year of life) in range of intensity of positive/negative valence, while of course keeping the substantial uncertainties around all this in mind.)

On bees in particular, the authors write:

We also find it implausible that bees have larger welfare ranges than salmon. But (a) we’re also worried about pro-vertebrate bias; (b) bees are really impressive; (c) there's a great deal of overlap in the plausible welfare ranges for these two types of animals, so we aren't claiming that their welfare ranges are significantly different; and (d) we don’t know how to adjust the scores in a non-arbitrary way. So, we’ve let the result stand.

I think when engaging in name-calling ("batshit crazy animal rights folks") it is especially important to get things right.

(COI: The referenced report was produced by my employer, though a different department.)

Replies from: frontier64, Richard_Kennaway
comment by frontier64 · 2023-10-01T15:42:39.126Z · LW(p) · GW(p)

e.g. 12 (ETA: 14) bees are worth 1 human

This is a misrepresentation of what the report says.

The report:

Instead, we’re usually comparing either improving animal welfare (welfare reforms) or preventing animals from coming into existence (diet change → reduction in production levels) with improving human welfare or saving human lives.


I don't think he's misrepresenting what the report says at all. Trevor gets the central point of the post perfectly. The post's response to the heading "So you’re saying that one person = ~three chickens?" is: no, that's just the year-to-year-of-life comparison; chickens have shorter lives than humans, so the life-to-life comparison is more like 1/16. Absolutely insane. From the post:

Then, humans have, on average, 16x this animal’s capacity for welfare; equivalently, its capacity for welfare is 0.0625x a human’s capacity for welfare.

And elsewhere people say that capacity for welfare is how one should do cause prioritization. So the simple conclusion is one human life = 16 chicken lives. The organization is literally called "Rethinking Priorities", i.e. stop prioritizing humans so much and accept all our unintuitive, mostly guess-based math that we use to argue how animal welfare can trump your own welfare. The post uses more words than that, sure, but tradeoffs between animal and human lives are the central point of the post and really the whole sequence. If I sound angry it's because I am. Saying that a human life is comparable to a certain number of animal lives is very close to pure evil on top of being insane.

You further say:

The report says that, conditional on hedonism, valence symmetry, the animals being sentient, and other assumptions, the intensity of positive/negative valence that a bee can experience is 7% that of the positive/negative intensity that a human can experience.

And no, the report really doesn't say that. The report says that somehow, people should still mostly accept Rethinking Priorities' conclusions even if they disagree with the assumptions:

“I don't share this project’s assumptions. Can't I just ignore the results?" We don’t think so. First, if unitarianism is false, then it would be reasonable to discount our estimates by some factor or other. However, the alternative—hierarchicalism, according to which some kinds of welfare matter more than others or some individuals’ welfare matters more than others’ welfare—is very hard to defend.


Second, and as we’ve argued, rejecting hedonism might lead you to reduce our non-human animal estimates by ~⅔, but not by much more than that.


So, skepticism about sentience might lead you to discount our estimates, but probably by fairly modest rates.

In response to someone commenting in part:

saving human lives is net positive

The post author's reply is:

This is a very interesting result; thanks for sharing it. I've heard of others reaching the same conclusion, though I haven't seen their models. If you're willing, I'd love to see the calculations. But no pressure at all.

My takeaway from the whole thing is that you're running a motte and bailey where

  • Motte = We're just doing an analysis of the range of positive to negative experience that animals can feel as compared to a human

  • Bailey = We're doing the above and also range of positive to negative experience is how we should decide allocation of resources between species.

Replies from: Erich_Grunewald
comment by Erich_Grunewald · 2023-10-01T18:08:51.930Z · LW(p) · GW(p)

Assuming you have the singular "you" in mind: no, I do not think I am running a motte and bailey. I said above that if you accept the assumptions, I think using the ranges as (provisional, highly uncertain) moral weights is pretty reasonable, but I also think it's reasonable to reject the assumptions. I do think it is true that some people have (mis)interpreted the report and made stronger claims than is warranted, but the report is also full of caveats and (I think) states its assumptions and results clearly.

The report:

Instead, we’re usually comparing either improving animal welfare (welfare reforms) or preventing animals from coming into existence (diet change → reduction in production levels) with improving human welfare or saving human lives.

Yes, the report is intended to guide decision-making in this way. It is not intended to provide a be-all-end-all estimate. The results still need to be interpreted in the context of the assumptions (which are clearly stated up front). I would take it as one input when making decisions, not the only input.

The post's response to the heading "So you’re saying that one person = ~three chickens?" is: no, that's just the year-to-year-of-life comparison; chickens have shorter lives than humans, so the life-to-life comparison is more like 1/16. Absolutely insane.

No, that is not the post's response to that heading. It also says: "No. We’re estimating the relative peak intensities of different animals’ valenced states at a given time. So, if a given animal has a welfare range of 0.5 (and we assume that welfare ranges are symmetrical around the neutral point), that means something like, 'The best and worst experiences that this animal can have are half as intense as the best and worst experiences that a human can have' [...]" There is a difference between comparing the most positive/negative valenced states an animal can achieve and their moral worth.

The report says that somehow, people should still mostly accept Rethinking Priorities' conclusions even if they disagree with the assumptions:

“I don't share this project’s assumptions. Can't I just ignore the results?" We don’t think so. First, if unitarianism is false, then it would be reasonable to discount our estimates by some factor or other. However, the alternative—hierarchicalism, according to which some kinds of welfare matter more than others or some individuals’ welfare matters more than others’ welfare—is very hard to defend.

I think I disagree with your characterization, but it depends a bit on what you mean by "mostly". The report makes a weaker claim, that if you don't accept the premises, you shouldn't totally ignore the conclusions (as opposed to "mostly accepting" the conclusions). The idea is that even if you don't accept hedonism, it would be weird if capacity for positively/negatively valenced experiences didn't matter at all when determining moral weights. That seems reasonable to me and I don't really see the issue?

So if you factor in life span (taking 2 months for a drone) and do the ⅔ reduction for not accepting hedonism, you get a median of 1 human life = ~20K bee lives, given the report's other assumptions. That's 3 OOMs more than what Richard Kennaway wrote above.
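
For concreteness, a rough reconstruction of that arithmetic; the 80-year human lifespan is an assumed figure, not one from the report:

```python
# Rough reconstruction of the ~20K figure under the stated assumptions.
bee_welfare_range = 0.07       # bee valence intensity relative to human (from the report)
hedonism_discount = 1 / 3      # the "~2/3 reduction" for not accepting hedonism
drone_lifespan_years = 2 / 12  # 2 months, as stated above
human_lifespan_years = 80      # assumed average human lifespan (not from the report)

bee_life = bee_welfare_range * hedonism_discount * drone_lifespan_years
human_life = 1.0 * human_lifespan_years
print(human_life / bee_life)   # ≈ 20,600 bee lives per human life
```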

In response to someone commenting in part:

saving human lives is net positive

The post author's reply is:

This is a very interesting result; thanks for sharing it. I've heard of others reaching the same conclusion, though I haven't seen their models. If you're willing, I'd love to see the calculations. But no pressure at all.

I am not sure what you are trying to say here, could you clarify?

comment by Richard_Kennaway · 2023-10-01T17:01:48.811Z · LW(p) · GW(p)

I'm going by the summary by jefftk that I linked to. Having glanced at the material it's based on, and your links, I am not inclined to root through it all to make a more considered assessment. I suspect I would only end up painting a similar picture with a finer brush. Their methods of getting to the strange places they end up already appear to require more of my attention to understand than I am willing to spend on the issue.

comment by FiftyTwo · 2023-10-20T10:12:48.604Z · LW(p) · GW(p)

I've definitely noticed a shift in the time I've been involved or aware of EA. In the early 2010s it was mostly focused on global poverty and the general idea of evidence-based charity, and veganism was peripheral. Now it seems like a lot of groups are mainly about veganism, and very resistant to people who think otherwise. And since veganism is a minority position, that is going to put off people who would otherwise be interested in EA.

comment by Martín Soto (martinsq) · 2023-10-08T12:55:24.740Z · LW(p) · GW(p)

I think my position has been strongly misrepresented here.

As per the conclusion of this other comment thread [LW(p) · GW(p)], I here present a completely explicit explanation of where and how I believe my position to have been strongly misrepresented. (Slapstick also had a shot at that in this shorter comment [LW(p) · GW(p)].)

Misrepresentation 1: Mistaking arguments

Elizabeth summarizes

The charitable explanation here is that my post focuses on naive veganism, and Soto thinks that’s a made-up problem. He believes this because all of the vegans he knows (through vegan advocacy networks) are well-educated on nutrition.

It is false that I think naive veganism is a made-up problem, and I think Elizabeth is drawing the wrong conclusions from the wrong comments.

Her second sentence is clearly a reference to this short comment [LW(p) · GW(p)] of mine (which was written as a first reaction to her posts, before my longer and more nuanced explanation of my actual position [LW(p) · GW(p)]):

I don't doubt your anecdotal experience is as you're telling it, but mine has been completely different, so much so that it sounds crazy to me to spend a whole year being vegan, and participating in animal advocacy, without hearing mention of B12 supplementation. Literally all vegans I've met have very prominently stressed the importance of dietary health and B12 supplementation.

As should be obvious, this is not contesting the existence of naive veganism ("I don't doubt your anecdotal experience"), but just contrasting it with my own personal anecdotal experience. This was part of my first reaction, and didn't yet involve a presentation of my actual holistic position.

Elizabeth arrives at the conclusion that, because of my anecdotal experience, I believe naive veganism doesn't exist (that is, that I don't trust the other anecdotal experiences reported by her or other commenters), and that that's the reason why I don't agree with her framing. I think my longer explanation [LW(p) · GW(p)] makes it evident that I'm not ignoring the existence of naive veganism, but instead quantitatively weighing it against other consequences of Elizabeth's posts and framings. For example:

To the extent the naive transition accounts are representative of what's going on in rat/EA spheres, some intervention that reduces the number of transitions that are naive (while fixing the number of transitions) would be a Pareto-improvement. And an intervention that reduces the number of transitions that are naive, and decreases way less the number of transitions, would also be net-positive.

and

My worry, though, is that singling out veganism for this is not the most efficient way to achieve this. I hypothesize that

  1. Naive transitions are more correlated with social dynamics about insufficient self-care not exclusive (nor close to exclusive) to veganism in rat/EA spheres.

Of course, these two excerpts are already present in the screenshots presented in the post (which indeed contain some central parts of my position, although they leave out some important nuance), so I find this misrepresentation especially hard to understand or explain. I think it's obvious, when I say something like "Naive transitions are more correlated with social dynamics...", that I endorse their existence (or at least their possible existence).

Yet another example: in the longer text I say:

It feels weird for me to think about solutions to this community problem, since in my other spheres it hasn't arisen.

This explicitly acknowledges that this is a problem that exists in this community. Indeed, engaging with Elizabeth's writing and other anecdotal accounts in the comments has updated my estimate of how many naive vegans exist in the rationalist community upwards. But this is mostly independent of my position, as the next paragraph addresses.

You might worry, still, that my position (even if this is not stated explicitly) is motivated in reality by such a belief, and that the other arguments are rationalizations. First off, I will note that this is a more complex inference, and while it is possible to believe it, it's clearly not a faithful representation of the explicit text, and should be flagged as such. But nonetheless, in the hopes of convincing you that this is not going on, I will point out that my actual position and arguments have to do mostly with social dynamics, epistemics, and especially the community's relationship to ethics. (See Misrepresentation 2: Mistaking claims.)

Given, then, that the most important consequences of all this go through dynamics, epistemics and the relationship to ethics (since this community has some chance of steering big parts of the future), I think it's clear that my position isn't that sensitive to the exact number of naive vegans. My position is not about ignoring those that exist: it is about how to go about solving the problem. It is about praxis, framing and optimal social dynamics.

Even more concretely, you might worry I'm pulling off a Motte and Bailey, trying to "quietly imply" with my text that the number of naive vegans is low, even if I don't say it explicitly. You might get this vibe, for example, from the following phrasing:

To the extent the naive transition accounts are representative of what's going on in rat/EA spheres,...

I want to make clear that this phrasing is chosen to emphasize that I think we're still somewhat far from having rigorous scientific knowledge about how prevalent naive veganism is in the community (for example, because your sample sizes are small, as I have mentioned in the past). That's not to neglect the possibility of naive veganism being importantly prevalent, as acknowledged in excerpts above.

I also want to flag that, in the short comment [LW(p) · GW(p)] mentioned above, I said the following:

since most vegans do supplement [citation needed, but it's been my extensive personal and online experience, and all famous vegan resources I've seen stress this]

This indeed is expressing my belief that, generally, vegans do supplement, based on my anecdotal experience and other sources. This is not yet talking explicitly about whether this is the case within the community (I didn't have strong opinions about that yet), but it should correctly be interpreted (in the context of that short comment) as a vague prior I am using to be (a priori) doubtful of deriving strong conclusions about the prevalence in the community. I do still think this vague prior is somewhat useful, and that we still don't have conclusive evidence (as mentioned above). But it is also true (as mentioned above) that this was my first reaction, written before my long text holistically representing my position, and since then I have updated my estimate of how many naive vegans exist in the rationalist community upwards. So it makes sense that this first short comment was more tinged by an implicit lower likelihood of that possibility, but this was superseded by further engagement with posts and comments, and that is explicitly acknowledged in my later text, as the above excerpts demonstrate.

Finally, one might say "well, of course Elizabeth didn't mean that you literally think 0 naive vegans exist, it was just a way to say you thought too few of them existed, or you were purposefully not putting weight on them". First off, even if those had been my actual stated or implied positions, I want to note that this use of unwarranted hyperbole can already tinge a summary with unwarranted implications (especially a short summary of a long text), and thus I would find it an implicit misrepresentation. And this is indeed part of what I think is going on, and that's why I repeatedly mention that my problem is more with framing and course of action than with informational content itself. But also, as evidenced by the excerpts and reasoning above, it is not the case that I think too few naive vegans exist, or that I purposefully don't represent them. I acknowledge the possibility that naive veganism is prevalent amongst community vegans, and also imply that my worries are not too sensitive to the exact number of naive vegans, and are of a different nature (related to epistemics and dynamics).

In summary, I think Elizabeth went something like "huh, if Martín is expressing these complex worries, it probably is just because he thinks naive veganism is not a real problem, since he could be understood to have some doubts about that in his early comments". And I claim that is not what's going on, and that it's completely missing the point of my position, which doesn't rely in any way on naive veganism not being a thing. On the contrary, it discusses directly what to do in a community where naive veganism is big. I hope the above excerpts and reasoning have demonstrated that.

Misrepresentation 2: Mistaking claims

Elizabeth summarizes

I have a lot of respect for Soto for doing the math and so clearly stating his position that “the damage to people who implement veganism badly is less important to me than the damage to animals caused by eating them”.

This, of course, makes it sound very explicitly as though in my text I only weigh two variables against each other: the disvalue caused by naive veganism, and the disvalue caused by animal exploitation.

This is missing a third variable that is very present in my long text, and to which many paragraphs are dedicated or make reference: the consequences of all this (posts, framing, actions, etc.) for the social dynamics of the community, and for the community's (and its individuals') relationship to ethics.

In fact, not only is this third variable very present in the text, but in some places I explicitly say it's the most important variable of the three, thus demonstrating that my arguments have mostly to do with it. Here's one excerpt making that explicit:

As an extreme example, I very strongly feel like financing the worst moral disaster of current times so that "a few more x-risk researchers are not slightly put off from working in our office" is way past the deontological failsafes. As a less extreme example, I strongly feel like sending a message that will predictably be integrated by most people as "I can put even less mental weight on this one ethical issue that sometimes slightly annoyed me" also is. And in both cases, especially because of what they signal, and the kind of community they incentivize.

Here's another one, even clearer:

But I am even more worried about the harder-to-pin-down communal effects, "tone setting", and the steering of very important sub-areas of the EA community into sub-optimal ethical seriousness (according to me), which is too swayed by intellectual fuzzies, instead of actual utilons.

And finally, in my response [LW(p) · GW(p)] answering some clarificatory questions from Elizabeth (several days before this post was published), here's an even more explicit one:

Of course, I too don't optimize for "number of vegans in the world", but just a complex mixture including that as a small part. And as hinted above, if I care about that parameter it's mainly because of the effects I think it has in the community. I think it's a symptom (and also an especially actionable lever) of more general "not thinking about ethics / Sincerity in the ways that are correct". As conscious as the members of this community try to be about many things, I think it's especially easy (through social dynamics) to turn a blind eye on this, and I think that's been happening too much.

Indeed, one of Elizabeth's screenshots already emphasizes this, placing it as one of the central parts of my argument (although it doesn't yet explicitly mention that it's, for me, the most important consequence):

Relatedly, incentivizing a community that's more prone to ignoring important parts of the holistic picture when that goes to the selfish benefit of individuals. (And that's certainly something we don't want to happen around the people taking important ethical decisions for the future.)

I do admit that, stylistically speaking, this point would have been more efficiently communicated had I explicitly mentioned its importance very near the top of my text (so that it appeared, for example, in her first screenshot).

Nonetheless, as the above excerpts show, the point (this third, even more important variable) was made explicit in some fragments of the text (even if the reader could have already understood it as implied by other parts that don't mention it explicitly). And so, I cannot help but see Elizabeth's sentence above as a direct and centrally important misrepresentation of what the text explicitly communicated.

You might worry, again, that there's some Motte and Bailey going on, of the form "explicitly mention those things, but don't do it at the top of the text, so that it seems like truly animal ethics is the only thing you care about, or something". While I'm not exactly sure what I'd gain from this practice (since it's patent anyway that many readers disagree with me ethically about the importance of animals, so I might as well downweigh its importance), I will still respond to this worry by pointing out that, even if the importance of this third variable is only explicitly mentioned further down in the text, most of the text (and indeed, even parts of the screenshots) already deals with it directly, thus implying its importance and centrality to my position. Furthermore, most of the text discusses / builds towards the importance of this third variable in a more detailed and nuanced way than just stating it explicitly (to give a better holistic picture of my thoughts and arguments).

In summary, not only does this representation neglect a central part of my text (something that I explicitly mentioned was the most important variable in my argument), but also, because of that, it attributes to me a statement that I have not made and do not hold. While I am uncertain about it (mostly because of remaining doubts about how prevalent naive veganism is), it is conceivable (if we lived in a world with high naive veganism) that, if we ignored all consequences of these posts/framings/actions except for the two variables Elizabeth mentions, attacking naive veganism through these posts would be at least net-positive (even if, still, in my opinion, not the optimal approach). But of course, the situation completely changes when realistically taking into account all consequences.

How might Elizabeth have arrived at this misrepresentation? Well, it is true that at the start of my long text I mention:

And an intervention that reduces the number of transitions that are naive, and decreases way less the number of transitions, would also be net-positive.

It is clear how this short piece of text can be interpreted as implying that the only two important variables are the number of naive transitions and the number of transitions (even though shortly afterwards I make clear these are not the only important variables, most of the text is devoted to discussing this, and I even explicitly mention that this is not the case). But clearly that doesn't imply that I believe "the damage to people who implement veganism badly is less important to me than the damage to animals caused by eating them". I was just stating that, under some situations, it can make sense to develop certain kinds of health-focused interventions (to make evident that I'm not saying "one should never talk about vegan nutrition", which is what Elizabeth was accusing me of doing). And indeed a central part of my position as stated in the text was that interventions are necessary, but of a different shape from Elizabeth's posts (and I go on to explicitly recommend examples of these shapes). But of course that's not the same as engaging in a detailed discussion about which damages are most important, or already taking into account all of the more complex consequences that different kinds of interventions can have (which I go on to discuss in more detail in the text).

Misrepresentation 3: Missing counter-arguments and important nuance

Elizabeth explains

There are a few problems here, but the most fundamental is that enacting his desired policy of suppressing public discussion of nutrition issues with plant-exclusive diets will prevent us from getting the information to know if problems are widespread. My post and a commenter’s report [LW(p) · GW(p)] on their college group are apparently the first time he’s heard of vegans who didn’t live and breathe B12. 

But I can’t trust his math because he’s cut himself off from half the information necessary to do the calculations. How can he estimate the number of vegans harmed or lost due to nutritional issues if he doesn’t let people talk about them in public?

First off, the repeated emphasis on "My post and a commenter's report..." (when addressing this different point she's brought up) again makes it sound as if my position was affected by, or relied on, a twisted perception of the world in which naive vegans don't exist. I have already addressed why this is not the case in Misrepresentation 1: Mistaking arguments, but I would like to call attention again to the fact that framing and tone are used to caricature my position, or to make it seem like I haven't explicitly addressed Elizabeth's point here (and haven't done so because of a twisted perception). I already find this mildly misleading, given that I had directly addressed that point, and that the content of the text clearly shows my position doesn't depend on the non-existence of naive veganism as a community problem.

But of course, it's not clear (in this one particular sentence) where authorial liberties of interpretation should end. Maybe Elizabeth is just trying to psychoanalyze me here, finding the hidden motives for my text (even when the text explicitly states different things). First, I would have preferred this to be flagged more clearly, since the impression I (and probably most readers, who of course won't read my long comment) get from this paragraph is that my text showcased an obvious-to-all naiveté and didn't address these points. Second, in Misrepresentation 1: Mistaking arguments I have argued why these hidden motives are not real (and again, that is clear from the content of the long text).

Now on to Elizabeth's main point. In my response [LW(p) · GW(p)] to Elizabeth's response [LW(p) · GW(p)] to my long text [LW(p) · GW(p)] (which was sent several days before the post's publication), I addressed some clarifications that Elizabeth had asked for. There, responding directly to her claim that (in her immediate experience) the number of "naive vegans turned non-naive" had been much greater than the number of "vegans turned non-vegan" (a claim which, again, my holistic position doesn't rely on very strongly in quantitative terms), I said:

The negative effects of the kind "everyone treats veganism less seriously, and as a result less people transition or are vocal about it" will be much more diffused, hard-to-track, and not-observed, than the positive effects of the kind "this concrete individual started vegan supplements". Indeed, I fear you might be down-playing how easy it is for people to arrive (more or less consciously) at these rationalized positions, and that's of course based on my anecdotal experience both inside and outside this community.

Thus, to her claim that I have "cut myself off from half the information", I was already pre-emptively responding by noting that (in my opinion) she has cut herself off from the other half of the information, by ignoring these kinds of more diluted effects (which, according to my position, have the biggest impact on the third and most important variable of social dynamics, epistemics, and ethical seriousness). Again, it is also clear in this excerpt that I am worrying more about "diluted effects on social dynamics" than about "the exact figure of how widespread naive veganism is".

Indeed (and offering a more general diagnosis of the misrepresentation that has happened here), I think Elizabeth hasn't correctly understood that my holistic position, as represented in those texts (and demonstrated in the excerpts presented above), brought forth a more general argument, not limited to short-term interventions against naive veganism, nor sensitively relying on how widespread naive veganism is.

Elizabeth understands me as saying "we should ignore naive veganism". And then, of course, the bigger naive veganism is, the bigger a mistake I might have been making. But in reality my arguments and worries are about framing and tone, and about comparing different interventions based on all of their consequences, including the "non-perfectly-epistemic" consequences of undesirably exacerbating this or that dynamic. Here's an excerpt of my original long text exemplifying that:

As must be clear, I'd be very happy with treating the root causes, related to the internalized optimize-y and obsessive mindset, instead of the single symptom of naive vegan transitions. This is an enormously complex issue, but I a priori think available health and wellbeing resources, and their continuous establishment as a resource that should be used by most people (as an easy route to having that part of life under control and not spiraling, similar to how "food on weekdays" is solved for us by our employers), would provide the individualization and nuance that these problems require.

Even more clearly, here is one excerpt where I mention I'm okay with running clinical trials to get whatever information we might need to better navigate this situation:

Something like running small group analytics on some naive vegans as an excuse for them to start thinking more seriously about their health? Yes, nice! That's individualized, that's immediately useful. But additionally extracting some low-confidence conclusions and using them to broadcast the above message (or a message that will get first-order approximated to that by 75%) seems negative.

The above makes clear that my worry is not about obtaining or making available that information. It is about the framing and tone of Elizabeth's message, and the consequences it will have when naively broadcast (without accounting for a part of reality: social dynamics).

Finally, Elizabeth says my desired policy is "suppressing public discussion". Of course, that's already a value judgement, and it's tricky to debate what counts as "suppressing public discussion" and what counts as "acknowledging the existence of social dynamics, and not shooting yourself in the foot by doing something that seems bad when taking them into account". I'm confident that my explanations and excerpts above satisfactorily argue that I advocated for the latter, not the former. But then again, as with the hidden motives mentioned above, arriving at different conclusions than I do about this (about the nature of what I have written) is not misrepresentation, just an opinion.

What I do find worrisome is how this opinion has been presented and broadcast (so, again, framing). If my position had been more transparently represented, or if Elizabeth had given up on trying to represent it faithfully in a short text and had nonetheless mentioned explicitly that her interpretation of that text was that I was trying to suppress public discussion (even though I had explicitly addressed public discussion and when and how it might be net-positive), then maybe it would have been easier for the average reader to notice that there might be an important difference of interpretations going on here, and that they shouldn't update so hard on her interpretation as if I had explicitly said "we shouldn't publicly discuss this (under any framing)". And even then I would worry that this over-represented her side of the story (although part of that is unavoidable).

But this was not the case. These interpretations were presented in a shape pretty indistinguishable from what would have been an explicitly endorsed summary. Her summary looks exactly the same as it would look had I not addressed and answered the points she brings up in any way, or had I explicitly stated the claims and attitudes she attributes to me.

In summary, although I do think this third misrepresentation is less explicitly evident than the other two (because it is mixed up with Elizabeth's interpretation of things), I don't think her opinions have been presented in a shape well-calibrated to what I was and wasn't saying, and I think this has led the average reader to, together with Elizabeth, strongly misrepresent my central positions.

Thank you for reading this wall of overly explicit text.

comment by Stephen Bennett (GWS) · 2023-10-04T06:15:09.318Z · LW(p) · GW(p)

The section "Frame control" does not link to the conversation you had with wilkox, but I believe you intended for there to be one (you encourage readers to read the exchange). The link is here: https://www.lesswrong.com/posts/Wiz4eKi5fsomRsMbx/change-my-mind-veganism-entails-trade-offs-and-health-is-one?commentId=uh8w6JeLAfuZF2sxQ [LW(p) · GW(p)]

Replies from: pktechgirl, pktechgirl
comment by Elizabeth (pktechgirl) · 2023-10-06T01:30:02.117Z · LW(p) · GW(p)

Okay, it looks like the problem mostly occurred when I copy/pasted from Google Docs to Wordpress, which lost a lot of the links (but not all? maybe the problem was that it lost some images, and when I copied them over I lost the links?). Lightcone just launched a resync-to-RSS feature that has hopefully worked and updated this post. If it hasn't, I am currently too battered and broken by Wordpress's shitty editor (which apparently can't gracefully handle posts of this size) to do more tonight.

comment by Elizabeth (pktechgirl) · 2023-10-04T20:03:59.835Z · LW(p) · GW(p)

oh god damn it. lesswrong doesn't automatically pick up edits in response to updates on my own blog, so I copy-pasted, and it looks like all the image links were lost. This isn't feasible to fix, so for now I've put up a warning and bugged the lesswrong team about it.

Thanks for catching this, would have been a huge issue. 

comment by shminux · 2023-09-29T23:07:54.765Z · LW(p) · GW(p)

Thought I'd comment in brief. I very much enjoyed your post and I think it is mostly right on point. I agree that EA does not have great epistemic hygiene, given what its aspirations are, and the veganism discussion is a case in point. (Other issues related to EA and CEA have been brought up lately in various posts, and are not worth rehashing here.)

As for the quoted exchange with me, I agree that I have not stated a proper disclaimer, which was quite warranted, given the thrust of the post. My only intended point was that, while a lot of people do veganism wrong and some are not suited to it at all, an average person can be vegan without adverse health effects, as long as they eat a varied and enriched plant-based diet and periodically check their vitamin/nutrient/mineral levels and make dietary adjustments as necessary. Some might find out that they are in the small minority for whom a vegan diet is not feasible, and they would do well to focus on what works for them and contribute to EA in other ways. Again, I'm sorry this seems to have come across wrong.

Oh, and cat veganism is basically animal torture; those who want to wean cats off farmed-animal food should focus on vat-grown meat for pet food, etc.

Replies from: philh
comment by philh · 2023-10-02T14:09:20.941Z · LW(p) · GW(p)

I agree that I have not stated a proper disclaimer, which was quite warranted, given the thrust of the post.

To clarify: it's not clear to me whether you think it would have been warranted of you to give a disclaimer, or whether you think not-giving it was warranted?

My only intended point was that, while a lot of people do veganism wrong and some are not suited to it at all, an average person can be vegan without adverse health effects, as long as they eat a varied and enriched plant-based diet and periodically check their vitamin/nutrient/mineral levels and make dietary adjustments as necessary.

Your original comments said nothing about periodically checking levels and making adjustments. So if that was part of your intended point, you executed your intent very poorly. (My guess would be that four months later, you misremember your intent at the time. Which isn't a big deal. Whatever the reason for the omission, the omission seems worth noting.)

(I separately think it's worth noting that your comment sounds like you're observing "my partner does X" and concluding "the average person can do X", which is obviously not good reasoning. Like, maybe you have more evidence than you're letting on, but the natural-to-me reading of your post is that you're making a basic error.)

comment by Martín Soto (martinsq) · 2023-09-29T10:16:15.376Z · LW(p) · GW(p)

Hi Elizabeth, I feel like what I wrote in those long comments has been strongly misrepresented in your short explanations of my position in this post, and I kindly ask for a removal of those parts of the post until this has been cleared up (especially since I had in the past offered to provide opinions on the write-up). Sadly I only have 10 minutes to engage now, but here are some object-level ways in which you've misrepresented my position:

The charitable explanation here is that my post focuses on naive veganism, and Soto thinks that’s a made-up problem.

Of course, my position is not as hyperbolic as this.

his desired policy of suppressing public discussion of nutrition issues with plant-exclusive diets will prevent us from getting the information to know if problems are widespread

In my original answers I address why this is not the case (private communication serves this purpose more naturally).

I have a lot of respect for Soto for doing the math and so clearly stating his position that “the damage to people who implement veganism badly is less important to me than the damage to animals caused by eating them”.

As I mentioned many times in my answer, that's not the (only) trade-off I'm making here. More concretely, I consider the effects of these interventions on community dynamics and epistemics (due to future actions the community might or might not take) to be possibly even worse than the suffering experienced by farmed animals murdered for members of our community to consume at the present day.

I can’t trust his math because he’s cut himself off from half the information necessary to do the calculations. How can he estimate the number of vegans harmed or lost due to nutritional issues if he doesn’t let people talk about them in public?

Again, I addressed this in my answers, and argued that data of the kind you will obtain are still not enough to derive the conclusions you were deriving.

More generally, my concerns were about framing and about how much posts like this one can affect sensible advocacy and the ethical backbone of this community. There is indeed a trade-off here between transparent communication and communal dynamics, but that happens in all communities, and ignoring it in ours is wishful thinking. It seems like none of my worries have been incorporated into the composition of this post, in which you have just doubled down on the framing. I think these worries could have been presented in a much healthier form without incurring all of those framing costs, and I think the post's publication is net-negative because of them.

Replies from: jimrandomh, daniel-glasscock
comment by jimrandomh · 2023-09-29T11:43:05.538Z · LW(p) · GW(p)

This comment appears transparently intended to increase the costs associated with having written this post, and to be a continuation of the same strategy of attempting to suppress true information.

Replies from: daniel-glasscock, martinsq
comment by Daniel (daniel-glasscock) · 2023-09-29T19:49:30.663Z · LW(p) · GW(p)

I think it's almost always fine for criticized authors to defend themselves in the comments, even if their defense isn't very good.

Replies from: jimrandomh
comment by jimrandomh · 2023-09-29T19:55:14.072Z · LW(p) · GW(p)

I think that's true, but also: When people ask the authors for things (edits to the post, time-consuming engagement), especially if the request is explicit (as in this thread), it's important for third parties to prevent authors from suffering unreasonable costs by pushing back on requests that shouldn't be fulfilled.

comment by Martín Soto (martinsq) · 2023-09-29T16:11:46.933Z · LW(p) · GW(p)

This post literally strongly misrepresents my position in three important ways¹. And these points were purposefully made central in my answers to the author, who kindly asked for my clarifications but then didn't include them in her summary and interpretation. This can be checked by contrasting her summary of my position with the actual text linked to, in which I clarified how my position wasn't the simplistic one presented here.

Are you telling me I shouldn't flag that my position has been importantly misrepresented? On LessWrong? And furthermore on a post that will be seen by way more people than my original text?

¹ I mean the latter three in my above comment, since the first (the hyperbolic presentation) is worrisome but not central.

Replies from: jimrandomh, pktechgirl
comment by jimrandomh · 2023-09-29T18:06:23.896Z · LW(p) · GW(p)

You say that the quoted bits are misrepresentations, but I checked your writing and they seem like accurate summaries. You should flag that your position has been misrepresented iff that is true. But you haven't been misrepresented, and I don't think that you think you've been misrepresented.

I think you are muddying the waters on purpose, and making spurious demands on Elizabeth's time, because you think clarity about what's going on will make people more likely to eat meat. I believe this because you've written things like:

One thing that might be happening here, is that we're speaking at different simulacra levels

Source comment [LW(p) · GW(p)]. I'm not sure how familiar you are with local usage of the simulacrum levels phrase/framework, but in my understanding of the term, all but one of the simulacrum levels are flavors of lying. You go on to say:

Now, I understand the benefits of the general adoption of the policy "state transparently the true facts you know, and that other people seem not to know". Unfortunately, my impression is this community is not yet in a position in which implementing this policy will be viable or generally beneficial for many topics.

The front-page moderation guidelines on LessWrong say "aim to explain, not persuade". This is already the norm. The norms of LessWrong can be debated, but not in a subthread on someone else's post on a different topic.

Replies from: martinsq
comment by Martín Soto (martinsq) · 2023-09-30T11:08:23.066Z · LW(p) · GW(p)

Yes, your quotes show that I believe (and have stated explicitly) that publishing posts like this one is net-negative. That was the topic of our whole conversation. That doesn't imply that I'm commenting to increase the costs of these publications. I tried to convince Elizabeth that this was net-negative, and she completely ignored those qualms, and that's epistemically respectable. I am commenting mainly to prevent my name from being associated with some positions that I literally do not hold.

I believe that her summaries are a strong misrepresentation of my views, and explained why in the above comment through object-level references comparing my text to her summaries. If you don't provide object-level reasons why the things I pointed out in my above comment are wrong, then I can do nothing with this information. (To be clear, I do think the screenshots are fairly central parts of my clarifications, but her summaries misrepresent and directly contradict other parts of them which I had also presented as central and important.)

I do observe that providing these arguments is a time cost for you, or fixing the misrepresentations is a time cost for Elizabeth, etc. So the argument "you are just increasing the costs" will always be available for you to make. And to that the only thing I can say is... I'm not trying to get the post taken down, I'm not talking about any other parts of the post, just the ones that summarize my position.

Replies from: jimrandomh
comment by jimrandomh · 2023-09-30T18:17:00.992Z · LW(p) · GW(p)

I believe that her summaries are a strong misrepresentation of my views, and explained why in the above comment through object-level references comparing my text to her summaries.

I'm looking at those quote-response pairs, and just not seeing the mismatch you claim there to be. Consider this one:

The charitable explanation here is that my post focuses on naive veganism, and Soto thinks that’s a made-up problem.

Of course, my position is not as hyperbolic as this.

This only asserts that there's a mismatch; it provides no actual evidence of one. Next up:

his desired policy of suppressing public discussion of nutrition issues with plant-exclusive diets will prevent us from getting the information to know if problems are widespread

In my original answers I address why this is not the case (private communication serves this purpose more naturally).

Pretty straightforwardly, if the pilot study results had only been sent through private communications, then there wouldn't be public discussion of them (i.e., public discussion would be suppressed). I myself wouldn't know about the results. The probability of a larger follow-up study would be greatly reduced. I personally would have less information about how widespread problems are.

Replies from: martinsq
comment by Martín Soto (martinsq) · 2023-09-30T20:52:23.340Z · LW(p) · GW(p)

This only asserts that there's a mismatch; it provides no actual evidence of one

I didn't provide quotes from my text when the mismatch was obvious enough from any read/skim of the text. In this case, for example, even the screenshots of my text included in the post demonstrate that I do think naive transitions to veganism exist. So of course this is more a point about framing, and indeed notice that I already mentioned in another comment that this one example might not constitute a strong misrepresentation, as the other two do (after all, it's just a hyperbole), although it still gives me worries about biased tone-setting in a vaguer way.

Pretty straightforwardly, if the pilot study results had only been sent through private communications, then there wouldn't be public discussion of them (i.e., public discussion would be suppressed).

In the text I clearly address why

  1. My proposal is not suppressing public discussion of plant-based nutrition, but constructing some more holistic approach whose shape isn't solely focused on plant-based diets, or whose tone and framing aren't like this one (more in my text).
  2. I don't think it's true that private communications "prevent us from getting the information" in important ways (even if taking into account the social dynamics dimension of things will always, of course, be a further hindrance). And also, I don't think public communications give us some of the most important information.

I hope it is now clear why I think Elizabeth's quoted sentence is a misrepresentation, since I neither push for suppressing public discussion of plant-based nutrition (only a certain non-holistic approach to it; more concretely, Elizabeth's approach), nor ignored the possible worry that this prevents us from obtaining useful information (on the contrary, I addressed it). Of course we can argue on the object level about whether my positions are true (that's what I was trying to do with Elizabeth, although as stated she didn't respond to these two further points), but what's clear is that they are the ones represented in my text.

More generally, I think this is a kind of "community-health combating of symptoms" with many externalities for the epistemic and moral capabilities of our community (and ignoring them by ignoring the social dynamics at play in our community and society seems like wishful thinking; we are not immune to propaganda), and I think different actions will lead to a healthier and more robust community without the same externalities (all of this detailed in my text).

In any event, I will stop engaging now. I just wanted my name not to be associated with those positions in a post that will be read by so many people, but it's not looking like Elizabeth will fix that, and having my intentions challenged constantly, so that I need to explain each and every mental move, is too draining.

Replies from: GWS
comment by Stephen Bennett (GWS) · 2023-09-30T22:25:31.790Z · LW(p) · GW(p)

I didn't provide quotes from my text when the mismatch was obvious enough from any read/skim of the text.

It was not obvious to me, although that's largely because, after reading what you've written, I had difficulty understanding what precisely your position was at all. It also definitely wasn't obvious to jimrandomh, who wrote that Elizabeth's summary of your position is accurate. It might be obvious to you, but as written this is a factual statement about the world that is demonstrably false.

My proposal is not suppressing public discussion of plant-based nutrition, but constructing some more holistic approach whose shape isn't solely focused on plant-based diets, or whose tone and framing aren't like this one (more in my text).

I'm confused. You say that you don't want to suppress public discussion of plant-based nutrition, but also that you do want to suppress Elizabeth's work. I don't know how we could get something that matches Elizabeth's level of rigor, accomplishes your goal of a holistic approach, and doesn't require at least 3 times the work from the author to investigate all other comparable diets to ensure that veganism isn't singled out. Simplicity is a virtue in this community [LW · GW]!

I don't think it's true that private communications "prevent us from getting the information" in important ways (even if taking into account the social dynamics dimension of things will always, of course, be a further hindrance). And also, I don't think public communications give us some of the most important information.

This sounds, to me, like you are arguing against public discussions. Then in the next sentence you say you're not suppressing public discussions. Those are in fact very slightly different things, since arguing that something isn't the best mode of communication is distinct from promoting suppression of that thing, but this seems like a really small deal. You might ask Elizabeth something like "hey, could you replace 'promotes the suppression of x' with 'argues strongly that x shouldn't happen'? It would match my beliefs more precisely." This seems nitpicky to me, but if it's important to you it seems like the sort of thing Elizabeth might go for. It also wouldn't involve asking her to either delete a bunch of her work or make another guess at what you actually mean.

In any event, I will stop engaging now.

Completely reasonable, don't feel compelled to respond.

comment by Elizabeth (pktechgirl) · 2023-09-29T19:42:48.990Z · LW(p) · GW(p)

these points were purposefully made central in my answers to the author, who kindly asked for my clarifications but then didn't include them in her summary and interpretation

 

The quotes I screenshotted are from the clarifications, not your initial statement. 

Replies from: martinsq
comment by Martín Soto (martinsq) · 2023-09-30T08:39:27.243Z · LW(p) · GW(p)

Yes, I was referring to your written summaries of my position, which are mostly consistent with the shown screenshots, but not with other parts of my answers. That's why I kindly demand that these pieces of text attached to my name be changed to stop misrepresenting my position (I can provide written alternate versions if that helps), or at least removed while this is pending.

Replies from: Yoav Ravid
comment by Yoav Ravid · 2023-09-30T09:42:04.917Z · LW(p) · GW(p)

As long as Elizabeth doesn't claim that you literally wrote yourself what she wrote in the summaries (she doesn't), and it's clear that it's her interpretation (it is), she's entirely within her rights to include them, and under no obligation to change or remove them to suit you. If you think it misrepresents you, you can leave a comment (as you did). "Demanding" that it be removed or changed is going too far (likely in order to impose extra costs on this type of criticism, imo).

Replies from: martinsq
comment by Martín Soto (martinsq) · 2023-09-30T11:24:11.789Z · LW(p) · GW(p)

It's true that the boundary between interpretation and strong misrepresentation is fuzzy. In my parent comment I'm providing object-level arguments for why this is a case of strong misrepresentation. This is aggravated by the fact that this post will be seen by a hundred times more people than my actual text, by the fact that Elizabeth herself reached out for these clarifications (which I spent time composing), and by the fact that I offered to quickly review the write-up more than a week ago.

I'm not trying to impose any extra costs; Elizabeth is free to post her opinions even if I believe doing so is net-negative. I'm literally just commenting so that my name is not associated with opinions which are literally not held by me (this being made completely explicit in my linked text, which is of course too long for almost anyone to actually check first-hand).

Replies from: Yoav Ravid
comment by Yoav Ravid · 2023-09-30T12:42:40.450Z · LW(p) · GW(p)

If you didn't include the request/demand for "a removal of those parts of the post until this has been cleared up", I (and I think others as well) would have been much more receptive and less likely to see it as an attempt to impose extra costs.

Replies from: martinsq
comment by Martín Soto (martinsq) · 2023-09-30T13:13:39.298Z · LW(p) · GW(p)

As mentioned, my main objective in writing these comments is to avoid being associated with a position I don't hold. Many people will see that part of the post but not go into the comments, and even more will go into the comments but won't see my short explanation of how my position has been misrepresented, since it has been heavily down-voted (although I have yet to hear any object-level argument for why the misrepresentation actually hasn't happened or why my conduct is epistemically undesirable). So why wouldn't I demand that the most public part of all this not strongly misrepresent my position?

I get that this requires some extra effort from Elizabeth (as does any change or discussion), but I'm trying to minimize that by offering to write the alternate summary myself, or to just remove that part of the text (which should take minimal effort). She's literally said things I didn't claim (especially the third example), and a demand to fix that doesn't seem so far-fetched to me that the first guess should be that I'm just trying to muddy the waters.

Maybe we're still fresh from the Nonlinear exchange and are especially wary of the tactical imposition of costs, but of course I don't even need to point out how radically different this situation is.

Replies from: Yoav Ravid
comment by Yoav Ravid · 2023-09-30T15:39:33.310Z · LW(p) · GW(p)

I think it would be reasonable to ask that she link to your comment in the post, so people know you disagree with her interpretations. But if she thinks her interpretations are right, she should keep them.

Replies from: martinsq
comment by Martín Soto (martinsq) · 2023-09-30T16:17:26.718Z · LW(p) · GW(p)

As I've object-level argued above, I believe these summaries fall into the category of misrepresentation, not just interpretation. And I don't believe an author should maintain such misrepresentations in their text in light of evidence about them.

In any event, certainly a link to my comment is better than nothing. At this point I'm just looking for any gesture in the direction of preventing my name from being associated with positions I do not hold.

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2023-10-01T18:10:16.393Z · LW(p) · GW(p)

On general principle, I think embedding links to people who think they've been mischaracterized in a post is fair and good. And if you had crisp pointers to where I mischaracterized you, I would absolutely link to it.

But what you have is a long comment thread asserting that I mischaracterized you and that the mischaracterizations are obvious, multiple [LW(p) · GW(p)] people [LW(p) · GW(p)] telling you they're not, plus (as of this writing) 14 disagreement karma on 15 votes. Linking people to this isn't informative, it's a tax, and I view it as a continuation of the pattern of making vegan nutrition costly to talk about.

But of course it's a terrible norm to trust authors to decide if someone's challenge to their work is good enough. My ideal solution is something like a bet: I include your disclaimer, we run a poll, and if people say reading your clarifications was uninformative, it costs you something meaningful. And if the votes say I was inaccurate, I'll at a minimum remove your section and put up a big disclaimer, and rethink the post as a whole, since it rests on my ability to fairly summarize people.

Right now I don't know of a way to do this that isn't too vulnerable to carpetbagging. I also don't know what stakes would be meaningful from you. But conceptually I think this is the right thing to do.

Replies from: Slapstick, martinsq
comment by Slapstick · 2023-10-02T16:50:19.187Z · LW(p) · GW(p)

FWIW I just spent a lot of time reading all of the comments (in the original thread and this one) and my position is that Martín Soto's criticism of his representation is valid, and that he has been obviously mischaracterized.

Replies from: Raemon
comment by Raemon · 2023-10-02T16:55:10.425Z · LW(p) · GW(p)

Can you say how?

(I don't think you're obligated to or anything, seems good for you to just note your experience, but hearing more details would be helpful)

Replies from: Slapstick
comment by Slapstick · 2023-10-02T19:12:15.166Z · LW(p) · GW(p)

I think Martín Soto explained it pretty well in his comments here, but I can try to explain it myself. (I can't guarantee I will do it well; I originally commented because the stated opinions of commenters voicing an opposing view seemed to be used as evidence.)

The post directly represents Soto as thinking that naive veganism is a made-up problem, despite him not saying that, or giving any indication that he thought the problem was fabricated (he literally states that he doesn't doubt the anecdotes of the commenter speaking about the college group). He just shared that in his experience, knowledge about vegan supplementing needs was extremely widespread and the norm.

The post also represents Soto as desiring a "policy of suppressing public discussion of nutrition issues with plant-exclusive diets"

That's not what he said, and it's not an accurate interpretation.

He initially commented thanking her for the post in question, providing some criticism and questions. Elizabeth later asked about whether he thinks vegan nutrition issues should be discussed, and for his thoughts on the right way to discuss vegan nutrition issues.

He seems to agree they should be discussed, but he offers a lot of thoughts about how the framing of those discussions is important, along with some other considerations.

He says that in his opinion the consequences of pushing the line she was pushing in the way she was pushing it were probably net negative, but that's very different from advocating a policy of suppressing public discussion about a topic.

Saying something along the lines of: "this speech about this topic framed in this way probably does more harm than good in my opinion"

Is very different from saying something like: "there should be a policy of suppressing speech about this topic"

Advocating generalized norms around suppressing speech about a broad topic is not the same as stating an opinion that certain speech falling under that topic and framed a certain way might do more harm than good.

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2023-10-09T04:10:16.129Z · LW(p) · GW(p)

I’d like to ask your opinion on something.

Tristan wrote a beautiful comment [LW(p) · GW(p)] about prioritizing creating a culture of reverence for life/against suffering, and how that wasn’t very amenable to compromise.

My guess is that if you took someone with Tristan's beliefs and put them in more challenging discussion circumstances (less writing skill, less clarity, more emotional activation, trying to operate in an incompatible frame rather than answering a direct question about their own values), you might get something that looked a lot like the way you describe Martin's comments. And what looked like blatant contradictions to me are a result of carving reality at different joints.

I don’t want to ask you to speak for Martin in particular, but does that model of friction in communication on this issue in general feel plausible to you?

Replies from: Slapstick, Slapstick
comment by Slapstick · 2023-11-01T22:10:50.723Z · LW(p) · GW(p)

When trying to model your disagreement with Martin and his position, the best analogy I can think of is that of tobacco companies employing 'fear, uncertainty, and doubt' tactics in order to prevent people from seriously considering quitting smoking.

Smokers experience cognitive dissonance when they have strong desires to smoke, coupled with knowledge that smoking is likely not in their best interest. They can suppress this cognitive dissonance by changing their behaviour and quitting smoking, or by finding something that introduces sufficient doubt about whether that behaviour is in their self-interest, the latter being much easier. They only need a marginal amount of uncertainty and doubt in order to suppress the dissonance, because their reasoning is heavily motivated, and that's all tobacco companies needed to offer.

I think Martin is essentially trying to make a case that your post(s) about veganism are functionally providing sufficient marginal 'uncertainty and doubt' for non-vegans to suppress any inclination that they ought to reconsider their behaviour. Even if that isn't at all the intention of the post(s), or a reasonable takeaway (for meat eaters).

I think this explains much or most of the confusing friction which came up around your posts involving veganism. Vegans have certain intuitions regarding the kinds of things that non-vegans will use to maintain the sufficient 'uncertainty and doubt' required to suppress the mental toll of their cognitive dissonance. So even though it was hard to find explicit disagreement, it also felt clear to a lot of people that the framing, rhetorical approach, and data selection entailed in the post(s) would mostly have the effect of readers giving themselves license to forgo reckoning with the case for veganism.

So I think it's relevant whether one affords animals a higher magnitude of moral consideration, or has internalized an attitude which places animals in the in-group. However, I don't think that accounts for everything here.

Some public endeavors in truthseeking can satisfy the motivated anti-truthseeking of the people encountering them. I interpret the top comment of this post as evidence of that.

I'm not sure if I conveyed everything I meant to here, but I think I should make sure the main point makes sense before expanding.

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2023-11-02T22:16:04.044Z · LW(p) · GW(p)

This seems like an inside view of the feelings that lead to using arguments as soldiers [? · GW]. The motivation is sympathetic and the reasoning is solid enough to weather low-effort attacks, but at the end of the day it is treating arguments as means to ends rather than attempts to discover ground level truth. And Effective Altruism and LessWrong have defined themselves as places where we operate on the object level and evaluate each argument on its own merit, not as a pawn in a war.  

The systems can tolerate a certain amount of failure (which is good, because it's going to happen). But the more people treat arguments as soldiers, the weaker the norm and aspiration of collaborative truthseeking, even when it's inconvenient, become. Do it too much, and the norm will go away entirely.

You might argue that it's good to destroy high-decoupling norms, because they're innately bad or because animal welfare is so important it is worth ruining any institution that gets in its way. But AFAICT, the truthseeking norms of EA and LW have been extremely hospitable environments for animal welfare advocates[1], specifically because of the high decoupling. High decoupling is what let people consider the argument that factory farming was a moral atrocity even though it was very inconvenient for them.

So when vegan advocates operate using arguments as soldiers they are not only destroying truthseeking infrastructure that was valuable to many causes. They are destroying infrastructure that has already done a great deal of good for their cause in particular. They are using arguments as soldiers to destroy their own buildings. 

  1. ^

    relative to baseline. Evidence off the top of my head: 

    * EAs go vegan, vegetarian, and reducetarian at much higher than baseline rates. This is less true of rationalists, but I believe it is still above baseline. I know many people who loathe most vegan advocacy and nonetheless reduce meat, or in rare cases go all the way to vegan, because they could decouple the suffering arguments from the people making them.
    * EA money has transformed farmed animal welfare and AFAIK is the ~only source of funding for things like insect suffering (couldn't immediately find numbers, source is "a friend in EA animal welfare told me so")
    * AFAIK, veganism's biggest antagonist on LW and EAF over the last year has been me. And I've expressed that antagonism by... let me check my notes... getting dozens of vegans nutrition tested and on (AFAIK) vegan supplements. That project would have gone bigger if I'd been able to find a vegan collaborator, but I couldn't find one (and I did actively look, although I exhausted my options pretty quickly). My main posts in this sequence go out of their way to express deep respect for vegans' moral convictions and recognize animal suffering as morally relevant. 

    Maybe there's a poster who's actively hostile to animal welfare that I didn't notice, but if I didn't hear about them they can't possibly have done that much. 

comment by Slapstick · 2023-10-09T06:20:04.650Z · LW(p) · GW(p)

That's a good question. I have many thoughts about this and I'm working on a more thorough response.

My very simple answer is that I do think that's generally plausible (or at least that you're getting at something significant).

comment by Martín Soto (martinsq) · 2023-10-08T13:04:00.836Z · LW(p) · GW(p)

(See also this new comment [LW(p) · GW(p)].)

First off, thanks for including that edit (which is certainly better than nothing), although that still doesn't change the fact that (given the public status of the post) your summaries will be the only thing almost everyone sees (however much you link to these comments or my original text), and that in this thread I have simply been trying to get my positions not misrepresented (so I find it completely false that I'm purposefully imposing an unnecessary tax, even if it's true that engaging with this misrepresentation debate takes some effort, like any epistemic endeavor).

Here are the two main reasons why I wouldn't find your proposal above fair:

  1. I expect most people who will ever see this post / read your summaries of my position to have already seen it (although someone correct me if I'm wrong about viewership dynamics on LessWrong). As a consequence, I'd gain much less from such a disclaimer / rethinking of the post being incorporated now (although of course it would be positive for me / something I could point people towards).
    Of course, this is not solely a consequence of your actions, but also of my delayed response times (as I had already anticipated in our clarifications thread).
    1. A second-order effect is that most people who have seen the post up until now will have been "skimmers" (because it was just in frontpage, just released, etc.), while probably more of the people who read the post in the future will be more thorough readers (because they "went digging for it", etc.). As I've tried to make explicit in the past, my worry is more about the social dynamics consequences of having such a post (with such a framing) receive a lot of public attention than about any scientific inquiry into nutrition, or any emphasis on public health. Thus, I perceive most of the disvalue coming from the skimmers' reactions to such a public signal. More on this below.
  2. My worry is exactly that such a post (with such a framing) will not be correctly processed by too many readers (and more concretely, the "skimmers", or the median upvoter/downvoter), in the sense that they will take away (mostly emotionally / gutturally) the wrong update (especially action-wise) from the actual information in the post (and previous posts).
    Yes: I am claiming that I cannot assume perfect epistemics from LessWrong readers. More concretely, I am claiming that there is a predictable bias in one of two emotional / ethical directions, which exists mainly due to the broader ethical / cultural context we experience (from which LessWrong is not insulated).
    Even if we want LessWrong to become a transparent hub of information sharing (in which indeed epistemic virtue is correctly assumed of the other), I claim that the best way to get there is not through completely implementing this transparent information sharing immediately in the hopes that individuals / groups will respond correctly. This would amount to ignoring a part of reality that steers our behavior too much to be neglected: social dynamics and culturally inherited biases. I claim the best way to get there is by implementing this transparency wherever it's clearly granted, but necessarily being strategic in situations when some unwanted dynamics and biases are at play. The alternative, being completely transparent ("hands off the simulacrum levels"), amounts to leaving a lot of instrumental free energy on the table for these already existing dynamics and biases to hoard (as they have always done). It amounts to having a dualistic (as opposed to embedded) picture of reality, in which epistemics cannot be affected by the contingent or instrumental. And furthermore, I claim this topic (public health related to animal ethics) is unfortunately one of the tricky situations in which such strategicness (as opposed to naive transparency) is the best approach (even if it requires some more efforts on our part).
    Of course, you can disagree with these claims, but I hope it's clear why I don't think a public jury is to be trusted on this matter.
    1. You might respond "huh, but we're not talking about deciding things about animal ethics here. We're talking about deciding rationally whether some comments were or weren't useful. We certainly should be able to at least trust the crowd on that?" I don't think that's the case for this topic, given how strong the "vegans bad" / "vegans annoying" immune reaction is for most people generally (that is, the background bias present in our culture / the internet).
    2. As an example, in this thread there are some people (like you and Jim) who have engaged with my responses / position fairly deeply, and for now disagreed. I don't expect the bulk of the upvotes / downvotes in this thread (or if we were to carry out such a public vote) to come from this camp, but more from "skimmers" and first reactions (that wouldn't enter the nuance of my position, which is, granted, slightly complex). Indeed (and of course based on my anecdotal experience on the internet and different circles, including EA circles), I expect way too many anonymous readers/voters, upon seeing something like human health and animal ethics weighed against each other in this way, to just jump on the bandwagon of punishing the veganism meme for the hell of it.
      And let me also note that, while further engagement and explicit reasoning should help with recognizing those nuances (although you have reached a different conclusion), I don't expect this to eliminate some strong emotional reactions to this topic, which drive our rational points ("we are not immune to propaganda"). And again, given the cultural background, I expect these to go more in one direction than the other.

So, what shall we do? The only thing that seems viable and close to your proposal would be having the voters be "a selected crowd", but I don't know how to select it (if we had half and half this could look too much like a culture war, although probably that'd be even better than the random crowd due to explicitly engaging deeply with the text). Although maybe we could agree on 2-3 people. To be honest, that's sounding like a lot of work, and as I mentioned I don't think there's that much more in this debate for me. But I truly think I have been strongly misrepresented, so if we did find 2-3 people who seemed impartial and epistemically virtuous I'd deem it positive to have them look at my newest, overly explicit explanation [LW(p) · GW(p)] and express opinions.

So, since your main worry was that I hadn't made my explanation of misrepresentation explicit enough (and indeed, I agree that I hadn't yet written it out in completely explicit detail, simply because I knew that would require a lot of time), I have in this new comment [LW(p) · GW(p)] provided the most explicit version I could muster. I have made it explicit (and as a consequence long) enough that I don't think I have many more thoughts to add, and it is a faithful representation of my opinions about how I've been misrepresented.
I think having that out there, for you (and Jim, etc.) to be able to completely read my thoughts and reconsider whether I was misrepresented, and for any passer-by who wants to stop by to see, is the best I can do for now. In fact, I would recommend (granted you don't change your mind more strongly due to reading that) that your edit link to this new, completely explicit version, instead of my original comment written in 10 minutes.

I will also note (since you seemed to care about the public opinions of people about the misrepresentation issue) that 3 people (not counting Slapstick here [LW(p) · GW(p)]) (only one vegan) have privately reached out to me to say they agree that I have been strongly misrepresented. Maybe there's a dynamic here in which some people agree more with my points but stay more silent due to being in the periphery of the community (maybe because of perceived wrong-epistemics in exchanges like this one, or having different standards for information-sharing / what constitutes misrepresentation, etc.).

comment by Daniel (daniel-glasscock) · 2023-09-29T19:42:11.926Z · LW(p) · GW(p)

In my original answers I address why this is not the case (private communication serves this purpose more naturally).

This stood out to me as strange. Are you referring to this comment? [LW(p) · GW(p)]

And regardless of these resources you should of course visit a nutritionist (even if very sporadically, or even just once when you start being vegan) so that they can confirm the important bullet points, whether what you're doing broadly works, and when you should worry about anything. (And again, anecdotally this has been strongly stressed and acknowledged as necessary by all vegans I've met, who are not few).

The nutritionist might recommend yearly (or less frequent) blood testing, which does feel like a good failsafe. I've been taking them for ~6 years and all of them have turned out perfect (I only supplement B12, as the nutritionist recommended).

I guess it's not so much that there's some resource that is the be-all end-all on vegan nutrition, but more that all of the vegans I've met have forwarded really positive health-conscious attitudes, and stressed the importance of these points.

It sounds like you're saying that the nutritional requirements of veganism are so complex that they require individualized professional assistance, that there is no one-page "do this and you will get all the nutrients you need" document that will work for the vast majority of vegans. You seem to dismiss this as if it's a minor concern, but I don't think it is. 

> I have a lot of respect for Soto for doing the math and so clearly stating his position that “the damage to people who implement veganism badly is less important to me than the damage to animals caused by eating them”

 

As I mentioned many times in my answer, that's not the (only) trade-off I'm making here. More concretely, I consider the effects of these interventions on community dynamics and epistemics possibly even worse (due to future actions the community might or might not take) than the suffering experienced by farmed animals murdered for members of our community to consume at present.

After reading your post, I feel like you are making a distinction without a difference here. You mention community dynamics, but they are all community dynamics about the ethical implications of veganism in the community, not the epistemic implications. It seems perfectly fair for Elizabeth to summarize your position the way she does.

Replies from: martinsq
comment by Martín Soto (martinsq) · 2023-09-30T11:17:52.451Z · LW(p) · GW(p)

This stood out to me as strange. Are you referring to this comment [LW(p) · GW(p)]?

No, I was referring to this one [LW(p) · GW(p)], and the ones in that thread, all part of an exchange in which Elizabeth reached out to me for clarification.

In the one you quoted I was still not going into that much detail.

I'll answer your comment nonetheless.

It sounds like you're saying that the nutritional requirements of veganism are so complex that they require individualized professional assistance, that there is no one-page "do this and you will get all the nutrients you need" document that will work for the vast majority of vegans.

No, what I was saying wasn't as extreme. I was just saying that it's good general practice to visit a nutritionist at least once, learn some of the nutritional basics, and perform blood tests periodically (every 1 or 2 years). That's not contradictory with the fact that most vegans won't need to pour a noticeable number of hours into all this (or rather, they will have to do that for the first 1-2 months, but mostly not afterwards). Also, there is no one-page be-all end-all for any kind of nutrition, not only veganism. But there certainly exist a lot of fast and easy basic resources.

After reading your post, I feel like you are making a distinction without a difference here. You mention community dynamics, but they are all community dynamics about the ethical implications of veganism in the community, not the epistemic implications. It seems perfectly fair for Elizabeth to summarize your position the way she does.

Yes, of course, we were talking about veganism. But in the actual comment I was referring to, I did talk about epistemic implications, not only implications for animal ethics (as big as those already are). What I meant is: "if there is something that worries me even more than the animal ethics consequences of this (which are big), it is breeding a community that shies away from basic ethical responsibility at the earliest opportunity and rationalizes the choice (because of the consequences this can have for navigating the precipice)".

comment by Pretentious Penguin (dylan-mahoney) · 2023-09-29T18:17:07.822Z · LW(p) · GW(p)

The "Ignoring known falsehoods until they're a PR problem" section seems a bit out of place. The other examples you point out seem to be of vegans not wanting to discuss possible nutritional issues with veganism because they don't want to make statements that are semantically associated with normative claims endorsing a world with nonzero animal agriculture.

But with ACE, it seems like the answer to the question of whether pamphleting is an effective way to get people to reduce their animal product consumption is orthogonal to whether or not people should go vegan or become vegan activists, right? After all, if you want there to be more vegans, and that's what's really emotionally motivating you, you should be very open to evidence indicating pamphleting is ineffective so that activists can spend their time doing something better. I feel like if ACE was being epistemically unreasonable as to which forms of vegan activism are most effective, that would be an example of general "I don't want to admit that I'm wrong about something" behavior rather than "I don't want to reveal true information that will harm the growth of a movement I'm attached to" behavior.

comment by Jonas V (Jonas Vollmer) · 2024-02-18T19:36:15.402Z · LW(p) · GW(p)

Just stumbled across this post, and copying a comment I once wrote [EA(p) · GW(p)]:

  • Intuitively and anecdotally (and based on some likely-crappy papers), it seems harder to see animals as sentient beings or to think correctly about the badness of factory farming while eating meat; this form of motivated reasoning plausibly distorts most people's epistemics, which matters because this is a pretty important part of the world; recognizing the badness of factory farming also has minor implications for s-risks and other AI stuff

With some further clarifications:

  • Nobody actively wants factory farming to happen, but it's the cheapest way to get something we want (i.e. meat), and we've built a system where it's really hard for altruists to stop it from happening. If a pattern like this extended into the long-term future, we might want to do something about it.
  • In the context of AI, suffering subroutines might be an example of that.
  • Regarding futures without strong AGI: Factory farming is arguably the most important example of a present-day ongoing atrocity. If you fully internalize just how bad this is, that there's something like a genocide (in terms of moral badness, not evilness) going on right here, right now, under our eyes, in wealthy Western democracies that are often understood to be the most morally advanced places on earth, and that it's really hard for us to stop it, that might affect your general outlook on the long-term future. I still think the long-term future will be great in expectation, but it also makes me think that utopian visions that don't consider these downside risks seem pretty naïve.

 

I used to eat a lot of meat, and once I stopped doing that, I started seeing animals with different eyes (treating them as morally relevant, and internalizing that a lot more). The reason why I don't eat meat now is not that I think it would cause value drift, but that it would make me deeply sad and upset – eating meat would feel similar to owning slaves that I treat poorly, or watching a gladiator fight for my own amusement. It just feels deeply morally wrong and isn't enjoyable anymore. The fact that the consequences are only mildly negative in the grand scheme of things doesn't change that. So [I actually don't think that] my argument supports me remaining a vegan now, but I think it's a strong argument for me to go vegan in the first place at some point.

My guess is that a lot of people don't actually see animals as sentient beings whose emotions and feelings matter a great deal, but more like cute things to have fun with. And anecdotally, how someone perceives animals seems to be determined by whether they eat them, not the other way around. (Insert plausible explanation – cognitive dissonance, rationalizations, etc.) I think squashing dust mites, drinking milk, eating eggs etc. seems to have a much less strong effect in comparison to eating meat, presumably because they're less visceral, more indirect/accidental ways of hurting animals.

 

Yeah, as I tried to explain above (perhaps it was too implicit), I think it probably matters much more whether you went vegan at some point in your life than whether you're vegan right now.

I don't feel confident in this; I wanted to mainly offer it as a hypothesis that could be tested further. I also mentioned the existence of crappy papers that support my perspective (you can probably find them in 5 minutes on Google Scholar). If people thought this was important, they could investigate this more.

comment by NicholasKees (nick_kees) · 2023-09-29T10:37:14.109Z · LW(p) · GW(p)

This post feels to me like it doesn't take seriously the default problems with living in our particular epistemic environment. The meat and dairy industries have historically had, and continue to have, a massive influence on our culture through advertisements and lobbying governments. We live in a culture where we now eat more meat than ever. What would this conversation be like if it were happening in a society where eating meat was as rare as being vegan now?

It feels like this is preaching to the choir, and picking on a very small group of people who are not as well resourced (financially or otherwise). The idea that people should be vegan by default is an extremely minority view, even in EA, and so anyone holding this position really has everything stacked against them. 

Replies from: rotatingpaguro, FiftyTwo, Viliam
comment by rotatingpaguro · 2023-09-29T10:49:23.741Z · LW(p) · GW(p)

If a small group of "weirdos" is ideological and non-truthseeking, I won't listen to them. Popularity and social custom are the single most important heuristic, because almost no one has the cognitive capacity to go it alone. To overcome this burden, I think being truthseeking helps. That said, your argument could work if most people are not like me and respond to more emotional motivations. In that case, I'd like that to not be what EA is for, but something else. I'm not an EA, but I quite enjoy my EA friends, whereas I don't enjoy other altruism advocates.

comment by FiftyTwo · 2023-10-20T10:28:04.957Z · LW(p) · GW(p)

I feel like you're conflating two different levels, the discourse in wider global society and within a specific community. 

I doubt you'd find anyone here who would disagree that actions by big companies that obscure the truth are bad. But they're not the ones arguing on these forums or reading this post. Vegans have a significant presence in EA spaces so should be contributing to those productively and promoting good epistemic norms. What the lobbying team of Big Meat Co. does has no impact on that. 

Also, in general I'm leery of any argument of the form "the other side does things as bad or worse, so it's okay for us to do so", given history. 

comment by Viliam · 2023-09-29T21:47:58.042Z · LW(p) · GW(p)

We live in a culture where we now eat more meat than ever.

Yeah, I know people who eat a steak every day, and there is no way an average person could have afforded that a hundred years ago.

Is there any "eat meat once a week" movement? Possibly worth supporting.

Replies from: Raemon
comment by Raemon · 2023-09-29T22:00:44.343Z · LW(p) · GW(p)

The search term here is ‘reducetarian’

comment by tailcalled · 2023-10-08T13:23:05.578Z · LW(p) · GW(p)

Edit: This was a reply to a now-deleted comment by Richard_Kennaway.

Veganism is the unusual thing, less-than-veganism (including both those who do eat meat and various forms of vegetarianism, pescetarianism, etc.) is the usual thing. Vegans are vegans on ideological grounds; as one gets further and further from veganism, the ideology becomes less and less. This is the distinction between marked and unmarked. The ordinary person who does not give this issue any attention is not practising an ideology when they have a lamb chop any more than when they have a banana.

There's no rule that you shouldn't have categories to describe common things. In fact, it is common to have them. When referring to people of average height, we might use the term "169 cm" rather than taking care to erase that height from all documents to keep them unmarked.

I think a major factor in why people eat meat is that it is delicious and convenient. The deliciousness and convenience are non-ideological motivations. However, this doesn't mean carnist ideology isn't a factor. To some degree, it's a direct factor; those who believe there's nothing wrong with eating meat, that it's their own personal choice, and that it's actually good for the animals are presumably going to feel more positive about eating animals and therefore going to do it more. But it also affects things on an institutional level. Vegans might want to stop institutions from paying people to breed, cage, mutilate and slaughter animals, and if they could broadly succeed with this, it would presumably make animal consumption plummet; but attempts to do this are generally met not with support for opposing animal farming, but with opposition. Presumably this opposition is reflective of an ideology that people should be allowed to eat meat if they want to. (This is an ideology that is plausibly motivated by self-interest considering the deliciousness/convenience, but it is an ideology nonetheless.)

I don't know what it means for an epistemic environment to declare war on anything. Your boo words [LW · GW] are chosen to make both of these sound like bad things. Is it your view that they are both bad, or are you indeed in favour of the former, of declaring war on humans?

I'm using Ninety-Three's metaphorical words. I agree that they are not literally true, and perhaps a better way of phrasing it would be something like "So I guess the question is whether you prefer being in an epistemic environment that is caging, mutilating, slaughtering animals and eating their flesh for pleasure and sustenance and covers up the gruesomeness of the process, or being in an epistemic environment that uses any means they feel may be effective to expose and undermine this caging/mutilation/slaughtering/eating process, even if that includes manipulation".

If you want to know more about why Ninety-Three used the words "declared war on me" in order to refer to manipulation to stop him from hiring people to cage, mutilate and slaughter animals so he can eat their flesh, feel free to ask him.

Torture is not the purpose of farming animals. Meat is the purpose, suffering a side-effect. No farmer is going to go out of their way to torture their livestock if they think it isn't suffering enough.

This is true.

I'm ignoring a very fringe group, maybe smaller than vegans, of people who have decided to eat almost nothing but meat. But even they are doing so for health reasons, not ideology.

I'm not sure what you mean by "not ideology". My understanding is that they have an ideology that falsely claims that it is healthy to eat nothing but meat. In this case, health reasons and ideology are tightly linked.

Similarly, most people have a liberal ideology about eating meat which says that it's a personal choice that each person may make however they want. While this is liberal and thus can feel neutral in the sense of permitting many different lifestyles, it is presumably not neutral in the sense of having zero causal effect on a society's consumption of animals.

Replies from: Slapstick, Richard_Kennaway
comment by Slapstick · 2023-10-18T19:11:59.499Z · LW(p) · GW(p)

Torture is not the purpose of farming animals. Meat is the purpose, suffering a side-effect. No farmer is going to go out of their way to torture their livestock if they think it isn't suffering enough.

This is true.

While it's true that torture isn't the purpose, many workers in animal agriculture and farmers do go out of their way to torture animals, beyond what is entailed in profit-seeking. Whether as a form of sadism, an outlet for anger, or a twisted and ineffective attempt at discipline, it certainly happens.

This may be somewhat tangential, but I think it's worth noting.

Replies from: tailcalled
comment by tailcalled · 2023-10-18T19:17:51.905Z · LW(p) · GW(p)

Can you expand on what you are referring to?

Replies from: Slapstick
comment by Slapstick · 2023-10-18T21:10:03.759Z · LW(p) · GW(p)

Personally I've listened to a farmer I know gleefully recounting stories of repeatedly hitting his cows with baseball bats, in the face and body. It's anecdotal, but growing up in rural environments I've heard a lot of things like that. He also talked/joked about performing DIY surgery on their genitals without anesthesia, which technically has a profit motive, but I think it's indicative of an attitude of indifference to causing them extreme suffering.

There's also all sorts of reports and undercover footage of workers beating and mutilating animals, often without any purpose behind it.

It's a bit difficult to disentangle torture for the sake of torture from torture which is vaguely aimed at profit-seeking (though torture for the sake of harming the animal does happen).

If someone wants an animal to move somewhere, or if they want to perform an excruciating procedure on the animal without it struggling too much, they may beat the animal until it does what they want it to. They may be using this as an opportunity to vent their aggression. You could say profit seeking/meat is the ultimate purpose of that. However I think there's a lot of context in between the dichotomy of 'torture for profit' vs 'torture for torture'.

These things often happen in a context where the industries have final say over whether a particular practice constitutes unlawful treatment. Where it's illegal to record and release video footage of what goes on there. Where local law enforcement has no interest in enforcing the laws that do exist.

I think abuse for the sake of abuse is common in any environment with power imbalances, lack of oversight, and resource constraints. Nursing homes, schools, prisons, policing, hospitals, etc. All entail countless examples of people with power over others abusing others, with the main purpose being some sort of emotional catharsis.

Those are industries where humans are the victims, members of the ingroup, who have laws and norms meant to protect their interests. I think beyond all of the available evidence of 'torture for the sake of torture' on farms, it also makes sense to assume it does/will happen.

Replies from: tailcalled
comment by tailcalled · 2023-10-19T11:10:14.453Z · LW(p) · GW(p)

Hmm... 🤔

I guess there's a tricky thing here because one needs to distinguish:

  • Indifference about their suffering
  • Excitement about effective means of manipulating them

from

  • Sadistically enjoying their suffering

Like indifference about the animal's suffering is a core part of modern carnism, right? And insofar as you see animals as morally irrelevant agents for human manipulation, it's logical to be excited about certain things that otherwise seem grotesque.

As an analogy: I don't see random farmed trees as morally significant. So if there is some powerful manipulation one can do to a tree, e.g. sawing into it with a chainsaw or running into it with some big machine, I wouldn't feel outraged about it, even if it doesn't serve a business need. Instead I might even get excited about it, if it looks awesome enough.

So in the case of animals, if one doesn't care for them, one might enjoy showing off one's new and powerful techniques. This makes some of the more absolutist vegan policies make more sense to me. Like it seems like to avoid this, you'd need to forbid carnists from farming animals.

comment by Richard_Kennaway · 2023-10-08T13:51:35.083Z · LW(p) · GW(p)

(I deleted my previous comment before I saw your reply, as having been already said earlier. But your quoting from it contains most or all of it.)

So I guess the question is whether you prefer being in an epistemic environment that is caging, mutilating, slaughtering animals

By "epistemic environment" I understand the processes of reasoning prevalent there, be they good (systematically moving towards knowledge and truth) or bad (systematically moving away). The subject matter is not the product of the epistemic environment, only the material it operates on. Hence my perplexity at the idea of an epistemic environment doing the things you attribute to it.

I'm not sure what you mean by "not ideology". My understanding is that they have an ideology that falsely claims that it is healthy to eat nothing but meat. In this case, health reasons and ideology are tightly linked.

That is merely a belief that these people hold about what sort of diet is healthy. "Ideology" as I understand the word, means beliefs specifically about how society as a whole should be organised. These are moral beliefs. People who believe a meat-only diet is healthy do not recommend it on any other ground but health. They may believe that one morally ought to maintain one's health, but that applies to all diets, and is a reason often given for a vegetarian diet. Veganism is an ideology, holding it wrong to make or use any animal products whatever, and right to have such things forbidden, on the grounds of animal suffering, or more generally the right of animals not to be used for human purposes. Veganism is not undertaken to improve one's health, unless via a halo effect: it's morally good so it must be physically beneficial too. Ensuring a complete diet is something a vegan has to take extra care over.

Replies from: tailcalled
comment by tailcalled · 2023-10-08T14:42:46.382Z · LW(p) · GW(p)

The subject matter is not the product of the epistemic environment, only the material it operates on. Hence my perplexity at the idea of an epistemic environment doing the things you attribute to it.

I think this constitutes a rejection of rationalism and effective altruism? Eliezer started rationalism because he believed that a better epistemic environment would change the subject matter to AI safety and make people focus on more fruitful aspects of AI safety. I'm not sure how effective altruism started, but I believe it involved similar reasoning about how, if you think well about charity, you might find good neglected causes.

Just highlighting this to see if this is the crux.

That is merely a belief that these people hold about what sort of diet is healthy. "Ideology" as I understand the word, means beliefs specifically about how society as a whole should be organised. These are moral beliefs. People who believe a meat-only diet is healthy do not recommend it on any other ground but health. They may believe that one morally ought to maintain one's health, but that applies to all diets, and is a reason often given for a vegetarian diet. Veganism is an ideology, holding it wrong to make or use any animal products whatever, and right to have such things forbidden, on the grounds of animal suffering, or more generally the right of animals not to be used for human purposes.

"Eating nothing but meat is healthy" is merely a belief about nutrition, so I agree that this by itself is not an ideology. However I believe it is part of a wider belief system that can reasonably be considered an ideology.

In the only-eating-meat case, it's sort of weird because anecdotally they seem to have some strange masculinist-primitivist ideology that I don't know much about.

It's easier to talk about mainstream carnism. Carnists seem to believe that farms should raise animals for slaughter and consumption, that it's reasonable to shame people if they oppose carnism at social gatherings (i.e. if someone serves meat at their birthday and a vegan starts talking about how horrible that is, it's reasonable to call the vegan mean/crazy and tell them to shut up), and some (not sure how many) even believe that the state should send the police to arrest activists who break into factory farms in order to reveal what's going on there to others.

These very much seem like beliefs about how society should be organized to me? Like, we're covering the production and consumption of fundamental resources like food, cultural views about appropriate social interactions, and laws enforced through state violence.

Veganism is not undertaken to improve one's health, unless via a halo effect: it's morally good so it must be physically beneficial too. Ensuring a complete diet is something a vegan has to take extra care over.

I agree that many followers of many ideologies end up with stupid, insane, biased and otherwise wrong beliefs.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2023-10-15T19:09:22.899Z · LW(p) · GW(p)

I think this constitutes a rejection of rationalism and effective altruism?

Well, I do reject EA, or rather its intellectual foundation in Peter Singer and radical utilitarianism. But that's a different discussion, involving the motte-and-bailey of "Wouldn't you want to direct your efforts in the most actually effective way?" vs "Doing good isn't the most important thing, it's the only thing".

Rationalism in general, understood as the study and practice of those ways of thought and action that reliably lead towards truth and effectiveness and not away from them, yes, that's a good thing. Eliezer founded LessWrong (and before that, co-founded Overcoming Bias) because he was already motivated by the threat of AGI, but saw a basic education in how to think as a prerequisite for anyone to be capable of having useful ideas about AGI. The AGI threat drove his rationalism outreach, rather than rationalism leading to the study of how to safely develop AGI.

Carnists seem to believe that ...

I notice that people who eat meat are generally willing to accommodate vegetarians when organising a social gathering, and perhaps also vegans but not necessarily. I would expect them to throw out any vegan who knowingly comes into a non-vegan setting and starts screaming about dead animals.

More generally, calling anyone who doesn't care about someone's ideology because they have better things to think about "ideological" is on the way to saying "everything is ideological, everything is political, everything is problematic, and if you're not for us you're against us". And some people actually say that. I think they're crazy, and if I see them breaking and entering, I'll call the police on them.

Replies from: tailcalled
comment by tailcalled · 2023-10-16T17:15:33.185Z · LW(p) · GW(p)

Rationalism in general, understood as the study and practice of those ways of thought and action that reliably lead towards truth and effectiveness and not away from them, yes, that's a good thing. Eliezer founded LessWrong (and before that, co-founded Overcoming Bias) because he was already motivated by the threat of AGI, but saw a basic education in how to think as a prerequisite for anyone to be capable of having useful ideas about AGI. The AGI threat drove his rationalism outreach, rather than rationalism leading to the study of how to safely develop AGI.

Maybe a way to phrase my objection/confusion is:

In this quote, it seems like you are admitting that the epistemic environment does influence subject ("thought") and action on some "small scale". Like for instance rationalism might make people focus on questions like instrumental convergence and human values (good epistemics) instead of the meaning of life (bad epistemics due to lacking concepts of orthogonality), and might e.g. make people focus on regulating rather than accelerating AI.

Now my thought would be that if it influences subject and action on the small scale, then presumably it also influences subject and action on the large scale. After all, there's no obvious distinction between the scales. Conversely, I guess I now infer that you do draw some distinction between these scales?

I notice that people who eat meat are generally willing to accommodate vegetarians when organising a social gathering, and perhaps also vegans but not necessarily. I would expect them to throw out any vegan who knowingly comes into a non-vegan setting and starts screaming about dead animals.

I didn't say anything about screaming. It could go something like this:

Amelia's living room was a dance of warm hues with fairy lights twinkling overhead. Conversations ebbed and flowed as guests exchanged stories and laughter over drinks. The centerpiece of the food table was a roasted chicken, its golden-brown skin glistening under the ambient light.

As guests approached to fill their plates, Luna, with her striking red hair, made her way to the table. She noticed the chicken and paused, taking a deep breath.

Turning to a group that included Rob, Amelia, and a few others she didn't know well, she said, "It always makes me a bit sad seeing roasted chickens at gatherings." The group paused, forks midway to their plates, to listen to her. "Many of these chickens are raised in conditions where they're tightly packed and can't move freely. They’re bred to grow so quickly that it causes them physical pain."

Increasing the volume of one's speech is physically unpleasant and makes it harder for others to get a word in, though it has the advantage of being easier to hear when there is background noise. Thus screaming would be indicative of something non-truthseeking going on (albeit not necessarily from the screamer, as they might be trying to overwhelm others who are being non-truthseeking, though in practice I expect that either both would be truthseeking or both would be non-truthseeking).

More generally, calling anyone who doesn't care about someone's ideology because they have better things to think about "ideological" is on the way to saying "everything is ideological, everything is political, everything is problematic, and if you're not for us you're against us". And some people actually say that.

I don't think one can avoid ideologies, or that it would be desirable to do so.

Replies from: Richard_Kennaway, Richard_Kennaway
comment by Richard_Kennaway · 2023-10-16T20:39:03.015Z · LW(p) · GW(p)

Turning to a group that included Rob, Amelia, and a few others she didn't know well, she said, "It always makes me a bit sad seeing roasted chickens at gatherings." The group paused, forks midway to their plates, to listen to her. "Many of these chickens are raised in conditions where they're tightly packed and can't move freely. They’re bred to grow so quickly that it causes them physical pain."

One of them replies with a shrug, "So I've heard. I can believe it." Another says, "You knew this wasn't a vegan gathering when you decided to come." A third says, "You have said this; I have heard it. Message acknowledged and understood." A fourth says, "This is important to you; but it is not so important to me." A fifth says "I'm blogging this." They carry on gnawing at the chicken wings in their hands.

These are all things that I might say, if I were inclined to say anything at all.

Replies from: tailcalled
comment by tailcalled · 2023-10-16T21:40:19.537Z · LW(p) · GW(p)

Valid responses.

comment by Richard_Kennaway · 2023-10-16T17:53:04.470Z · LW(p) · GW(p)

In this quote, it seems like you are admitting that the epistemic environment does influence subject ("thought") and action on some "small scale". Like for instance rationalism might make people focus on questions like instrumental convergence and human values (good epistemics) instead of the meaning of life (bad epistemics due to lacking concepts of orthogonality), and might e.g. make people focus on regulating rather than accelerating AI.

By "epistemic environment" I understand the standard of rationality present there. Rationality is a tool that can be deployed towards any goal. A sound epistemic environment is no guarantee that the people in it espouse any particular morality.

Replies from: tailcalled
comment by tailcalled · 2023-10-16T21:42:52.727Z · LW(p) · GW(p)

I agree that morality is not solely determined by epistemics; the orthogonality thesis holds true. However people's opinions will also be influenced by their information, due to e.g. expected utility and various other things.

comment by Natália (Natália Mendonça) · 2023-10-04T03:01:50.115Z · LW(p) · GW(p)

Several people cited the AHS-2 as a pseudo-RCT that supported veganism (EDIT 2023-10-03: as superior to low meat omnivorism).

[…]

My complaint is that the study was presented as strong evidence in one direction, when it’s both very weak and, if you treat it as strong, points in a different direction than reported

 

[Note: this comment was edited heavily after people replied to it.]

I think this is wrong in a few ways:

1. None of the comments referred to “low meat omnivorism.” AHS-2 had a “semi-vegetarian” category composed of people who eat meat in low quantities, but none of the comments referred to it

2. The study indeed found that vegans had lower mortality than omnivores (the hazard ratio was 0.85 (95% CI, 0.73–1.01)); your post makes it sound like it's the opposite by saying that the association "points in a different direction than reported." I think what you mean to say is that vegan diets were not the best option if we look only at the point estimates of the study, because pescetarianism was very slightly better. But the confidence intervals were wide and overlapped too much [LW(p) · GW(p)] for us to say with confidence which diet was better (a numerical sketch of this overlap check appears after this list).

Here's a hypothetical scenario. Suppose a hypertension medication trial finds that Presotex monotherapy reduced stroke incidence by 34%. The trial also finds that Systovar monotherapy decreased the incidence of stroke by 40%, though the confidence intervals were very similar to Presotex’s. 

Now suppose Bob learns this information and tells Chloe:  "Alice said something misleading about Presotex. She said that a trial supported Presotex monotherapy for stroke prevention, but the evidence pointed in a different direction than she reported."

I think Chloe would likely come out with the wrong impression about Presotex.

3. My comment, which you refer to in this section, didn't [EA(p) · GW(p)] describe the AHS-2 as having RCT-like characteristics. I just thought it was a good observational study. Froolow, a person I quoted in my comment, was the one who originally and mistakenly described it as a quasi-RCT (in another post I had not read at the time), but the comment of Froolow's that I quoted didn't describe it as such, and I thought it made sense without that assumption.

4. Froolow's comment and mine were both careful to note that the study findings are weak and consistent with veganism having no effect on lifespan. I don't see how they presented it as strong evidence.
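To make the overlap point in (2) concrete, here is a minimal sketch of the standard back-of-the-envelope check: recover each hazard ratio's log-scale standard error from its 95% CI, then ask whether the two estimates are statistically distinguishable. The vegan figures are the ones quoted above; the pescetarian numbers are invented placeholders, since the exact values aren't quoted in this thread.

```python
import math

def log_se_from_ci(lo, hi, z=1.96):
    # A hazard ratio's 95% CI is symmetric on the log scale,
    # so log(hi) - log(lo) spans 2 * z standard errors.
    return (math.log(hi) - math.log(lo)) / (2 * z)

def z_for_difference(hr1, ci1, hr2, ci2):
    # z-statistic for the difference of two log hazard ratios,
    # treating the estimates as independent (a rough approximation,
    # since both groups share the same omnivore reference category).
    se1, se2 = log_se_from_ci(*ci1), log_se_from_ci(*ci2)
    diff = math.log(hr1) - math.log(hr2)
    return diff / math.sqrt(se1 ** 2 + se2 ** 2)

# Vegan vs. omnivore HR from the comment above: 0.85 (95% CI 0.73-1.01).
# The pescetarian HR and CI below are hypothetical placeholders of similar width.
z = z_for_difference(0.85, (0.73, 1.01), 0.81, (0.69, 0.94))
print(f"z = {z:.2f}")  # ~0.4, far below the ~1.96 needed for p < 0.05
```

With point estimates this close and intervals this wide, the difference between the two diets is nowhere near statistical significance, which is the sense in which the study can't rank them.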

[Note: I deleted a previous comment making those points and am re-posting a reworded version.]

Replies from: GWS
comment by Stephen Bennett (GWS) · 2023-10-04T04:54:37.403Z · LW(p) · GW(p)

In the comment thread you linked, Elizabeth stated outright what she found misleading: https://forum.effectivealtruism.org/posts/3Lv4NyFm2aohRKJCH/change-my-mind-veganism-entails-trade-offs-and-health-is-one?commentId=mYwzeJijWdzZw2aAg [EA(p) · GW(p)]

Getting the paper author on EAF did seem like an unreasonable stroke of good luck.

I wrote out my full thoughts here, before I saw your response, but the above captures a lot of it. The data in the paper is very different from what you described. I think it was especially misleading to give all the caveats you did without mentioning that pescetarianism tied with veganism in men, and surpassed it for women.

I expect people to read the threads that they are linking to if they are claiming someone is misguided, and I do not think that you did that.

Replies from: Natália Mendonça
comment by Natália (Natália Mendonça) · 2023-10-04T05:04:45.715Z · LW(p) · GW(p)

See this comment [LW(p) · GW(p)].

Replies from: GWS
comment by Stephen Bennett (GWS) · 2023-10-04T15:50:19.682Z · LW(p) · GW(p)

See this comment [LW(p) · GW(p)].

You edited your parent comment [LW(p) · GW(p)] significantly in such a way that my response [LW(p) · GW(p)] no longer makes sense. In particular, you had said that Elizabeth summarizing this comment thread [EA(p) · GW(p)] as someone else being misleading was itself misleading.

In my opinion, editing your own content in this way without indicating that this is what you have done is dishonest and a breach of internet etiquette. If you wanted to do this in a more appropriate way, you might say something like "Whoops, I meant X. I'll edit the parent comment to say so." and then edit the parent comment to say X and include some disclaimer like "Edited to address Y"


Okay, onto your actual comment. That link does indicate that you have read Elizabeth's comment, although I remain confused about why your unedited parent comment expressed disbelief about Elizabeth's summary of that thread as claiming that someone else was misleading.

Replies from: Natália Mendonça
comment by Natália (Natália Mendonça) · 2023-10-04T19:33:03.671Z · LW(p) · GW(p)

Hi, that was an oversight, I've edited it now.

comment by Pretentious Penguin (dylan-mahoney) · 2023-09-29T18:07:22.311Z · LW(p) · GW(p)

I am less convinced of the link between excess meat and health issues than I was before I read it, because surely if the claim was easy to prove the paper would have better supporting evidence, or the EA Forum commenter would have picked a better source.

This may be a valid update, but I think there's also a Hanlon's razor-esque argument to be made that even if a claim is easy to prove, we would expect to observe many terrible arguments made in its favor due to most humans being generally stupid and lazy.

comment by Chipmonk · 2023-09-29T09:08:16.779Z · LW(p) · GW(p)

Why is this post not tagged Front Page?

comment by Elizabeth (pktechgirl) · 2023-11-20T02:12:19.252Z · LW(p) · GW(p)

In the next post I’ll do a wider but shallower review of other instances of EA being hurt by a lack of epistemic immune system. I already have a long list, but it’s not too late for you to share your examples

 

I wrote this two months ago, and people could fairly be asking "so where is it then?". I especially worry that I broke something of a promise to vegan advocates that this post was a transitory step in criticizing something larger.

When I published this, I had a lot of the planned next post already written. A few things happened that slowed me down, but the bigger problem is that the laundry list post never felt right. It was always covering too much, too fast. For the time being I've been publishing targeted criticisms bit by bit on EAForum quick takes (as well as some non-public stuff). I haven't cross-posted to LessWrong because I didn't want my LW wall overrun with deeply in-the-weeds criticism of EA. But if you are interested, or just want to verify I haven't dropped the topic, you can check out my quick takes [? · GW] and posts [EA · GW] on EAF.

comment by Pretentious Penguin (dylan-mahoney) · 2023-09-29T16:03:34.322Z · LW(p) · GW(p)

But I can’t trust his math because he’s cut himself off from half the information necessary to do the calculations. How can he estimate the number of vegans harmed or lost due to nutritional issues if he doesn’t let people talk about them in public?

This paragraph seems like a fully general counterargument against ever refraining from an information-gathering action on the grounds that the expected value of the information it provides is less than the expected harm it causes. Yet evidently there are examples, e.g. in medicine, where one ought to refrain.
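To spell out the decision rule this paragraph gestures at (my formalization, with invented numbers, not anything from the post): one refrains from an information-gathering action exactly when its expected informational value falls below its expected harm.

```python
# Toy value-of-information comparison; every number is an invented placeholder.
p_useful = 0.6          # chance the action yields decision-relevant information
value_if_useful = 10.0  # utility of acting on that information
expected_harm = 8.0     # expected harm caused by performing the action

expected_info_value = p_useful * value_if_useful  # 6.0
# The implied rule: refrain if and only if expected value < expected harm.
refrain = expected_info_value < expected_harm
print(refrain)  # True: with these numbers, one would refrain
```

The objection is that this rule, applied without limits, licenses refusing any inquiry whose results might be uncomfortable; jimrandomh's reply below addresses why the harm term is nearly always small when the action is merely sharing true information.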

Replies from: jimrandomh
comment by jimrandomh · 2023-09-29T19:46:58.949Z · LW(p) · GW(p)

Disagree. The straightforward reading of this is that claims of harm that route through sharing of true information will nearly always be very small compared to the harms that route through people being less informed. Framed this way, it's easy to see that, for example, the argument doesn't apply to things like dangerous medical experiments, because those would have costs that aren't based in talk.

Replies from: dylan-mahoney
comment by Pretentious Penguin (dylan-mahoney) · 2023-09-29T20:11:25.180Z · LW(p) · GW(p)

I agree that "claims of harm that route through sharing of true information will nearly-always be very small compared to the harms that route through people being less informed", I just don't see it as a straightforward reading of the paragraph I was commenting on. But arguing over the exegesis of a blog post is probably a waste of time if we agree at the object level.

comment by Review Bot · 2024-02-18T14:38:37.977Z · LW(p) · GW(p)

The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2024. The top fifty or so posts are featured prominently on the site throughout the year. Will this post make the top fifty?

comment by Iknownothing · 2023-10-02T09:18:55.712Z · LW(p) · GW(p)

Could a solution to some of this be to raise some chickens for eggs, treat them nicely, give them space to roam, etc.? 
Obviously the best would be to raise cows as well, treat them well, not kill the male calves, etc., but that's much less of an option for most.

Replies from: orthonormal
comment by orthonormal · 2023-10-02T17:34:07.320Z · LW(p) · GW(p)

Your framing makes it sound like individual raising of livestock, which is silly—specialization of expertise and labor is a very good thing, and "EA reducetarians find or start up a reasonably sized farm whose animal welfare standards seem to them to be net positive" seems to dominate "each EA reducetarian tries to personally raise chickens in a net positive way" (even for those who think both are bad, the second one seems simply worse at a fixed level of consumption).

Replies from: Iknownothing
comment by Iknownothing · 2023-10-02T17:58:40.286Z · LW(p) · GW(p)

I was talking about what a farmer could do. A consumer can get their eggs/milk from such a farmer and fund/invest in such a farm, if they can. 
Or they can talk to a local farm about setting aside some chickens, paying for them to be given extra space, better treatment, etc.

I don't really know what you mean about the EA reducetarian stuff. 

Also, if you as an individual want to be healthy, not contribute to harming animals, and have the time, space, money, willingness, etc. to raise some chickens, why not? 

comment by momom2 (amaury-lorin) · 2023-09-29T12:04:43.654Z · LW(p) · GW(p)

That’s the first five subsections. The next set maybe look better sourced, but I can’t imagine them being good enough to redeem the paper. I am less convinced of the link between excess meat and health issues than I was before I read it, because surely if the claim was easy to prove the paper would have better supporting evidence, or the EA Forum commenter would have picked a better source.

That's confirmation bias if I've ever seen it.
It seems likely to me that you're exposed to a lot of low-quality anti-meat content, and you should correct for selection bias, since you're likely to read only what supports your view that the arguments are bad, and recommendation algorithms often select for infuriatingness.

[Note: I didn’t bother reading the pro-meat section. It may also be terrible, but this does not affect my position.]

??? Surely if meat-being-good was easy to prove, the paper would have better supporting (expected) evidence.

You should probably take a step back and disengage from that topic to restore your epistemics about how you engage with (expected low-quality) pro-vegan content.

Replies from: localdeity
comment by localdeity · 2023-09-29T20:26:55.912Z · LW(p) · GW(p)

(expected low-quality) pro-vegan content

You think that Elizabeth should have expected that taking an EA forum post with current score 87, written by "a vegan author and data scientist at a plant-based meat company", and taking "what looked like his strongest source", would yield a low-quality pro-vegan article?  I mean, maybe that's true, but if so, that seems like a harsher condemnation of vegan advocacy than anything Elizabeth has written.

Replies from: amaury-lorin
comment by momom2 (amaury-lorin) · 2023-09-29T20:58:06.304Z · LW(p) · GW(p)

Not before reading the link, but Elizabeth did state that they expected the pro-meat section to be terrible without reading it, presumably because of the first part.

Since the article is low-quality in the part they read and expected low-quality in the part they didn't, they shouldn't take it as evidence of anything at all; that is why I think it's probably confirmation bias to take it as evidence against excess meat being related to health issues.

Reason for retraction: In hindsight, I think my tone was unjustifiably harsh and incendiary. Also, the karma suggests that whatever I wrote probably wasn't that interesting.

comment by Roko · 2023-10-16T23:38:25.010Z · LW(p) · GW(p)

Sometimes one just has to cut the Gordian Knot:

  • Veganism is virtue signalling
  • It's unhealthy
  • It's not a priority in any meaningful value system
  • Vegan advocates are irrational about this because they are part of a secular religion
  • Saying otherwise is heresy

Conclusion: eat as much meat as you like and go work on something that actually matters. Note that EA has bad epistemics in general because it has religious aspects.

Addendum: I know this will get downvoted. But Truth is more important than votes. Enjoy!

Replies from: tristan-williams, FiftyTwo
comment by Tristan Williams (tristan-williams) · 2023-10-26T09:59:37.159Z · LW(p) · GW(p)

This is where I'd like to insert a meme with some text like "did you even read the post?" You:

  • Make a bunch of claims that you fail to support, like at all
  • Generally go in for being inflammatory by saying "it's not a priority in any meaningful value system", i.e. "if you value this, then your system of meaning in the world is in fact shit and not meaningful"
  • Pull the classic "what I'm saying is THE truth and whatever comes (the downvotes) will be a product of people's denial of THE truth", which means that to anyone who responds, you'll likely just say something like "That's great that you care about karma, but I care about truth, and I've already revealed that divine truth in my comment, so no real need to engage further here"

If I were to grade comments on epistemic hygiene (or maybe hygiene more generally), this would get something around an "actively playing in the sewer water" rating.

comment by FiftyTwo · 2023-10-20T10:20:26.166Z · LW(p) · GW(p)

I somewhat agree with this, but I think it's an uncharitable framing of the point, since virtue signalling is generally used for insincerity. My impression is that the vegans I've spoken with are mostly acting sincerely based on their moral premises, but those are not ones I share. If you sincerely believe that a vast atrocity is taking place that society is ignoring, then a strident emotional reaction is understandable.

Replies from: Roko
comment by Roko · 2023-10-25T22:11:34.730Z · LW(p) · GW(p)

> virtue signalling is generally used for insincerity

Virtue signalling can be sincere.

comment by aphyer · 2023-09-29T13:08:40.868Z · LW(p) · GW(p)

It's not my problem! I have not made the mistake of moving to the Bay Area, where vegans can try to punish me, so I can just ignore them and continue eating my delicious, delicious meats.

Replies from: jimrandomh, GWS
comment by jimrandomh · 2023-09-29T20:09:11.216Z · LW(p) · GW(p)

If the information environment prevents people from figuring out the true cause of the obesity epidemic, or making better engineered foods, this affects you no matter what place and what social circles you run in. And if epistemic norms are damaged in ways that lead to misaligned AGI instead of aligned AGI, that could literally kill you.

The stakes here are much larger than the individual meat consumption of people within EA and rationality circles. I think this framing (moralistic vegans vs selfish meat eaters with no externalities) causes people to misunderstand the world in ways that are predictably very harmful.

comment by Stephen Bennett (GWS) · 2023-09-29T20:45:01.363Z · LW(p) · GW(p)

> Audience
>
> If you’re entirely uninvolved in effective altruism you can skip this, it’s inside baseball and there’s a lot of context I don’t get into.