This post didn't do well in the games of LessWrong karma, but it was probably the most personally fruitful use of my time on the site in 2023. It helped me clarify views I had already formed but hadn't yet put to paper or properly cohered.
I also got to think about the movement as a whole, and really enjoyed some of what Elizabeth had to share. I particularly remember her commentary on the lack of positivity in the movement; I've taken that to heart and really thought about how I can add more positivity in.
When thinking on this, do you seriously not think that one candidate will be better than the other? Does your worldview not bring you to see one as even a slightly better candidate?
Mmm okay, I'm a bit confused by the thrust of the first bit. Is it that you wish to set yourself apart from my view because you see it unavoidably leading to untenable positions (like self-extinguishing)?
Jumping to the rest of it, I liked how you put the latter option for the positioning of the shepherd. I'm not sure, though, that the "shepherd impulse" has been felt out with the full sort of appreciation I think is important.
But I think you're right to point towards a general libertarian viewpoint as a crux here, because I'm relatively willing to reason through what's good and bad for the community and work towards designing a world more in line with that vision, even if it's more choice-constrained.
But yeah, that society is a good example to help us figure out where to draw the line. It makes me most immediately wonder: is there anything so bad that you'd want to restrict people from doing it, even if they voluntarily entered into it? And is creating lives one of the key goods to you, such that most forms of life are worth creating just for their existence?
To answer your last question, it's the latter: a world where synthetic alternatives and work on ecological stability yield the possibility of a future for predators who no longer must kill for survival. It would certainly mean a lot fewer cows and chickens exist, but my own conclusions from the above questions lead me to think this would be a better world.
Thanks for the continued dialogue, happy to jump back in :)
I think it's very reasonable to take a "what would they consent to" perspective, and I do think this sort of setup would likely lead you to a world where humane executions and lives off the factory farm were approved of. But I guess I'd turn back to my original point: this sort of arrangement seems apt to encourage a relation to the animal that will be naturally unstable and will undermine a caring relationship with that animal.
Perhaps I just have a dash too much of deontology in me, but if you asked me to choose between a world where many people had kids but ate them in the end, or a world of significantly fewer kids but no such chowing down at the end of their lives, I'd be apt to choose the latter. But deontology isn't exactly the right frame, because again, I think this will just naturally encourage relationships that aren't whole, relationships where you have to do the complicated emotional gymnastics of saying that you love an animal like they're your friend one day and then chopping their head from their body the next and savoring the flavor of the flesh on the grill.
Maybe my view of love is limited, but I also think nearly every example you'd give me of people who've viewed animals as "sacred or people" but still ate them likely involved a deficient relationship to the animal. Take goats and the Islamic faith, for example. It's not fully the "sacred" category like cows for Hindus, but the animal has come to take a ritualistic role in various celebrations of the religion, and when I've talked to Muslims about the reason for this treatment, or for things being halal, they will normally point out that this is a more humane relation to have with the animal. The meat being "clean" is supposed to imply, to some degree, "moral", but I think this relation isn't quite there. I've seen throat cuttings from Eid which involve younger members of the family being brought into the fold by serving as axeman, often taking multiple strikes to sever the head, a manner of slaughter that seems quite far from caring. One friend of mine, who grew up in India with his family raising a number of goats for this occasion, often saw the children loving the goats and giving them names and such. But on Eid this would stop, and I think what the tradition left my friend with is a far friendlier view of meat consumption than he would have developed otherwise.
My last stab at a response might be to bring up an analogy to slavery. I take the equivalent of your position here to be "look, if each slave can look at the potential life he will hold and prefer that life to no life at all, then isn't that better than him not existing at all?" And to me it seems like I'd again be called to say "no". We can create the life of a slave, we can create the life of a cow we plan to eat in the end, but I'd rather just call off the suffering altogether and refuse to create beings that will be shackled to such a life. It's not a perfect analogy, but I hope it illustrates that we can deny the category entirely, and that that denial can open us up to a better future: one without slaves who prefer their life to not existing, but with fellow citizens; one without farmed animals who prefer their life to not existing, but with pets we happily welcome into our families. That is the sort of world I hope for.
Not sure, but maybe the new AI institute they're setting up as a result
Garrett responded to the main thrust well, but I will say that watermarking synthetic media seems fairly good as a next step for combating misinformation from AI imo. It's certainly widely applicable (not really even sure what the thrust of this distinction was) because it is meant to apply to nearly all synthetic content. Why exactly do you think it won't be helpful?
Yeah, I think the reference class for me here is other things the executive branch might have done, which leads me to "wow, this was way more than I expected".
Worth noting is that they are at least trying to address deception by including it in the full bill readout. The types of models they hope to regulate here include those that permit "the evasion of human control or oversight through means of deception or obfuscation". The director of the OMB also has to come up with tests and safeguards for "discriminatory, misleading, inflammatory, unsafe, or deceptive outputs".
(k) The term “dual-use foundation model” means an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters, such as by:
(i) substantially lowering the barrier of entry for non-experts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear (CBRN) weapons;
(ii) enabling powerful offensive cyber operations through automated vulnerability discovery and exploitation against a wide range of potential targets of cyber attacks; or
(iii) permitting the evasion of human control or oversight through means of deception or obfuscation.
Models meet this definition even if they are provided to end users with technical safeguards that attempt to prevent users from taking advantage of the relevant unsafe capabilities.
Hmm, I get that people value succinctness a lot with these sorts of things, since there's so much AI information to take in now, so I'm not so sure about the net effect. But I'm wondering if I could get at your concern here by mocking up a percentage (i.e. what percentage of the proposals were risk-oriented vs. progress-oriented)?
It wouldn't tell you the type of stuff the Biden administration is pushing, but it would tell you the ratio, which is what you seem perhaps most concerned with.
[Edit] This is included now.
What alternative would you propose? I don't really like "mundane risk" but agree that an alternative would be better. For now I'll just change it to "non-existential risk actions".
This is where I'd like to insert a meme with some text like "did you even read the post?" You:
- Make a bunch of claims that you fail to support, like at all
- Generally go in for being inflammatory by saying "it's not a priority in any meaningful value system" i.e. "if you value this then your system of meaning in the world is in fact shit and not meaningful"
- Pull the classic "what I'm saying is THE truth and whatever comes (the downvotes) will be a product of people's denial of THE truth", which means to anyone who responds you'll likely just say something like "That's great that you care about karma, but I care about truth, and I've already revealed that divine truth in my comment, so no real need to engage further here"
If I were to grade comments on epistemic hygiene (or maybe hygiene more generally), this would get something around an "actively playing in the sewer water" rating.
I don't think we can rush to judgement on your character so quickly. My ability to become a vegan, or rather to at least take this step in trying to be that sort of person, was heavily intertwined with some environmental factors. I grew up on a farm, so I experienced some of what people talk about first hand. Even though I didn't process it as something overall bad at the time, a part of me was unsettled, and I think I drew pretty heavily on that memory, and on having been there, in my vegan transition period.
I guess the point is something like: you can't just become that person the day after you decide you want to be. Sometimes the best thing you can do is try to learn and engage more and see where that gets you. With this example that would mean going to a slaughterhouse yourself and participating, which maybe isn't a half bad idea (though I haven't thought this through at all, so I may be missing something).
Also, giving up chicken is not a salve; it's a great first step, a trial period that can serve as a positive exemplar of what's possible for the version of yourself that might wish to fully revert back one day. I believe in you, and wish you the best of luck with your journey :)
Have no idea what it entails, but I enjoy conversing and learning more about the world, so I'd happily do a dialogue! Happy to keep it in the clouds too.
But yeah, you make a good point. I mean, I'm not sure what the proper Schelling point is, and would eagerly eat up any research on this. Maybe what I think is that for a specific group of people like me (no idea what exactly defines that group) it makes sense, but that generally what's going to make sense for a person has to be quite tailored to their own situation and traits.
I would push back on the "no animal products through the mouth" bit. Sure, it happens to include lesser forms of suffering that might be less important than changing other things in the world (and if you assumed this was zero-sum that might be a problem, but I don't think it is). But generally it focuses on avoiding suffering that you are in control of, in a way that updates in light of new evidence. Vegetarianism in India is great because it leads to less meat consumption, but because it specifies particular things to avoid instead of grounding itself in suffering, it becomes much harder to convincingly explain why adherents should update to avoid eggs, for example. So yeah, protesting rat poison factories may not be a mainstream vegan thing, but I'd be willing to bet vegans are less apt to use it. And sure, vegans may be divided on what to do about sugar, but I'd be surprised if any said "it doesn't involve an animal going in my mouth, so it's okay with me". I don't think it's arbitrary; I find it rather intentional.
I could continue on here, but I'm also realizing some part of you wanted to avoid debates about vegan stuff, so I'll let this suffice and explicitly say that if you don't want to respond I fully understand (but I'd be happy to hear from you if you do!).
Thanks for such an in-depth and wonderful response, I have a couple of questions.
On 1. Perhaps the biggest reason I've stayed away from Pomodoros is the question of how much break time you can take before you need to start logging it as a reduction in time worked. Where have you come out on that debate? I.e., maybe you've found the increased productivity makes the breaks totally worth it and this hasn't really been an issue for you.
On 3. How are you strict with your weekends? The vibe I get from the rest is that normally you make sure what you're doing is restful?
On 3.5. Adding to the anecdata, I keep a fairly sporadic schedule that often extends past normal hours, and I've found that it works pretty well for me. I do find that when I'm feeling a bit down, switching back to normal hours is better for me, because otherwise I'm apt to start playing video games in the middle of the day, thinking "ah, I'm remote and have a flexible schedule, so I can do what I want!" when in reality it's usually just me doing a poor job of dealing with something, which then goes unresolved and leaves me in a tricky spot to get work done.
On 4, I'd love to hear more about your targets: are they just more concrete than goals? Do you have some sort of accountability system that you keep yourself from overriding? I think I'm coming to realize I work better with deadlines, but I'm still really unsure how to implement them in a way that forces me to stick to them while still allowing me to push something back in circumstances where I'd be better off doing so.
Sure, sure. I'm not saying there isn't perhaps an extreme wing, I just think it's quite important to say this isn't the average, and highlight that the majority of vegans have a view more like the one I mentioned above.
I think this is a distinction worth making, because when you collapse everyone into one camp, you begin to alienate the majority that actually more or less agrees with you. I don't know what the term for the group you're talking about is, but maybe "evangelical vegans" isn't a bad term to use for now.
First, thanks for your kind words, they were nice to receive :)
But I also think this is wonderfully put, and I think you're right to point to your feelings on truth as similar. As truth is for you, life to me is sacred, and I think I generally build a lot of my world out of that basic fact. I would note that each of our values is likely important to the other too: truth is also really important to me, and I value honesty and not lying more than most people I know. And on the flip side, I imagine that you value life quite a bit.
But looking at the specific case you imagine, yeah, it's really hard to imagine either value standing totally separate on its own, because I find they often lead to one another. I guess one crux for me that might give me doubts about the goodness of the truth world is not being sure on the "whether humans are innately good" question. If they aren't innately good, then everyone being honest about their intentions and what they want to do may mean there are places in the world where repression or some sort of suffering is common. I guess the way I imagine it going is the honest world having a hard time dealing with people who honestly just want some version of the world that involves inflicting some sort of harm on others. I imagine that many would likely not want this, and they would make rules as such, but that they'd have a hard time critiquing others in the world far away from themselves who have been perfectly straightforward and honest about where they stand with their values.
But I can easily imagine counterarguments here, and it's not as if a world where reducing suffering were of utmost importance wouldn't run the risk of some pretty large deviations from the truth that seem bad (i.e. a vegan government asserting there is zero potential for negative health effects from going vegan). But then we could get into standard utilitarian responses like "oh well, they really should have been going in for something like rule utilitarianism, which would have told them this was an awful decision on net" and so on and so forth. Not sure where I come out either, really.
Note: I'd love to know what practical response you have, it might not be my crux but could be insightful!
I think the first paragraph is well put, and I do agree that my camp is likely more apt to be evangelical. But I also want to say that I don't think the second paragraph is quite representative. I know approximately 0 vegans who support the "cross the line once" philosophy. I think the current status quo is something much closer to what you imagine in the second-to-last sentence, where the recommendation that most often comes to me is "look, as long as you are really thinking about it and trying to do what's best not just for you but for the animals as well, that's all it takes. We all have weak moments, and veganism doesn't mean perfection; it's just doing the best with what you've got"[1]
- ^
Sure, there are some obvious caveats here, like you can't be a vegan if you haven't significantly reduced your consumption of animals/animal products. Joe, who eats steak every night and starts every morning with eggs and cheese and a nice hearty glass of dairy milk, won't really be a vegan even if he claims the title. But I don't see the average vegan casting stones at any of the various partial-reduction diets; generally I think they're happy just to have some more people on board.
What Elizabeth had to say here is broadly right. See my comment above for some more in-depth reasoning as to why I think the opposite may be true, but basically I think that the sort of loving relationship with other animals that I imagine as the thing that holds commitment together over a long period of time, and over a large range of hard circumstances, is tricky to create when you don't go full on. I have no idea what's sustainable for you though, and want to emphasize that whatever works to reduce is something I'm happy with, so I'm quite glad for your ameliatarian addition.
I'm also trying to update my views here, so can I ask how long you've been on a veg diet? And whether you predict any changes in the near future?
While I think the environmental sustainability angle is also an active thing to think about here (because beef potentially involves less suffering for the animals, but relatively more harm to the environment), I did actually intend sustainability in the spirit of "able to stick with it for a long period of time" or something like that. Probably could have been clearer.
Just posted a comment in part in response to you (but not enough to post it as a response) and would love to have your thoughts!
[Forum Repost] Didn't catch this until just now, but happy to see the idea expanded a bit more! I'll have to sit down and think on it longer, but I did have some immediate thoughts.
I guess at its core I'm unsure what exactly a proper balance of folk ethics[1] (or commonsense good) and reasoned ethics[2] (or creative good) is, and when exactly you should engage in each. You highlight the content: that reasoned ethics should be brought in for the big decisions, those with longevity generally. And Anna starts to map this out a bit further, saying reasoned ethics involves an analysis of "the small set of decisions that are worth intensive thought/effort/research". But even if the decision set is small, if it's just these really big topics, the time spent implementing major decisions like these is likely long and full of many day-to-day tradeoffs and choices. Sure, eating vegan is now a system-one task for me, but part of what solidified veganism for me was bringing my discomfort from reasoned ethics into my day-to-day for a while, for months even. The folk ethics there (for me) pointed entirely in the opposite direction, and I honestly don't think I would have made the switch if I hadn't brought reasoned ethics into my everyday decisions.
I guess for that reason I'm kind of on guard, looking for other ways my commonsense intuitions about what I should do might be flawed. And sure, when you set it up like "folk ethics is just sticking to basic principles of benevolence, of patience, honesty, and kindness", few will argue adherence to this is flawed. But it's rarely these principles, and instead the application of them, where the disagreement comes in. My family and I don't disagree that kindness is an important value; we disagree on what practicing kindness in the world looks like.
In light of this, I think I'd propose the converse of Anna's comments: stick to folk ethics for most of the day-to-day stuff, but with some frequency[3] bring reasoned ethics into your life, into the day-to-day, to see if how you are living is in accord with your philosophical commitments. This could look like literally going through a day wearing the reasoned-ethics hat, or it could even look like taking stock of what has happened over a period of time and reflecting on whether those daily decisions are in accord. Maybe this community is different, but I agree with Eccentricity that I generally see way too little of this in the world, and really wish people engaged in it more.
- ^
I'll use folk ethics in place of commonsense good hereafter because I find the term compelling
- ^
I'll use reasoned ethics in place of creative good because I think this set (folk ethics and reasoned ethics) feels more intuitive. Sorry for changing the language, it just made it easier for me to articulate myself here.
- ^
Really unsure what's best here, so I'm leaving it intentionally vague. If I had to suggest something: at least an annual review and time of reflection is warranted (I like to do this at the end of the calendar year, but I think you could do it whenever), and at most I think checking in each week (running through a day at the end of the week and really thinking about whether the decisions and actions you are taking make sense) might be good.
See below if you'd like an in-depth look at my way of thinking, but I definitely see the analogy and suppose I just think of it a bit differently myself. Can I ask how long you've been vegetarian? And how you've come to the decision as to which animals' lives you think are net positive?
Yeah, sure. I would need a full post to explain myself, but basically I think that what seems really important when going vegan is standing in a certain sort of loving relationship to animals, one that isn't grounded in utility but instead in a strong (but basic) appreciation and valuing of the other. But let me step back for a minute.
I guess the first time I thought about this was with my university EA group. We had a couple of hardcore utilitarians, and one of them brought up an interesting idea one night. He was a vegan, but he'd been offered some mac and cheese, and in similar thinking to the above (that dairy generally involves less suffering than eggs or chicken, for example) he wondered if it might actually be better to take the mac and donate the money he would have spent to an animal welfare org. And when he roughed out the math, sure enough, taking the mac and donating was somewhat significantly the better option.
But he didn't do it, nor do I think he changed how he acted in the future. Why? I think it's really hard to draw a line in the sand that isn't veganism and that stays stable over time. For those who've reverted, I've seen time and again a slow path back, one where it starts with the less bad items (cheese is quite frequent), and then naturally over time one thing after another is added, to the point that most wind up in some sort of reducetarian state where they're maybe 80% back to normal (I also want to note here, I'm so glad for any change, and I cast no stones at anyone trying their best to change). And I guess maybe at some point it stops being a moral thing, or becomes some really watered-down moral thing, like how much people consider the environment when booking a plane ticket.
I don't know if this helps make it clear, but it's like how most people feel about harm to younger kids. When it comes to just about any serious harm to younger kids, people are generally against it, like super against it: a feeling of deep caring that to me seems to be one of the strongest sentiments shared universally by humans. People will give you some reasons for this, e.g. "they are helpless and we are in a position of responsibility to help them", but really it seems to ground out pretty quickly in a sentiment of "it's just bad".
To have this sort of love, this commitment to preventing suffering, with animals means to me pretty much just drawing the line at sentient beings and trying to cultivate a basic sense that they matter and that "it's just bad" to eat them. Sure, I'm not sure what to do about insects, and wild animal welfare is tricky, so it's not nearly as easy as I'm making it seem. And it's not that I don't want to have any idea of the numbers and research behind it all; I know I need to stay up to date on debates about sentience, and I know that I reference relative measures of harm often when I'm trying to guide non-veg people away from the worst harms. But what I'd love to see one day is a posture towards eating animals like our posture towards child abuse, a very basic, loving expression that in some sense refuses the debate on what's better or worse and just casts it all out as beyond the pale.
And to try to return to earlier, I guess I see taking this sort of position as likely to extend people's time spent on veg-related diets, and I think it's just a lot trickier to have this sort of relationship when you are doing some sort of utilitarian calculus of what is and isn't above the bar for you (again, much love to these people; something is always so much better than nothing). This is largely just a theory: I don't have much to back it up, it would seem to explain some cases of reversion I've seen but certainly not all, and I also feel like this is a bit sloppy because I'd really need a full post to get at this hard-to-describe feeling I have. But hopefully this helps explain the viewpoint a bit better, happy to answer any questions :)
Are there any hopes to get this updated again, or is it on the back burner now?
Ah okay, cool, so you have a certain threshold for harm and just don't consume anything above it. I've found this approach really interesting, and have recommended against it to others because I've worried about its sustainability, but do you think it's been a good path for you?
Did you go vegetarian because you thought it was specifically healthier than going vegan?
What do you feel like your plan is now moving forward? Like, do you have a specific subset of this you hope to try out?
How exactly do you come to "up to and including acts of war"? His writing here was concise because it was in TIME, which meant he probably couldn't caveat things in the way that protects him against EAs/rationalists picking apart his individual claims bit by bit. But from what I understand of Yudkowsky, he doesn't seem, in spirit, to necessarily support an act of war here, largely I think for reasons similar to the ones you mention below for individual violence: the negative effects of such an action may outweigh the positive and thus make it somewhat ineffective.
2. What is the Overton window? Otherwise I think I probably agree, but one question: once this non-x-risk campaign is underway, how do you keep it on track and prevent value drift? Or do you not see that as a pressing worry?
3. Cool, will have to check that out.
4. Completely agree, and just wonder what the best way to promote less distancing is.
Yeah, I suppose I'm just trying to put myself in the shoes of the FLI people who coordinated this, and I feel like many comments here are a bit more lacking in compassion than I'd like, especially the more half-baked negative takes. I also agree that we want to put attention into detail and timing, but there's also a world in which too much of this leads to nothing getting done, and it's highly plausible to me that the letter had already been an idea for long enough to make that the case here.
Thanks for responding though! Much appreciated :)
The LessWrong comments here are generally quite brutal, and I think I disagree, which I'll try to outline very briefly below. But I think it may be more fruitful here to ask some of the questions I had, to break down the possible subpoints of disagreement about the goodness of this letter.
I expected some negative reaction, because I know that Elon is generally looked down upon by the EAs I know, with some solid backing to those claims when it comes to AI given that he cofounded OpenAI. But with the immediate press attention it's getting, in combination with some heavy-hitting signatures (including Elon Musk, Stuart Russell, Steve Wozniak (Co-founder, Apple), Andrew Yang, Jaan Tallinn (Co-Founder, Skype, CSER, FLI), Max Tegmark (President, FLI), and Tristan Harris (from The Social Dilemma), among many others), I kind of can't really see the overall impact of this letter being net negative. At worst it seems mistimed and marred by technical issues, but at best it seems one of the better calls to action (or global moratoriums, as Greg Colbourn put it) that could have happened, given AI's current presence in the news and in much of the world's psyche.
But I'm not super certain of anything, and generally came away with a lot of questions. Here's a few:
- How convergent is this specific call for a pause on developing strong language models with how AI x-risk people would go about crafting a verifiable, tangible metric for AI labs to follow to reduce risk? Is this to be seen as a good first step? Or might it actually be close enough to what we want that we could rally around this metric, given its endorsement by this influential group?
- This helps clarify the "6 months isn't enough to develop the safety techniques they detail" objection, which was fairly well addressed here, as well as the "Should OpenAI be at the front" objection.
- How should we view messages that are geared more towards non-x-risk AI worries than the community is? They ask a lot of good questions here, but they are also still asking "Should we let machines flood our information channels with propaganda and untruth?", an important question, but one that to me seems to deviate from AI x-risk concerns.
- This is at least tangential to the "This letter felt rushed" objection, because even if you accept it was rushed, the next question is "Well, what's our bar for how good something has to be before it is put out into the world?"
- Are open letters with influential signees impactful? To me this letter seems neutral at worst and quite impactful at best, but I have very little to back that up, and honestly can't recall any specific time when an open letter caused significant change at the global/national level.
- Given the recent desire to distance from potentially fraught figures, would that mean shying away from a community-wide EA endorsement of such a letter because a wild card like Elon is a part of it? I personally don't think he's at that level, but I know other EAs who would be apt to characterize him that way.
- Do I sign the letter? What is the impact of adding signatures with significantly less professional or social clout to such an open letter? Does it promote the message of AI risk as something that matters to everyone? Or would someone look at "Tristan Williams, Tea Brewer" and think "oh, what is he doing on this list?"