Comments

Comment by Andaro on Should Effective Altruism be at war with North Korea? · 2019-05-08T23:45:15.101Z · LW · GW

I agree. I certainly didn't mean to imply that the Trump administration is trustworthy.

My point was that the analogy of AIs merging their utility functions doesn't apply to negotiations with the NK regime.

Comment by Andaro on Should Effective Altruism be at war with North Korea? · 2019-05-07T08:41:44.267Z · LW · GW

It's not a question of timeframes, but of how likely you are to lose the war, how big the concessions would have to be to prevent the war, and how much the war would cost you even if you win (costs can have flow-through effects into the far future).

Not that any of this matters to the NK discussion.

Comment by Andaro on Should Effective Altruism be at war with North Korea? · 2019-05-06T20:58:41.655Z · LW · GW

The idea is that isolationism and destruction aren't cheaper than compromise. Of course this doesn't work if there's no mechanism of verification between the entities, or no mechanism to credibly change the utility functions. It also doesn't work if the utility functions are exactly inverse, i.e. neither side can concede priorities that are less important to them but more important to the other side.

A human analogy, although an imperfect one, would be to design a law that fulfills the most important priorities of a parliamentary majority, even if each individual would prefer a somewhat different law.

I don't think something like this is possible with untrustworthy entities like the NK regime. They're torturing and murdering people as they go, of course they're going to lie and break agreements too.

Comment by Andaro on Asymmetric Justice · 2019-05-06T03:38:29.758Z · LW · GW

>The symmetric system is in favor of action.

This post made me think how much I value the actions of others, rather than just their omissions. And I have to conclude that the actions I value most in others are the ones that *thwart* actions of yet other people. When police and military take action to establish security against entities who would enslave or torture me, I value it. But on net, the activities of other humans are mostly bad for me. If I could snap my fingers and all other humans dropped dead (became inactive), I would instrumentally be better off than I am now. Sure, I'd lose their company and economic productivity, but it would remove all intelligent adversaries from my universe, including those who would torture me.

>The Good Place system...

I think it's worth noting that you have chosen an example of a system where people will not just be tortured, but tortured *for all eternity without the right to actually ever die*, and not even the moral philosopher character manages to formulate a coherent in-depth criticism of that philosophy. I know it's a comedy show, but it's still premised on the acceptance that there would be a system of eternal torture, that this system would be moralized as justice, and that it would of course be nonconsensual, without an exit option.

Comment by Andaro on [deleted post] 2019-04-21T21:11:41.439Z

>as the world branches, my total measure should decline many orders of magnitude every second

I'm not sure why you think that. From any moment in time, it's consistent to count all future forks toward my personal identity without having to count all other copies that don't causally branch from my current self. Perhaps this depends on how we define personal identity.

>but it doesn't affect my decision making.

Perhaps it should - tempered by the possibility that your assumptions are incorrect, of course.

Another accounting trick: count futures where you don't exist as neutral perspectives of your personal identity (empty consciousness). This should collapse the distinction between total and relative measure. Yes, it's a trick, but the alternative is even more counter-intuitive to me.

Consider a classical analogy: you're in a hypothetical situation where your future consists only of negative utility. Let's say you suffer -5000 utils per unit time for the next 10 minutes, then you die with certainty. But you have the option of adding another 10 trillion years of life at -4999 utils per unit time. If we use relative rather than total measure, this should be preferable, because your average utils will be ~-4999 per unit time rather than -5000. But it's clearly a much more horrible fate.
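
Spelled out, the comparison looks like this (a minimal sketch; the per-minute utils and the durations are just the hypothetical numbers from the example above, not empirical values):

```python
# Minimal sketch of the average-vs-total comparison from the example above.
# All figures are the hypothetical utils from the example, not empirical values.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

# Option A: 10 minutes at -5000 utils per minute, then certain death.
a_minutes = 10
a_total = -5000 * a_minutes           # -50,000 utils
a_average = a_total / a_minutes       # -5000.0 utils per minute

# Option B: the same 10 minutes, plus 10 trillion years at -4999 utils per minute.
extra_minutes = 10 * 10**12 * MINUTES_PER_YEAR
b_total = a_total + (-4999) * extra_minutes
b_average = b_total / (a_minutes + extra_minutes)   # ~ -4999.0 utils per minute

# Relative (average) measure ranks B above A, since -4999 > -5000 per minute,
# even though B's total disutility is astronomically larger than A's.
print(a_average, b_average)
print(a_total, b_total)
```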

I always found average utilitarianism unattractive because of mere addition problems like this, in addition to all the other problems utilitarianisms have.

Comment by Andaro on [deleted post] 2019-04-21T18:54:48.121Z

That's a clever accounting trick, but I only care what happens in my actual future(s), not elsewhere in the universe that I can't causally affect.

Comment by Andaro on [deleted post] 2019-04-21T18:36:25.235Z

>Thus, by not signing for cryonics she increases the share of her futures where she will be hostily resurrected in total share of her futures.

But she decreases the share of her futures where she will be resurrected at all, some of which contain hostile resurrection, and therefore she really decreases the share of her futures where she will be hostilely resurrected. She just won't consciously experience the futures where she doesn't exist, which, from the perspective of those who consider suffering negative utility, is better than suffering.

Comment by Andaro on A Case for Taking Over the World--Or Not. · 2019-04-14T11:13:00.128Z · LW · GW

>It is even possible (in fact, due to resource constraints, it is likely) that they’re at odds with one another.

They're almost certainly extremely at odds with each other. Saving humanity from destroying itself points in the other direction from reducing suffering, not by 180 degrees, but at a very sharp angle. This is not just because of resource constraints, but even more so because humanity is a species of torturers and it will try to spread life to places where it doesn't naturally occur. And that life obviously will contain large amounts of suffering. People don't like hearing that, especially in the x-risk reduction demographic, but it's pretty clear the goals are at odds.

Since I'm a non-altruist, there's not really any reason to care about most of that future suffering (assuming I'll be dead by then), but there's not really any reason to care about saving humanity from extinction, either.

There are some reasons why the angle is not a full 180 degrees: there might be aliens who would also cause suffering and with whom humanity might compete for resources; humanity might wipe itself out in ways that also cause suffering, such as AGI; or there might be practical correlations between political philosophies that cause both high suffering and a high probability of extinction, e.g. torturers are less likely to care about humanity's survival. But none of these make the goals point in the same direction.

Comment by Andaro on A Roadmap: How to Survive the End of the Universe · 2019-04-11T14:28:49.933Z · LW · GW

>Our life could be eternal and thus have meaning forever.

Or you could be tortured forever without consent and without even being allowed to die. You know, the thing organized religion has spent millennia moralizing through endless spin efforts, which is now a part of common culture, including popular culture.

Let's just look at our culture, as well as contemporary and historical global cultures. Do we have:

  • a consensus of consensualism (life and suffering should be voluntary)? Nope, we don't.
  • a consensus of anti-torture (torturing people being illegal and immoral universally)? Nope, we don't.
  • a consensus of proportionality (finite actions shouldn't lead to infinite punishments)? Nope, we don't.

You'd need at least one of these to just *reduce* the probability of eternal torture, and then it still wouldn't guarantee an acceptable outcome. And we have none of these.

They would if they could, and the only reason you're not already being tortured for all eternity is that they haven't found a way to implement it.

The probability of getting it done is small, but that is not an argument in favor of your suggestion: if it can't be done, you don't get eternal meaning either; if it can be done, you have effectively increased the risk of eternal torture for all of us by working in this direction.

Comment by Andaro on Two Small Experiments on GPT-2 · 2019-02-21T12:59:46.337Z · LW · GW

I’m confused about OpenAI’s agenda.

Ostensibly, their funding is aimed at reducing the risk of AI dystopia. Correct? But how does this research prevent AI dystopia? It seems more likely to speed up its arrival, as would any general AI research that’s not specifically aimed at safety.

If we have an optimization goal like “Let’s not get kept alive against our will and tortured in the most horrible way for millions of years on end”, then it seems to me that this funding is actually harmful rather than helpful, because it increases the probability that AI dystopia arrives while we are still alive.

Comment by Andaro on X-risks are a tragedies of the commons · 2019-02-14T09:10:01.861Z · LW · GW

Not all proposed solutions to x-risk fit this pattern: if government spends taxes to build survival shelters that will shelter only a chosen few, who will then go on to perpetuate humanity in case of a cataclysm, most taxpayers receive no personal benefit.

Similarly, if government-funded programs solve AI value loading problems and the ultimate values don't reflect my personal self-regarding preferences, I don't benefit from the forced funding and may in fact be harmed by it. This is also true for any scientific research whose effect can be harmful to me personally even if it reduces x-risk overall.

Comment by Andaro on Why do you reject negative utilitarianism? · 2019-02-13T02:25:25.542Z · LW · GW

>What have you read about it that has caused you to stop considering it, or to overlook it from the start?

I reject impartiality on the grounds that I'm a personal identity and therefore not impartial. The utility of others is not my utility, therefore I am not a utilitarian. I reject unconditional altruism in general for this reason. It amazes me in hindsight that I was ever dumb enough to think otherwise.

>Can you teach me how to see positive states as terminally (and not just instrumentally) valuable, if I currently don’t?

Teach, no, but there are some intuitions that can be evoked. I'd personally take a 10:1 ratio between pleasure and pain; if I get 10 times more pleasure out of something, I'll take any pain as a cost. It's just usually not realistic, which is why I don't agree that life has generally positive value.

There are fictional descriptions of extreme pleasure enhancement and wireheading, e.g. in fantasy, that describe worthwhile states of experience. The EA movement is fighting against wireheading, as you can see in avturchin's posts. But I think such a combination of enhancement + wireheading could plausibly come closest to delivering net-positive value of life, if it could be invented (although I don't expect it in my lifetime, so it's only theoretical). Here's an example from fiction:

"You see, I have a very special spell called the Glow. It looks like this." The mage flicked his fingers and they started Glowing in a warm, radiant light. Little Joanna looked at them in awe. "It's so pretty! Can you teach me that? Is that the reward?" Melchias laughed. "No. The true reward happens when I touch you with it." She stared at him curiously. He looked down at the table in front of him with an expectant look, and she put her slender arm there so he could touch her. "Here, let me demonstrate. Now, this won't hurt one bit..." He reached out with his Glowing fingers to touch the back of her small hand, ever so gently.
And as their skin connected, Joanna's entire world exploded. The experience was indescribable. Nothing, no words and no warnings, could have prepared Joanna for the feeling that was now blasting through her young mind with the screaming ferocity of a white-hot firestorm, ripping her conscious thoughts apart like a small, flickering candlelight in a gigantic hurricane of whipping flames and shredding embers. She had no idea, no idea such pleasure existed, ever could exist! It was all of her happy memories, all of her laughter, her playfulness, her sexy tingles when she rubbed herself between the legs, the goodness of eating apple pie, the warmth of the fireplace in the winter nights, the love in her papa's strong arms, the fun of her games and friendships with the other village kids, the excitement of stories yet unheard, the exhilaration of running under the summer sun, the fascination of the nature and the animals around her, the smells and the tastes, the hugs and awkward kisses, all the goodness of all her young life, all condensed into a mere thousandth of a split-second... ...and amplified a thousand-fold... ...and shot through her mind, through her soul, again and again and again, split-second after split-second after split-second, like a bombardment of one supernova after another supernova of pure, unimaginable bliss, again and again and again, and yet again, second after second, filling her up, ripping her apart with raw ecstatic brilliance, melting her mind together in a new form, widened and brighter than it had ever been, a new, divine, god-like Joanna that no words could adequately worship, only to rip her apart again with a new fiery pulse of condensed, sizzling-hot vibrance, indescribable, unimaginable, each second an unreached peak, a new high, a new universe of fantastic pleasure, a new, unspeakably wonderful Joanna, loved and pulsing in her own Glowing light with a beauty unmatched by any other thing in all of the World Tree. She was a giant beating heart that was also a Goddess, Glowing and pulsing in the center of Everything, Glowing with the certainty of absolute affirmation, the purity of absolute perfection, the undeniability of absolute goodness. She spent centuries in seconds, serene yet in screaming ecstasy, non-living yet pulsing raw life force, non-personal yet always Joanna, Joanna in divine totality. It took Joanna a long time to realize she was still breathing, a living child with an actual human body. She had forgotton to breathe, and was sure she would have suffocated by now, but somehow, inexplicably, her body had remembered to keep itself alive without her. Master Melchias had lied: It did hurt, her chin and lip hurt, but the young girl found it was only because she had dropped to the hard stone floor in helpless twitching convulsions, and she had accidentally bitten herself. As promised by the wizard, the small wound was quickly healing. Joanna couldn't get up yet. She had no idea how much time had passed, but she just couldn't move or even open her young eyes yet. She curled up into a fetal position on the cold, hard floor of Melchias' Tower and sobbed uncontrollably. She sobbed and cried, and sobbed, and laughed madly, then sobbed and cried again. They were tears of pure joy.
The Glow wasn't just a normal pleasure spike, like an orgasm, a fit of laughter or a drug high. It went far, far beyond that. Normal human experiences existed within an intensity range that was given by nature. It served to motivate the organism for survival and reproduction, but it was not optimized for the experience itself. Even the most intense experiences, like burning alive or being skinned alive, existed within that ordinary, natural range. But the magic of the Glow didn't just stimulate pleasure within that range - it completely changed the range itself. It broke the scale on which normal experiences were measured, and then attached a vast multitude of additional ones to its top. By enhancing the part of the subject's mind that contained ordinary pleasure, it became temporarily able to experience an intensity that was hundreds of thousands times stronger than even the most extreme natural human feeling. Being drowned in hot oil, being flayed alive or tortured with needles, deep romantic love and fulfillment, orgasmic ecstasy, perfect fits of laughter - all of these human extremes represented only a miniscule fraction of the new potential. And only then did the spell induce raw, optimized pleasure within this new, widened consciousness. The result was an unimaginably pure goodness that fell so far outside of the subject's prior experience that it couldn't even be communicated by words. It had to be demonstrated. Once a potential [...] candidate had perceived even one second of the Glow, each containing more joy and happiness than an average human lifespan, with none of its pain, they all became devoted followers to Melchias. He transformed their experience from something human to something divine, and in turn, he became like a god to them.

Comment by Andaro on “She Wanted It” · 2018-11-20T12:37:09.766Z · LW · GW

Vaniver, your post is eloquent and relevant, yet of course no one gives a shit about that after being downvoted for engaging in a controversial topic in the first place. At that point, all I see is undifferentiated hostility and I'm not going to engage in the cognitive effort to change that view.

It's not even really your fault. I engaged in a conversation of a controversial, moralistic nature without having any strategic selfish reason to do so. That's a bad habit if there ever was one. Alas, humans are not always strategic, and sometimes I need the reminder of what really matters and what doesn't.

From that perspective, domestic abuse is irrelevant. The average abuse victim has never done anything for me to deserve my positive reciprocity. I'm not an abuse victim and if I were, I'd simply take personal revenge. Unless of course the abuser is so valuable to my life that I see them as a net-benefit despite the occasional abuse. Hard but not impossible, which was of course my whole point.

Less Wrong and its community have done little for me. You're not as terrible as EA, and I've gained the occasional useful insight here, but you're still toxic on net, so I'd classify you as minor enemies. Marginally worth harming, but nowhere near the top of my list.

So to sum up, fuck it and good riddance. I actually kind of thank you for the downvotes in this case; this type of negative interaction helps me refocus my perspective and priorities. In fact, I now care slightly less about consent and abuse than I did before this conversation, and that's probably quite rational for my personal values.

Comment by Andaro on “She Wanted It” · 2018-11-19T20:07:05.708Z · LW · GW

I observe that you are communicating in bad faith and with hostility, so I will use my right to exit for any further communication with you.

Comment by Andaro on “She Wanted It” · 2018-11-18T20:10:55.362Z · LW · GW

What? Why? No sane person would classify "he will murder me if I leave" as "the right to exit isn't blocked". I don't expect much steelmanning from the downvote-bots here, but if you're strawmanning on a rationalist board, good-faith communication becomes disincentivized. It's not like I have skin in the game; all my relationships are nonviolent and I neither give a shit about feminism nor anti-feminism.

Still, if "she's such a nice person but sometimes she explodes" isn't compatible with revealed preference for the overall relationship, I don't know what is. My argument was never an argument that such relationships are great or that you should absolutely never use your right to exit. It's just a default interpretation of many relationships that are being maintained even though they contain abuse. Obviously if you're ankle-chained to a wall without a phone, that doesn't qualify as revealed preference. And while I don't object to ways government can buffer against the suffering of homelessness or socioeconomic hardship, it's still a logical necessity that the socioeconomic advantages of a relationship are part of that relationship's attractiveness, just like good pay is a reason for people to stay in shitty jobs: that doesn't violate the concept of revealed preference, it doesn't make those jobs nonconsensual, and it wouldn't necessarily make people better off if those jobs didn't exist.

And by the way, it's right to exit, not right to exist. There's a big difference.

Comment by Andaro on “She Wanted It” · 2018-11-18T12:03:53.693Z · LW · GW

I didn't read the whole post, but most of that is just the right to exit being blocked by various mechanisms, including socioeconomic pressure and violence. And the socioeconomic ones aren't even necessarily incompatible with revealed preference; if the alternative is homelessness, this may suck, but the partner still has no obligation to continue the relationship and the socioeconomic advantages are obviously a part of the package.

Comment by Andaro on Wireheading as a Possible Contributor to Civilizational Decline · 2018-11-13T01:27:50.697Z · LW · GW

>if we are able to wirehead in an effective manner it might be morally obligatory to force them into wireheading to maximize utility.

Not interested in this kind of "moral obligation". If you want to be a hedonistic utilitarian, use your own capacity and consent-based cooperation for it.

Comment by Andaro on Wireheading as a Possible Contributor to Civilizational Decline · 2018-11-12T23:04:05.226Z · LW · GW

I think it's worth making the distinction between reward hacking, pleasure wireheading, and addiction more clearly. There's some overlap, but these are different concepts with different implications for our utility.

The whole ideological subtext reeks of puritan moralism. You imply that we exist to make humanity's future bigger, rather than to do whatever the hell we actually prefer.

As long as pleasure wireheading is consensual, you longtermists can simply forgo your own pleasure wireheading and instead work very hard on the whole growth and reproduction agenda. However, we are not slaves owned by you who owe you labor and financial support for that agenda. If you can't find enough people willing to forgo consensual pleasure-wireheading to build the future you want to build, consider that it may be an indicator that people don't actually see your agenda as worth supporting.

Personally, I'd gladly take a drug that eliminates all my suffering and doubles all my pleasure, even if it drastically reduced my life expectancy. Mere existence isn't everything.

Comment by Andaro on “She Wanted It” · 2018-11-12T19:27:35.136Z · LW · GW

The demand for sexual violence in fiction is easy to explain. It allows us to fantasize about behavior that would be prohibitively disadvantageous in practice, and it allows us to reflect on hypothetical situations that are relevant to our interests, such as how to deal with violent people.

My default model for abusive relationships *where the right to exit is not blocked* is indeed revealed preference. Not necessarily revealed preference for the abuse, but for the total package of goods and bads in the relationship.

The sex and romance market is a market after all, and different individuals have different market power. This is why some people pay for sex, and I'm sure some people accept abuse they would not tolerate from a partner with less market power.

Of course, this isn't true if someone breaks a promise unexpectedly, like ignoring an agreed-upon safe word. That's massive enemy action. But if it happens repeatedly, and the relationship is maintained for longer periods of time even though the right to exit is not blocked and both partners could break it off, my default interpretation is still revealed preference for the total package.

Comment by Andaro on Rationality of demonstrating & voting · 2018-11-09T01:46:28.073Z · LW · GW

>Indeed, as mentioned, without altruism, voting behaviour is fairly inexplicable.

I vote to reward or penalize politicians based on their previous choices, rather than to create better outcomes. That is, I look back, not forward.

There are some exceptions, e.g. when a candidate, before assuming office, sends unusually credible signals, such as glorifying torture or some such. Other than that, I mostly ignore promises, and instead implement reciprocity for past decisions.

Edited after more reflection:

>Whereas the expected benefit of voting to you alone is the Brexit harm to you / 3 million, = $3 trillion / 2 (effect on UK only) / 65 million (UK population) / 3 million = 0.7 cents – illustrating why voting needs at least a tiny bit of altruism to be rational.

This is interesting. I do expect that for things like marginal tax rates, my emotions are scope-insensitive and my reciprocity mostly symbolic/psychological.

However, if I share interests with many other voters who voted for those interests, all of their votes benefited my interests and I can reciprocate not just for/against politicians, but also for/against all these other voters. If I like low tax rates, I can benefit every voter who's voted for low tax rates by voting for low tax rates.

More importantly, some issues have much higher impact on my utility than marginal tax rates. If I could choose between $1 billion personal purchasing power, and the liberty to buy a deadly dose of pentobarbital if/when I choose to die peacefully, I'd take the pentobarbital. Which means that politicians who've reduced the probability that this liberty is legal for me have forced an opportunity cost of over $1 billion on me. Perhaps voting is still not the best way to implement reciprocity in such a case, but outside of direct attacks on ex-politicians, e.g. what the Christians did to Els Borst, it's one of the remaining ways to get back at them and therefore still well worth doing.
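
For concreteness, the calculation quoted above works out as follows (a minimal sketch; the $3 trillion harm estimate, the halving for "effect on UK only", the 65 million population, and the 1-in-3-million chance of a single vote being decisive are all the quoted post's assumptions):

```python
# Back-of-the-envelope check of the quoted expected benefit of voting.
brexit_harm = 3e12                         # $3 trillion total harm
uk_harm = brexit_harm / 2                  # effect on the UK only
harm_per_person = uk_harm / 65e6           # ~ $23,000 per UK resident
expected_benefit = harm_per_person / 3e6   # ~ $0.008 per vote

print(f"{expected_benefit * 100:.2f} cents")  # ~0.77 cents, i.e. the quoted ~0.7 cents
```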

Comment by Andaro on Do Animals Have Rights? · 2018-10-18T23:30:10.500Z · LW · GW

I agree with other commenters that the slavery framing is unhelpful. However, I mostly do agree with Jordan Peterson otherwise.

Human rights set expectations for how we treat each other. From my perspective, respect for them is conditional on reciprocity. I will not respect the rights of an individual who doesn't respect mine. Their function is to set standards of behavior that make everybody better off.

A benefit of human rights, rather than mammal rights or just smaller-identity rights, is that they benefit everyone who can understand the concept, so they're memetically adequate to cover the basics in a globalized world without incurring the huge cost of including the very large number of nonhuman animals. Basically, everybody who can participate in the discussion should be able to agree on the concept - and benefit from that agreement - without having to commit to universal species-independent collectivism.

For this reason, I don't see the suffering of animals as a problem except for empathy management and perhaps creating a culture of anti-cruelty, if we need it for other purposes.

One problem with human rights is that they are not necessarily well-defined in all contexts, and sometimes people can do strategically better by respecting the rights only of a subset of people. A possible solution would be to insist on minimal standards for the very basic expectations, e.g. don't randomly torture or murder people you dislike, while setting higher standards only for subgroups, e.g. citizenship transferring the right to live and work in a certain territory.

Comment by Andaro on We can all be high status · 2018-10-11T22:24:51.403Z · LW · GW

I have no idea what toonalfrink's goals for the conversation are. But when someone writes something like,

>So you find yourself in this volunteering opportunity with some EA's and they tell you some stuff you can do, and you do it, and you're left in the dark again. Is this going to steer you into safe waters? Should you do more? Impress more? Maybe spend more time on that Master's degree to get grades that set you apart, maybe that'll get you invited with the cool kids?

then the only sensible option from my perspective is to take a step back and consider why you're seeking status from this community in the first place, and what motivations go into this behavior. At this point, I think it's well worth reflecting on:

1) Why altruism in the first place?

2) Given 1, why EA?

3) Given 2, why seeking status?

Community norms tend to be self-reinforcing. It's worth pointing out that there are people with a genuinely different perspective, and that this perspective has a reason.

Comment by Andaro on We can all be high status · 2018-10-11T16:15:17.819Z · LW · GW

>We all want to save the world, right?

No. This is your first mistake, I think. You take the ideology's authority for granted. You shouldn't. Dropping altruism outside of self-based reciprocity was the single best decision I have ever made. The world is not worth saving. It's not worth destroying either.

If you're suffering from being low-status in the EA movement, you should not be a part of the EA movement. EA as an ideology has deep flaws, and as a social dynamic, it's outright horrible. Politically, it's parasitic.

The last part is the only part I still care about. I went through a curve from caring about making the world a better place and therefore supporting EA, to wanting to make the world a better place but being skeptical about EA's consequences, to not wanting to make the world a better place.

If EAs weren't politically parasitic, we would be free to simply ignore them, and this would be the correct answer. Unfortunately, we can't ignore them, because they push policies and influence politics in a way that makes us worse off. This is why I'm willing to actively oppose their goals.

I distinguish two aspects of status. One is to feel good about being accepted by others. That's nice, but I don't think it's central. There are many ways to feel good and many options to substitute for acceptance of any particular person or group.

The second aspect is "getting things done". Unfortunately, we live in a world filled with people who can harm us. Coercing or convincing them not to do so is unfortunately an important practical necessity. This is why we can't simply ignore the EA movement, or organized religion, or neonazis or any other ideology that wants to extract value from our lives or limit our personal choices.

I really do recommend that you stop supporting the EA movement. Nothing good will come of it.

Comment by Andaro on Resurrection of the dead via multiverse-wide acausual cooperation · 2018-09-08T17:52:58.040Z · LW · GW

If the required kind of multiverse exists, this leads to all kinds of contradictions.

For example, in some universes, Personal Identity X may have given consent to digital resurrection, while in others, the same identity may have explicitly forbidden it. In some universes, their relatives and relationships may have positive preferences regarding X's resurrection; in others, they may have negative preferences.

Given your assumed model of personal identity and the multiverse, you will always find that shared identities have contradicting preferences. They may also have made contradicting decisions in their respective pasts, which makes multiverse-spanning acausal reciprocity highly questionable. For every conceivable identity, there are instances that have made decisions in favor of your values, but also instances that did the exact opposite.

These problems go away if you define personal identity differently, e.g. by requiring biographical or causal continuity rather than just internal state identity. But then your approach no longer works.

I personally am not motivated to be created in other Everett branches, nor do I extend my reciprocity to acausal variants.

Comment by Andaro on [deleted post] 2018-08-18T11:46:09.588Z

>To be precise, this seems like a cost to Alice of Bob having a wide circle, if Alice and Bob are close. If they aren't, and especially if we bring in a veil of ignorance, then Alice is likely to benefit somewhat from Bob having a wide circle.

Yes, but Alice doesn't benefit from Bob's having a circle so wide it contains nonhuman animals, far future entities or ecosystems/biodiversity for their own sake.

>and my reaction is that none of that stops children from dying of malaria, which is really actually a thing I care about and don't want to stop caring about

The OP asks us to reexamine our moral circle. Having done that, I find that nonhuman animals and far future beings are actually a thing I don't care about and don't want to start caring about.

Comment by Andaro on [deleted post] 2018-08-15T14:39:30.812Z

But an expanding circle of moral concern increases value differences. If I have to pay for a welfare system, or else pay for a welfare system and also biodiversity maintenance and also animal protection and also development aid and also a Mars mission without a business model and also far-future climate change prevention, I'd rather just pay for the welfare system. Other ideological conflicts would also go away, such as the conflict between preventing animal suffering and maintaining pristine nature, ethical natalism vs. ethical anti-natalism, and so on.

Comment by Andaro on [deleted post] 2018-08-15T12:22:24.476Z

Yes, it certainly cuts both ways. Of course, your country's welfare system is also available to you and your family if you ever need it, and you benefit more directly from social peace and democracy in your country, which is helped by these transfers. It is hard to see how you could have a functioning democracy without poor people voting for some transfers, so unless you think democracy has no useful function for you, that's a cost in your best interest to pay.

Comment by Andaro on [deleted post] 2018-08-14T21:50:56.643Z

The moral circle is not ever expanding, and I consider that a good thing.

A very wide moral circle is actually very costly to a person. Not only can it cause a lot of stress to think of the suffering of beings in the far future or nonhuman animals in farming or in the wild, but it also requires a lot of self-sacrifice to actually live up to this expanded circle.

In addition, it can put you at odds with other well-meaning people who care about the same beings, but in a different way. For example, when I still cared about future generations, I mostly cared about them in terms of preventing their nonconsensual suffering and victimization. However, the common far-future altruism narrative is that we ought to make sure they exist, not that they be prevented from suffering or being victimized without their consent. This is cause for conflict, as exemplified by the -25 karma points or so I gathered on the Effective Altruism Forum for it at the time.

Since then, my moral circle has contracted massively, and I consider this to be a huge improvement. It now contains only me and the people who have made choices that benefit me (or at least benefit me more than they harm me). There is also a circle of negative concern now, containing all the people who have harmed me more than they benefit me. I count their harm as a positive now.

My basic mental heuristic is, how much did a being net-benefit or net-harm me through deliberate choices and intent, how much did I already reciprocate in harming or benefitting them, and how cheap or expensive is it for me to harm or benefit them further on the margin? These questions get integrated into an intuitive heuristic that shifts my indifference curves for everyday choices.

The psychological motivation for this contracted circle is based on the simple truth that the utility of others is not my utility, and the self-awareness that I have an intrinsic desire for reciprocity.

There is yet another cost to a wide circle of moral concern, and that is the discrepancy with people who have a smaller circle. If you're my compatriot or family member or fellow present human being, and you have a small circle of concern, I can expect you to allocate more of your agency to my benefit. If you have a wide circle of concern that includes all kinds of entities who can't reciprocate, I benefit less from having you as an ally.

When people have a wide circle of concern and advocate for its widening as a norm, this makes me nervous because it implies huge additional costs forced on me, through coercive means like taxation or regulations, or simply by spreading benevolence onto a large number of non-reciprocators instead of me and the people who've benefitted me. That actually makes me worse off, and people who make me worse off are more likely to receive negative reciprocity rather than positive reciprocity.

I love human rights because they're a wonderful coordination instrument that makes us all better off, but I now see animal rights as a huge memetic mistake. Similarly, there is little reason to care about far-future generations whose existence is never going to overlap with any of us in terms of reciprocity, and yet we're surrounded by memes that require we pay massive costs to their wellbeing.

Moralists who advocate this often use moralistic language to justify it. This gives them high social status and it serves as an excuse to impose costs on people who don't intrinsically care, like me. If I reciprocate this harm against them, I am instantly a villain who deserves to be shunned for being a villain. This dynamic has made me understand the weird paradoxical finding that some people punish what ostensibly seems to be prosocial behavior. Moralism can really harm us, and the moralists should be forced to compensate us for this harm.