Comment by isnasene on Egoism In Disguise · 2019-06-02T06:45:22.583Z · score: 1 (1 votes) · LW · GW

As someone leaning moral anti-realist, I lean roughly towards saying that it's true but harmful. To be more nuanced, I don't think it's necessarily harmful to believe that moral preferences are like ordinary preferences, but I do think that treating moral preferences like other preferences leads to a worse equilibrium.

If we could accept egoism without treating our own morals differently, then I wouldn't have a problem with it. However, I think that a lot of the bad (more defection) parts of how we judge other people's morals are intertwined with good (less defection) ways that we treat our own.

Can you elaborate on what you don't understand about value drift applied to divergent starting values?


Comment by isnasene on Egoism In Disguise · 2019-06-01T05:33:00.026Z · score: 1 (1 votes) · LW · GW

While I agree with your philosophical claims, I am dubious about whether treating moral beliefs as "mere preferences" would produce positive outcomes. This is because:

1. Psychologically, I think treating moral beliefs like other preferences will increase people's willingness to value-drift, or even to deliberately induce value-drift for personal reasons.

2. Regardless of whether we view moral preferences as egoism, we will treat people's moral preferences differently than their other preferences because this preserves cooperation norms.

3. These cooperation norms set up a system that makes unnecessary punishment likely regardless of whether we treat morality as egoist. Furthermore, egoist morality might make excessive punishment worse by increasing the number of value-drifters trying to exploit social signalling.

In short, I think egoist perspectives on morality could cause one really big problem (value-drift) without changing the social incentives around excessive punishment (insurance against defection caused by value-drift).

In long, here's a lengthy write-up of my thought process for why I'm suspicious of egoism:

1. It sounds like something that would encourage deliberately inducing value drift

While I agree that utilitarianism (and morality in general) can often be described as a form of egoism, I think that reframing our moral systems as egoist (in the sense of "I care about this because it appeals to me personally") dramatically raises the likelihood of value-drift. We can often reduce the strength of common preferences through psychological exercises or simply by getting used to different situations. If we treat morals as less sacred and more like things that appeal to us, we are more likely to address high-cost moral preferences (e.g. caring about animal welfare enough to become vegetarian or to feel alienated by meat-eaters) by simply deciding the preference causes too much personal suffering and getting rid of it (e.g. by just deciding to be okay with the routine suffering of animals).

Furthermore, from personal experimentation, I can tell you that the above strategy does in fact work and individuals can use it to raise their level of happiness. I've also discussed this on my blog where I talk about why I haven't internalized moral anti-realism.

2. On a meta-level, we should aggressively punish anyone who deliberately induces value-drift

I try to avoid such strategies now, but only because of a meta-level belief that deliberately causing value-drift to improve your emotional well-being is morally wrong (or at least wrong in a more sacred way than most preferences). Naively, I could accept the egoist interpretation that not deliberately inducing value-drift is just a preference for how I want the world to work and throw it away too (which would be easy because meta-level preferences are more divorced from intuition than object-level ones). However, this meta-level belief about not inducing value-drift has some important properties:

1. "Don't modify your moral preferences because it would personally benefit you" is a really important and probably universal rule in the Kantian sense. The extent to which this rule is believed constrains (so long as competition exists) the extent to which whatever group just got into power is willing to exploit everyone else. In other words, it puts an upper bound on the extent to which people are willing to defect against other people.

2. With the exception of public figures, it is very hard to measure whether someone is selfishly inducing value-drift or simply changing their minds.

From 1., we find that there is great societal value in treating moral preferences as different from ordinary preferences. From 2., we see that enforcing this is really hard. This means that we probably want to approach this issue using The First Offender model and commit to caring a disproportionate amount about stopping people from deliberately inducing value-drift. Practically, we see this commitment emerge when people accuse public figures of hypocrisy. We can also see that apathy towards hypocrisy claims is often couched in the idea that "all public figures are rotten"--an expression of mutual defection.

3. Punishing people with different moral beliefs makes value-drift harder

Because we want to aggressively dissuade value-drift, aggressive punishment is useful not solely to disincentivize a specific moral failing but also to force the punisher to commit to a specific moral belief. This is because:

* People don't generally like punishing others, so punishing someone is a signal that you really care a lot about the thing they did.

* People tend to like tit-for-tat "do unto others as they do unto you" strategies, and punishing someone for an action makes you vulnerable to being similarly punished. This is mostly relevant at local and social levels of interaction rather than ones with bigger power gradients.

I personally don't really like these strategies since they probably cause more suffering than the value they provide in navigating cooperation norms. Moreover, the only reason people would benefit from being punishers in this context is if committing to a particular belief will raise their status. This disproportionately favors people with consensus beliefs but, more problematically, it also favors people who don't have strong beliefs but could socially benefit from adopting them (i.e. value-drifters), so long as they never change again. Consequently, I think that mainstreaming egoist attitudes about morality would promote value-drift in a way that makes the mechanisms that prevent value-drift perform worse.

Conclusion

The above is a lot but, for me, it's enough to be very hesitant about the idea of being explicitly egoist. I think egoism could cause a lot of problems with moral value-drift. I also think that the issue of excessive punishment due to moral judgement has deeper drivers than whether humanity describes morality as egoist. I think a better solution would probably just be to directly emphasize "no one deserves to suffer" attitudes in morality.

Comment by isnasene on Intuitions on Negative Utilitarianism · 2019-03-25T07:00:25.807Z · score: 1 (1 votes) · LW · GW

I see all of the situations you describe in the first paragraph as being on the same lexical order. I agree that effects of experiences on identity could factor into how I value things in different ways but these effects would be lexically less important than the issues that I'm worried about. For the sake of the thing I'm trying to discuss, I'd evaluate each of those trades as being the same.

As an aside, here's a more explicit summary of where I think my "a day in hell cannot be outweighed" intuition comes from--there are two things going on:

1. First, I have difficulty coming up with any upper bound on how bad the experience of being in hell is.

But this does not preclude me accepting a day in hell in exchange for 1000 years of heaven, so long as heaven is appropriately defined. Just as there are situations that I cannot treat as finitely unpleasant, there are also (in principle) situations that I cannot treat as finitely pleasant, and I am willing to trade those off. As I also say in that section, I generally give more credence to this utilitarian approach than to a strictly negative utilitarian one. However...

2. I have a better object-level understanding of really bad suffering than of comparably intense happiness.

which is essentially where my "a day in hell cannot be outweighed" intuition comes from. It's not just the idea that the instantaneous experience integrated over time in hell is an unboundedly high number relative to some less intense suffering; it's also the physical reality that I grasp (on an intuitive level) how bad hell can be but lack a similar grasp on how good heaven can get. This is just the intuition though--and it's not something I rationally stand by:

Given both that anthropocentric biases explain devaluing arbitrarily good experiences, and that I prefer ethical systems that are not species-specific, I give more credence to general forms of utilitarianism that allow for arbitrarily intense positive or negative experiences than to more limited forms of negative utilitarianism. --me

Comment by isnasene on Intuitions on Negative Utilitarianism · 2019-03-18T17:06:10.413Z · score: 1 (1 votes) · LW · GW

Something like this sounds at first qualitatively similar to what I have in mind but isn't really representative of my thought process. Here are some key differences/clarifications that would help convey my thought process:

1. Clarify that U=happiness-tan(suffering) applies to each individual's happiness and suffering (and then the global utility function is calculated by summing over all people) rather than to the universe's total suffering and total happiness, as I talk about here. People often talk about this implicitly but I think being clear about this is useful.

2. I don't want a utility function that ends with something just going to infinity because it can get confused when asked questions like "Would you prefer this infinitely bad thing to happen for five minutes or ten minutes?" since both are infinite. This is why value-lexicality as shown in figure 1b is important. Many different events can be infinitely worse than other things from the inside-view and it's important that our utility function is capable of comparing between them.

3. Clarify what is meant by "happiness" and "suffering." As I mention here, I agree with Ord's Worse-For-Everyone argument, and metrics of happiness and suffering are often literally metrics of how much we should value or disvalue an experience--tautologically implying that any utility function of them should be linear, always and regardless of intensity. Going by this definition, I would never claim that some finite amount of suffering should be treated as infinitely bad, as tan(suffering) would suggest. Instead, my intuition is essentially that, from the inside view, certain experiences are perceived as involving infinitely bad (or lexically worse) suffering, so that if our definition of suffering is based on the inside view (which I think is reasonable), then the amount of suffering experienced can become infinite. I don't value a finite amount of suffering infinitely; I just think that suffering of effectively infinite magnitude might be possible.

Alternatively, we could define happiness and suffering in terms of physical experiences rather than something subjective. My utility function for experience E would then probably look more like asking what (and how much) a person would be willing to experience in order to make E stop. This could be approximated by something like U=happiness-tan(suffering) if the physical experience is mapped onto an appropriate domain (a rough formalization is sketched below). For example, if suffering represents an above-room-temperature temperature that the person is subjected to for five hours, the disutility might look locally like -tan(suffering) for an appropriate temperature range of maybe 100-300 degrees Fahrenheit. But this kind of claim is more of an empirical statement about how I think about suffering than it is the actual way I think about suffering.
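To be concrete, here is a minimal sketch of what I mean, combining the per-individual aggregation from point 1 with the tan-style approximation above; the symbols $h_i$, $s_i$, $s_{\min}$, and $s_{\max}$ are placeholders I'm introducing for illustration, not part of any standard formulation:

$$U_{\text{total}} \;=\; \sum_i \big[\, h_i - f(s_i) \,\big], \qquad f(s_i) \;=\; \tan\!\left(\frac{\pi}{2}\cdot\frac{s_i - s_{\min}}{s_{\max} - s_{\min}}\right) \quad \text{for } s_i \in [s_{\min},\, s_{\max})$$

Here $h_i$ and $s_i$ are person $i$'s happiness and physically defined suffering, and $s_{\min}$, $s_{\max}$ bracket the relevant domain (e.g. roughly 100-300 degrees Fahrenheit in the temperature example), so that disutility grows without bound as $s_i$ approaches $s_{\max}$.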

Intuitions on Negative Utilitarianism

2019-03-18T01:11:25.129Z · score: 7 (4 votes)
Comment by isnasene on Effective Altruism Book Review: Radical Abundance (Nanotechnology) · 2018-10-16T22:04:39.562Z · score: 3 (2 votes) · LW · GW

Given the choice between an opponent with APM armaments and a standard infrastructure, or an opponent with standard armaments and an APM infrastructure, the latter is a greater military threat.

I'm not knowledgeable enough about modern military affairs to be certain of anything, but I currently agree with this overall--though it depends on how advanced the APM tech is. The main advantages that APM provides generally fall into the categories of increasing the resources we can use and increasing the precision of our equipment. For the most part, I have difficulty envisioning increasingly high-quality/precise versions of any APM-based conventional weapon making a bigger wartime difference than APM's potential ability to make large numbers of lower-quality weapons while providing cheap vehicles for troops.

Some caveats to this exist, though, for unconventional weapons that do rely heavily on precise design. In particular, the ability to make efficient, cheaply produced autonomous drones and transport vehicles for them is a massive logistical advantage. Between reducing the risk to human life and allowing local recuperation of energy***, these sorts of weapons should quickly overwhelm weaponized civilian equipment that lacks those advantages. That being said, if APM factories for certain civilian uses like power generation are included in APM infrastructure, I can envision a less tech-advanced opponent putting up a difficult enough fight to make war unwinnable. This is one of the points against military risk being low.

Still, APM infrastructure and APM weapons aren't an either/or situation. More frequently, the strong nations will have both. This is one of the points in favor of military risk being low.

While APM-hybrid equipment may be inferior to APM-optimized equipment, it will probably still be superior to standard equipment.

I agree with this but suspect that the weaponry possibilities that APM opens up reach far beyond the purview of upgraded civilian tech to the extent that these improvements may not matter so much. This point also applies to the question of providing APM tech to belligerents--it depends on what the tech is.

The role of logistics (which is to say, infrastructure) in modern military affairs is widely underappreciated, and I have no reason to suspect Drexler of any particular expertise in this area.

Overall, I think the crux of the impact of APM lies in how accessible APM tech will be in the future. Because geographical constraints likely won't be a limiting factor (see PeterMcCluskey's response), this comes down to how comfortable APM nations feel about giving APM to others. This means that the military risks of APM are likely one of the key factors in determining its overall impact. In Drexler's defense, it's hard to predict the details of a massive change like APM, and logistics often comes down to small details that become very uncertain in high-variance situations.

***Unlike nuclear weapons, abandoned or broken APM weapons have a relatively high risk of being salvaged for parts. Even if it provides a logistic advantage, I'm not sure that an APM nation would be willing to send very efficient motors/engines/energy harvesters into foreign territory, for fear that the opponent takes those parts and hooks them into a weapon, since those simple sorts of improvements can be a nightmare to combat. This argument also applies to providing other countries with APM energy sources and motor fabrication plants.

Comment by isnasene on Effective Altruism Book Review: Radical Abundance (Nanotechnology) · 2018-10-16T20:50:23.585Z · score: 1 (1 votes) · LW · GW

I recently had a conversation with Christine Peterson and she pointed out the same thing. Imagine! A materials engineer who forgot that there was nitrogen in the air!

After some thinking, I also believe that nitrogen deficiency in Africa is not the concern that I indicated it was in the paper. I also agree that with sufficiently advanced technology (even when that technology is limited by physical laws), one should be able to overcome any geographical constraint in principle. This is LessWrong and, thus, I must say that I was wrong about this.

Given that we can construct APM factories anywhere--something we likely can do once APM tech is achieved (though I suspect international interests would want to keep the APM tech used to create APM factories relatively inaccessible)--and that they can be maintained (again, something that APM can also ensure), I don't expect geography to be an issue.

I'm now much more confident that APM will lead to dramatic economic improvements in the places where we most care about them. There are still some practical considerations (e.g. ensuring that people/machines capable of troubleshooting the factories are available wherever the factories are), but these are readily addressable.

Effective Altruism Book Review: Radical Abundance (Nanotechnology)

2018-10-14T23:57:36.099Z · score: 48 (12 votes)