Comments

Comment by adriano_mannino on Crossing the experiments: a baby · 2013-08-16T18:12:17.587Z · score: 1 (1 votes) · LW · GW

I'm not sure if it's possible to separate "felt intensity" from "intensity of desire". (I don't know what pain/suffering without a desire that it not exist would be.) But however that may be, your point doesn't seem to settle the population-ethical issue: If we look at hedonic desires (weighted by intensity), should we maximize [fulfilled desires - unfulfilled desires] or minimize [unfulfilled desires]? A desire can be considered a problem to be solved. If we want to solve the world's problems (which motivation seems to underlie what many people are doing), does it make more sense to minimize unsolved problems or to create as many solved problems as possible? - I think clearly the former, for the non-existence of problems (and thus of solved problems) does not intrinsically pose a problem.
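
To put the two candidate objectives side by side (a rough formalization - the symbols are only illustrative shorthand, with f_i and u_i standing for the intensities of fulfilled and unfulfilled hedonic desires of a consciousness-moment i):

\[
\text{(a)}\ \max \sum_i \left( f_i - u_i \right) \qquad \text{vs.} \qquad \text{(b)}\ \min \sum_i u_i
\]

On view (b), adding a new consciousness-moment whose desires are all fulfilled leaves the objective unchanged - which is exactly the sense in which the non-existence of (solved) problems poses no problem.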

Why isn't it all one scale of "felt hedonic intensity"? If it were all one scale, it seems that placing the 0-point would be a purely formal and arbitrary matter. But we agree that it's not - so there seems to be something substantial going on when hedonic tone changes from "pleasurable" to "painful". We're not sliding along a scale of more/less of the same thing - at some point, the thing in question changes. Suppose I grant you that there is a way of comparing pleasure- and pain-intensities: "Here's a pain of intensity x, and there's something that's a pleasure and has the same intensity x." Now how are you going to establish that x-intensity of that other thing (pleasure) morally outweighs x-intensity of pain? Maybe it's 2x-intensity of the other thing? How's that choice not arbitrary? As Lukas said, the choice seems to be based on how much you crave the other thing (and its greater intensity), i.e. on how much of a problem its absence is to you. And this brings us back to minimizing unsolved problems, it seems.

Comment by adriano_mannino on Four Focus Areas of Effective Altruism · 2013-08-11T02:30:36.400Z · score: 4 (6 votes) · LW · GW

Sorry for the delay. - I should have been more precise. Let me try to be, by commenting on the cases you mention:

  • The injured pet case probably involves three complications: (1) people's belief that it's an "egoistic" case for the pet (instead of it being an "altruistic" trade-off case between pet-consciousness-moments), (2) people's suffering that would result from the death/future absence of the pet, and (3) people's intuitive leaning towards an (ideal) preference view (the pet "wants" to survive, right? - that would be compatible with a negative(-leaning) view in population ethics according to which what (mainly) matters is the avoidance of unfulfilled preferences).

  • It's clear that evolved beings will have a massive bias against the idea of childbirth being morally problematic (whatever its merits). Also, people would themselves suffer/have thwarted preferences as a result of childlessness.

  • People consider death an "egoistic" case - a harm to the being that died. I think that's confused. Death is the non-birth of other people/people-moments.

  • People usually don't "favor" it in the sense of considering it morally important/urgent/required. They tend to think it's fine if it's an "OK deal" for the being that comes into existence (here again: "ego"-case confounder). By contrast, they think it's morally important/urgent not to bring miserable children into existence. And again, we should abstract from childbirth and genealogical continuation (where massive biases are to be expected). So let's take the more abstract case of Omelas: Would people be willing to create miserable lives in conjunction with a sufficiently great number of intensely pleasurable lives from scratch, e.g. in some remote part of the universe (or in a simulation)? Many would not. With the above confounders ("egoism" and "personal identity", circumstantial suffering, and a preference-based intuitive axiology) removed and the population-ethical question laid bare, many people would not side with you. One might object: Maybe they agree that Omelas is much more intrinsically valuable than non-existence, but they accept deontological side-constraints against actively causing miserable lives, which is why they can't create it. But in that case they shouldn't prevent it if it arose naturally. Here again, though, my experience is that many people would in fact want to prevent abstract Omelas scenarios. Or they're at least uncertain about whether letting even Omelas (!) happen is OK - which implies a negative-leaning population axiology.

Comment by adriano_mannino on Four Focus Areas of Effective Altruism · 2013-08-10T15:27:13.395Z · score: 2 (2 votes) · LW · GW

Not so sure. Dave believes that pains have an "ought-not-to-be-in-the-world-ness" property that pleasures lack. And in the discussions I have seen, he indeed was not prepared to accept that small pains can be outweighed by huge quantities of pleasure. Brian was oscillating between NLU (negative-leaning utilitarianism) and NU (negative utilitarianism). He recently told me he found the claim convincing that such states as flow, orgasm, meditative tranquility, perfectly subjectively fine muzak, and the absence of consciousness were all equally good.

Comment by adriano_mannino on Four Focus Areas of Effective Altruism · 2013-08-04T02:07:43.606Z · score: 1 (3 votes) · LW · GW

Regarding "people's ordinary exchange rates", I suspect that in cases people clearly recognize as altruistic, the rates are closer to Brian's than to yours. In cases they (IMO confusedly) think of as "egoistic", the rates may be closer to yours. - This provides an argument that people should end up with Brian upon knocking out confusion.

Comment by adriano_mannino on Four Focus Areas of Effective Altruism · 2013-08-04T02:00:33.321Z · score: 1 (3 votes) · LW · GW

Also, still others (such as David Pearce) would argue that there are reasons to favor Brian's exchange rate. :)

Comment by adriano_mannino on Four Focus Areas of Effective Altruism · 2013-08-04T01:52:55.381Z · score: 6 (6 votes) · LW · GW

One important reason they like to discuss them is the fact that many people just assume, without adequate consideration and argument, that the future will be hugely net positive. Which comes as no surprise, given the existence of relevant biases.

Whether negative utilitarians believe that "there isn't any possible positive value" is semantics, I think. The framing you suggest is probably a semantic (and thus bad) trigger of the absurdity heuristic. With equal substantive justification and more semantic charity, one could say that negative utilitarians believe that the absence of suffering/unfulfilled preferences or suffering-free world-states have positive value (and one may add either that they believe that the existence of suffering/unfulfilled preferences has negative value or that they believe there isn't any possible negative value).

Comment by adriano_mannino on Four Focus Areas of Effective Altruism · 2013-08-03T06:55:49.250Z · score: 8 (10 votes) · LW · GW

Yes, it can. But a Singleton is not guaranteed; and conditional on the future existence of a Singleton, friendliness is not guaranteed. What I meant was that astronomical population expansion clearly produces, in expectation, an astronomical number of extremely miserable, tortured lives.

Lots of dystopian future scenarios are possible. Here are some of them.

How many happy people for one miserable existence? - I take the zero option very seriously because I don't think that (anticipated) non-existence poses any moral problem or generates any moral urgency to act, while (anticipated) miserable existence clearly does. I don't think it would have been any intrinsic problem whatsoever had I never been born; but it clearly would have been a problem had I been born into miserable circumstances.

But even if you do believe that non-existence poses a moral problem and creates an urgency to act, it's not clear yet that the value of the future is net positive. If the number of happy people you require for one miserable existence is sufficiently great and/or if dystopian scenarios are sufficiently likely, the future will be negative in expectation. Beware optimism bias, illusion of control, etc.
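
One way to make that explicit (a rough sketch, with r standing for the number of happy lives you would require to offset one miserable life): the expected value of the future is positive only if

\[
\mathbb{E}[\text{happy lives}] \;>\; r \cdot \mathbb{E}[\text{miserable lives}],
\]

so a sufficiently large r, or a non-trivial probability of dystopian outcomes inflating the right-hand side, makes the future negative in expectation.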

Comment by adriano_mannino on Four Focus Areas of Effective Altruism · 2013-08-03T06:23:33.763Z · score: 5 (7 votes) · LW · GW

Sorry for the delay!

I forgot to clarify the rough argument for why, in this context, (1) "value future people equally" is much less important or crucial than (2) "fill the universe with people".

If you accept (2), you're almost guaranteed to be on board with where Bostrom and Beckstead are roughly going (even if you valued present people more!). It's then hardly possible to block their argument on normative grounds; criticism would have to be empirical, e.g. based on the claim that dystopian futures may be likelier than commonly assumed, which would decrease the value of x-risk reduction.

By contrast, if you accept (1), it's still very much an open question whether you'll be on board.

Also, intrinsic time preference is really not an issue among EAs. The idea that spatial and temporal distance are irrelevant when it comes to helping others is a pretty core element of the EA concept. What is an issue, though, is the question of what helping others actually means (or should mean). Who are the relevant others? Persons? Person-moments? Preferences? And how are they relevant? Should we ensure the non-existence of suffering? Or promote ecstasy too? Prevent the existence of unfulfilled preferences? Or create fulfilled ones too? Can you help someone by bringing them into existence? Or only by preventing their miserable existence/unfulfilled preferences? These issues are more controversial than the question of time preference. Unfortunately, they're of astronomical significance.

I don't really know if I'm suggesting any further specific change to the wording - sorry about that. It's tricky... If you're speaking to non-EAs, it's important to emphasize the rejection of time preference. But there shouldn't be a "therefore", which (in my perception) is still implicitly there. And if you're speaking to people who already reject time preference, it's even more important to make it clear that this rejection doesn't imply "fill the universe with people". One solution could be to simply drop the reference to the (IMO non-decisive) rejection of time preference and go for something like: "Many EAs consider the creation of (happy) people valuable and morally urgent, and therefore think that nearly all potential value..."

Beckstead might object that the rejection of heavy time preference is important to his general conclusion (the overwhelming importance of shaping the far future). But if we're talking about that level of generality, then the reference to x-risk reduction should probably go or be qualified. Sufficiently negative-leaning EAs (such as Brian Tomasik), after all, believe that x-risk reduction is net negative.

Perhaps the best solution would be to expand the section and start by mentioning how the (EA-uncontroversial) rejection of time preference is relevant to the overwhelming importance of shaping the far future. Once we've established that the far future likely dominates, the question arises of how we should morally affect it. Depending on the answer, very different conclusions can result, e.g. with regard to the importance and even the sign of x-risk reduction.

Comment by adriano_mannino on Four Focus Areas of Effective Altruism · 2013-08-03T04:22:52.685Z · score: 6 (6 votes) · LW · GW

Hi Nick, thanks! I do indeed fully agree with your general conclusion that what matters most is making our long-term development go as well as possible. (I had something more specific in mind when speaking of "Bostrom's and Beckstead's conclusions" here, sorry about the confusion.) In fact, I consider your general conclusion very obvious. :) (What's difficult is the empirical question of how to best affect the far future.) The obviousness of your conclusion doesn't imply that your dissertation wasn't super-important, of course - most people seem to disagree with the conclusion. Unfortunately and sadly, though, the utility of talking about (affecting) the far future is a tricky issue too, given fundamental disagreements in population ethics.

I don't know that the "like most people would" parenthesis is true. (A "good thing" maybe, but a morally urgent thing to bring about, if the counterfactual isn't existence with less well-being, but non-existence?) I'd like to see some solid empirical data here. I think some people are in the process of collecting it.

Do you not argue for that at all? I thought you were going in the direction of establishing an axiological and deontic parallelism between the "wretched child" and the "happy child".

The quoted passage ("all potential value is found in [the existence of] the well-being of the astronomical numbers of people who could populate the far future") strongly suggests a classical total population ethics, which is rejected by negative utilitarianism and person-affecting views. And the "therefore" suggests that the crucial issue here is time preference, which is a popular and incorrect perception.

Comment by adriano_mannino on Arguments Against Speciesism · 2013-07-28T22:43:17.518Z · score: 8 (10 votes) · LW · GW

What about a random human instead of your grandmother? What if the human's/your grandmother's cognitive capacities were lower than the dog's or the chimp's? – What would a good altruist do?

How do you block the "chain of comparables"?

Comment by adriano_mannino on Arguments Against Speciesism · 2013-07-28T22:37:36.705Z · score: 8 (10 votes) · LW · GW

Good question, shminux. Another way of putting it: If cows and chickens don't count, why have any animal protection laws? Their guiding principle is usually the avoidance of unnecessary animal suffering. And if we agree that eating animals (1) causes animal suffering and is (2) unnecessary because we can have animal-free foods that are equally tasty, then the guiding principle of the agreed-upon animal protection laws already implies that we should stop farming chickens and cows.

Note also that the Three Rs – which guide animal testing in many countries – reaffirm the above principle. Many believe it should be illegal to cause any animal suffering if it's unnecessary, i.e. if there is an acceptable alternative for the purpose. (And if there is not, we are under an obligation to try and create one.) If we take seriously what almost everybody accepts when it comes to animal testing, we should stop farming animals.

It seems that the arguments for according non-human animals a very important place in our practical ethics can only be blocked by claiming that they/their suffering matters zero. If it matters just a little, the aggregate animal suffering is still likely to matter a lot. And even if we are inclined to believe that it matters zero, we should retain some non-negligible uncertainty, at least if our view (like Jeff's) is based on the claim that some not-really-understood (!) combination of suffering with self-awareness, intelligence or other preferences is what makes for moral badness. If we are wrong on this one, the consequences will be catastrophic. We should take this into account.

Comment by adriano_mannino on Four Focus Areas of Effective Altruism · 2013-07-19T04:32:31.608Z · score: 7 (7 votes) · LW · GW

Yeah, I've read Nick's thesis, and I think the moral urgency of filling the universe with people is a more important basis for his conclusion than the rejection of time preference. The sentence suggests that the rejection of time preference is what's most important.

If I get him right, Nick agrees that the time issue is much less important than you suggested in your recent interview.

Sorry to insist! :) But when you disagree with Bostrom's and Beckstead's conclusions, people immediately assume that you must be valuing present people more than future ones. And I'm constantly like: "No! The crucial issue is whether the non-existence of people (where there could be some) poses a moral problem, i.e. whether it's morally urgent to fill the universe with people. I doubt it."

Comment by adriano_mannino on Four Focus Areas of Effective Altruism · 2013-07-19T01:08:48.927Z · score: 9 (9 votes) · LW · GW

Is it the "bringing into existence" and the "new" that suggests presentism to you? (Which I also reject, btw. But I don't think it's of much relevance to the issue at hand.) Even without the "therefore", it seems to me that the sentence suggests that the rejection of time preference is what does the crucial work on the way to Bostrom's and Beckstead's conclusions, when it's rather the claim that it's "morally urgent/required to cause the existence of people (with lives worth living) that wouldn't otherwise have existed", which is what my alternative sentence was meant to mean.

Comment by adriano_mannino on Four Focus Areas of Effective Altruism · 2013-07-15T21:16:32.420Z · score: 6 (8 votes) · LW · GW

OK, but why would the sentence start with "Many EAs value future people roughly as much as currently-living people" if there weren't an implied inferential connection to nearly all value being found in astronomical far future populations? The "therefore" is still implicitly present.

It's not entirely without justification, though. It's true that the rejection of a (very heavy) presentist time preference/bias is necessary for Bostrom's and Beckstead's conclusions. So there's weak justification for your "therefore": The rejection of presentist time preference makes the conclusions more likely.

But it's by no means sufficient for them. Bostrom and Beckstead need the further claim that bringing new people into existence is morally important and urgent.

This seems to be the crucial point. So I'd rather go for something like: "Many (Some?) EAs value/think it morally urgent to bring new people (with lives worth living) into existence, and therefore..."

The moral urgency of preventing miserable lives (or life-moments) is less controversial. People like Brian Tomasik place much more (or exclusive) importance on the prevention of lives not worth living, i.e. on ensuring the well-being of everyone that will exist rather than on making as many people exist as possible. The issue is not whether (far) future lives count as much as lives closer to the present. One can agree that future lives count equally, and also agree that far future considerations dominate the moral calculation (empirical claims enter the picture here). But one may disagree on "Omelas and Space Colonization", i.e. on how many lives worth living are needed to "outweigh" or "compensate for" miserable ones (which our future existence will inevitably also produce, probably in astronomical numbers, assuming astronomical population expansion). So it's possible to agree that future lives count equally and that far future considerations dominate but to still disagree on the importance of x-risk reduction or more particular things such as space colonization.

Comment by adriano_mannino on Four Focus Areas of Effective Altruism · 2013-07-13T03:10:21.797Z · score: 10 (12 votes) · LW · GW

Thanks, Luke, great overview! Just one thought:

"Many EAs value future people roughly as much as currently-living people, and therefore think that nearly all potential value is found in the well-being of the astronomical numbers of people who could populate the far future"

This suggests that alternative views are necessarily based on ethical time preference (and time preference seems irrational indeed). But that's incorrect. It's possible to care about the well-being of everyone equally (no matter their spatio-temporal coordinates) without wanting to fill the universe with happy people. I think there is something true and important about the slogan "Make people happy, not happy people", although explaining that something is non-trivial.

Comment by adriano_mannino on [Link] Values Spreading is Often More Important than Extinction Risk · 2013-04-07T14:21:07.353Z · score: 7 (11 votes) · LW · GW

The "occasional minor pains" example is problematic because it brings in the question of aggregation too - and respective problems are not specific to NU. If NUs have to claim that sufficiently many minor pains are worse than torture, then that holds for CUs too. So the crucial issue is whether the non-existence of pleasure poses any problem or not, and whether the idea of pleasure "outweighing" pain that occurs elsewhere in space-time makes sense or not.

It's clear what's problematic about a decision to turn rocks into suffering - it's a problem for the resulting consciousness-moments. On the other hand, it's not clear at all what should be problematic about a decision not to turn rocks into happiness. In fact, if you do away with the idea that non-existence poses a problem, then the NU implications are perfectly intuitive.

Regarding Ord's intuitive counterexamples: It's unclear what their epistemic value is; and if they have any, CU seems to be subject to counterexamples that many would deem even worse. How many people would go along with the claim that perfect altruists would torture any finite number of people if that would turn a sufficient number of rocks into "muzak and potatoes" (cf. Ord) consciousness-seconds? As for "making everyone worse off": Take a finite population of people experiencing superpleasure only; now torture them all; add any finite number of tortured people; and add a sufficiently large number of people with lives barely worth living (i.e. one more pinprick and non-existence would be better). - Done. And this makes you a good altruist according to CU.
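
To see that this construction goes through on classical total utilitarianism, here is a toy calculation (the welfare numbers are purely illustrative): start with n people at welfare +100 each, torture them all down to -50, add m further tortured people at -50, and add k people at welfare +ε (barely worth living). The totals compare as

\[
100n \;<\; -50n \;-\; 50m \;+\; k\epsilon \quad\Longleftrightarrow\quad k\epsilon \;>\; 150n + 50m,
\]

which holds for any finite n and m once k is large enough - so CU ranks the second world above the first even though every original person is now worse off.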

Comment by adriano_mannino on CEV: a utilitarian critique · 2013-01-30T12:10:31.006Z · score: 1 (1 votes) · LW · GW

It would indeed be bad (objectively, for the world) if, deep down, we did not really care about the well-being of all sentience. By definition, there would then be some sentience that ends up having a worse life than it could have had. This is an objective matter.

Yes, it is what I value, but not just that. The thing is that if you're a non-utilitarian, your values don't correspond to the value/s there is/are in the world. If we're working for CEV, we seem to be engaged in an attempt to make our values correspond to the value/s in the world. If so, we're probably going wrong with CEV.

Comment by adriano_mannino on CEV: a utilitarian critique · 2013-01-30T11:18:07.858Z · score: 0 (2 votes) · LW · GW

I don't think we'd be more clear by saying this, I think we'd be (at least partially) wrong.

Let's compare two worlds: World1 contains a population of pigs that are all constantly superhappy. World2 contains a population of pigs that are all constantly supermiserable. Clearly, World1 is objectively better than World2. If some moral views deny this, they are wrong.

Things can only be good/bad for conscious beings (not for rocks, e.g.). So insofar as the world takes the form of consciousness that gets what's good for it, it's objectively the case that something good has occurred in/for the world.

Comment by adriano_mannino on CEV: a utilitarian critique · 2013-01-29T11:00:54.096Z · score: 4 (6 votes) · LW · GW

Well, if humans can't and won't act that way, too bad for them! We should not model ethics after the inclinations of a particular type of agent, but we should instead try and modify all agents according to ethics.

If we did model ethics after particular types of agent, here's what would result: Suppose it turns out that type A agents are sadistic racists. So what they should do is put sadistic racism into practice. Type B agents, on the other hand, are compassionate anti-racists. So what they should do is diametrically opposed to what type A agents should do. And we can't morally compare types A and B.

But type B is obviously objectively better, and objectively less of a jerk. (Whether type A agents can be rationally motivated (or modified so as) to become more B-like is a different question.)

Comment by adriano_mannino on CEV: a utilitarian critique · 2013-01-28T13:25:05.547Z · score: 5 (11 votes) · LW · GW

Why are you so certain that a population of 0 would be a problem? In fact, there'd be no one for whom it would (could!) be a problem; no one whose values could rate that state of affairs as bad. Would it be a problem if no form of consciousness had ever come into existence? Why would that be problematic?

Comment by adriano_mannino on CEV: a utilitarian critique · 2013-01-28T13:16:27.186Z · score: 1 (1 votes) · LW · GW

What is it that you are strongly motivated to do in this world, then? Are you strongly motivated to reduce/prevent drethelin_tomorrow's suffering, for instance?

Comment by adriano_mannino on CEV: a utilitarian critique · 2013-01-28T13:04:57.997Z · score: 7 (7 votes) · LW · GW

But why run this risk? The genuine moral motivation of typical humans seems to be weak. That might even be true of the people working for human and non-human altruistic causes and movements. What if what they really want, deep down, is a sense of importance or social interaction or whatnot?

So why not just go for utilitarianism? By definition, that's the safest option for everyone to whom things can matter/be valuable.

I still don't see what could justify coherently extrapolating "our" volition only. The only non-arbitrary "we" is the community of all minds/consciousnesses.

Comment by adriano_mannino on CEV: a utilitarian critique · 2013-01-28T12:05:50.534Z · score: 8 (8 votes) · LW · GW

As a matter of fact, I will of necessity treat them as I want to treat them. But I should of course treat them (and it would be good to treat them) as they want to be treated, or as I'd want to be treated in their place.

What makes the values of individual humans important? What makes their frustration a bad thing? It seems that we can basically either hold that not getting what one (really) wants is bad tout court; or we can attempt a further reduction and only incorporate what feels bad/good when we (don't) get it.

In both cases, the focus on human minds is an arbitrary and irrational bias. For to the extent that non-human minds have equally strong wants or experience equally bad/good feelings when they (don't) get what they want, their values (or the values that they can instantiate) are no less important than the values of humans.

Comment by adriano_mannino on CEV: a utilitarian critique · 2013-01-27T20:53:18.783Z · score: 8 (8 votes) · LW · GW

Why would the chicken have to learn to follow the ethics in order for its interests to be fully included in the ethics? We don't include cognitively normal human adults because they are able to understand and follow ethical rules (or, at the very least, we don't include them only in virtue of that fact). We include them because to them as sentient beings, their subjective well-being matters. And thus we also include the many humans who are unable to understand and follow ethical rules. We ourselves, of course, would want to be still included in case we lost the ability to follow ethical rules. In other words: Moral agency is not necessary for the status of a moral patient, i.e. of a being that matters morally.

The question is how we should treat humans and chickens (i.e. whether and how our decision-making algorithm should take them and their interests into account), not what social behavior we find among humans and chickens.

Comment by adriano_mannino on An argument that animals don't really suffer · 2012-08-18T16:45:45.674Z · score: 6 (6 votes) · LW · GW

It's been asserted here that "the core distinction between avoidance, pain, and awareness of pain works" or that "there is such a thing as bodily pain we're not consciously aware of". This, I think, blurs and confuses the most important distinction there is in the world - namely the one between what is a conscious/mental state and what is not. Talk of "sub-conscious/non-conscious mental states" confuses things too: If it's not conscious, then it's not a mental state. It might cause one or be caused by one, but it isn't a mental state.

Regarding the concept of "being aware of being in pain": I can understand it as referring to a second-order mental state, a thought with the content that there is an unpleasant mental state going on (pain). But in that sense, it often happens that I am not (second-order) aware of my stream of consciousness because "I" am totally immersed in it, so to speak. But the absence of second-order mental states does not change the fact that first-order mental states exist and that it feels like something (and feels good or bad) to be in them (or rather: to be them). The claim that "no creature was ever aware of being in pain" suggests that for most non-human animals, it doesn't feel like anything to be in pain and that, therefore, such pain-states are ethically insignificant. As I said, I reject the notion of "pain that doesn't consciously feel like anything" as confused: If it doesn't feel like anything, it's not a mental state and it can't be pain. And there is no reason for believing that first-order (possibly painful and thus ethically significant) mental states require second-order awareness. At the very least, we should give non-human animals the benefit of the doubt and assign a significant probability to their brain states being mental and possibly painful and thus ethically significant.

Last but not least, there is also an argument (advanced e.g. by Dawkins) to the effect that pain intensity and frequency might even be greater in less intelligent creatures: "Isn't it plausible that a clever species such as our own might need less pain, precisely because we are capable of intelligently working out what is good for us, and what damaging events we should avoid? Isn't it plausible that an unintelligent species might need a massive wallop of pain, to drive home a lesson that we can learn with less powerful inducement? At very least, I conclude that we have no general reason to think that non-human animals feel pain less acutely than we do, and we should in any case give them the benefit of the doubt."

Comment by adriano_mannino on Welcome to Less Wrong! (2012) · 2012-07-04T01:23:15.160Z · score: 11 (11 votes) · LW · GW

Hi all, I've been lurking for about two years and have been wanting to contribute here and there - so here I am. I specialize in ethics and have further interests in epistemology and the philosophy of mind.

LessWrong is (by far) the best web resource on step-by-step rationality. I've been referring all aspiring rationalists to this blog as well as all the people who urgently need some rationality training (and who aren't totally lost). So thanks, you're doing an awesome job with this rationality dojo!