CEV: a utilitarian critique
post by Pablo (Pablo_Stafforini) · 2013-01-26T16:12:20.846Z · LW · GW · Legacy · 87 comments
I'm posting this article on behalf of Brian Tomasik, who authored it but is at present too busy to respond to comments.
Update from Brian: "As of 2013-2014, I have become more sympathetic to at least the spirit of CEV specifically and to the project of compromise among differing value systems more generally. I continue to think that pure CEV is unlikely to be implemented, though democracy and intellectual discussion can help approximate it. I also continue to feel apprehensive about the conclusions that a CEV might reach, but the best should not be the enemy of the good, and cooperation is inherently about not getting everything you want in order to avoid getting nothing at all."
Introduction
I'm often asked questions like the following: If wild-animal suffering, lab universes, sentient simulations, etc. are so bad, why can't we assume that Coherent Extrapolated Volition (CEV) will figure that out and do the right thing for us?
Disclaimer
Most of my knowledge of CEV is based on Yudkowsky's 2004 paper, which he admits is obsolete. I have not yet read most of the more recent literature on the subject.
Reason 1: CEV will (almost certainly) never happen
CEV is like a dream for a certain type of moral philosopher: finally, an ideal procedure for discovering what we really want upon reflection!
The fact is, the real world is not decided by moral philosophers. It's decided by power politics, economics, and Darwinian selection. Moral philosophers can certainly have an impact through these channels, but they're unlikely to convince the world to rally behind CEV. Can you imagine the US military -- during its AGI development process -- deciding to adopt CEV? No way. It would adopt something that ensures the continued military and political dominance of the US, driven by mainstream American values. Same goes for China or any other country. If AGI is developed by a corporation, the values will reflect those of the corporation or the small group of developers and supervisors who hold the most power over the project. Unless that group is extremely enlightened, CEV is not what we'll get.
Anyway, this is assuming that the developers of AGI can even keep it under control. Most likely AGI will turn into a paperclipper or else evolve into some other kind of Darwinian force over which we lose control.
Objection 1: "Okay. Future military or corporate developers of AGI probably won't do CEV. But why do you think they'd care about wild-animal suffering, etc. either?"
Well, they might not, but if we make the wild-animal movement successful, then in ~50-100 years when AGI does come along, the notion of not spreading wild-animal suffering might be sufficiently mainstream that even military or corporate executives would care about it, at least to some degree.
If post-humanity does achieve astronomical power, it will only be through AGI, so there's high value in influencing the future developers of an AGI. For this reason I believe we should focus our meme-spreading on those targets. However, this doesn't mean they should be our only focus, for two reasons: (1) Future AGI developers will themselves be influenced by their friends, popular media, contemporary philosophical and cultural norms, etc., so if we can change those things, we will diffusely impact future AGI developers too. (2) We need to build our movement, and the lowest-hanging fruit for new supporters consists of those most interested in the cause (e.g., antispeciesists, environmental-ethics students, transhumanists). We should reach out to them to expand our base of support before going after the big targets.
Objection 2: "Fine. But just as we can advance values like preventing the spread of wild-animal suffering, couldn't we also increase the likelihood of CEV by promoting that idea?"
Sure, we could. The problem is, CEV is not an optimal thing to promote, IMHO. It's sufficiently general that lots of people would want it, so for ourselves, the higher leverage comes from advancing our particular, more idiosyncratic values. Promoting CEV is kind of like promoting democracy or free speech: It's fine to do, but if you have a particular cause that you think is more important than other people realize, it's probably going to be better to promote that specific cause than to jump on the bandwagon and do the same thing everyone else is doing, since the bandwagon's cause may not be what you yourself prefer.
Indeed, for myself, it's possible CEV could be a net bad thing, if it would reduce the likelihood of paperclipping -- a future which might (or might not) contain far less suffering than a future directed by humanity's extrapolated values.
Reason 2: CEV would lead to values we don't like
Some believe that morality is absolute, in which case a CEV's job would be to uncover what that morality is. This view is mistaken, for the following reasons: (1) the existence of a separate realm of reality where ethical truths reside violates Occam's razor, and (2) even if such truths did exist, why would we care what they were?
Yudkowsky and the LessWrong community agree that ethics is not absolute, so they have different motivations behind CEV. As far as I can gather, the following are two of them:
Motivation 1: Some believe CEV is genuinely the right thing to do
As Eliezer said in his 2004 paper (p. 29), "Implementing CEV is just my attempt not to be a jerk." Some may believe that CEV is the ideal meta-ethical way to resolve ethical disputes.
I have to differ. First, the set of minds included in CEV is totally arbitrary, and hence, so will be the output. Why include only humans? Why not animals? Why not dead humans? Why not humans that weren't born but might have been? Why not paperclip maximizers? Baby eaters? Pebble sorters? Suffering maximizers? Wherever you draw the line, there you're already inserting your values into the process.
And then once you've picked the set of minds to extrapolate, you still have astronomically many ways to do the extrapolation, each of which could give wildly different outputs. Humans have a thousand random shards of intuition about values that resulted from all kinds of little, arbitrary perturbations during evolution and environmental exposure. If the CEV algorithm happens to make some more salient than others, this will potentially change the outcome, perhaps drastically (butterfly effects).
Now, I would be in favor of a reasonable extrapolation of my own values. But humanity's values are not my values. There are people who want to spread life throughout the universe regardless of suffering, people who want to preserve nature free from human interference, people who want to create lab universes because it would be cool, people who oppose utilitronium and support retaining suffering in the world, people who want to send members of other religions to eternal torture, people who believe sinful children should burn forever in red-hot ovens, and on and on. I do not want these values to be part of the mix.
Maybe (hopefully) some of these beliefs would go away once people learned more about what these wishes really implied, but some would not. Take abortion, for example: Some non-religious people genuinely oppose it, and not for trivial, misinformed reasons. They have thought long and hard about abortion and still find it to be wrong. Others have thought long and hard and still find it to be not wrong. At some point, we have to admit that human intuitions are genuinely in conflict in an irreconcilable way. Some human intuitions are irreconcilably opposed to mine, and I don't want them in the extrapolation process.
Motivation 2: Some argue that even if CEV isn't ideal, it's the best game-theoretic approach because it amounts to cooperating on the prisoner's dilemma
I think the idea is that if you try to promote your specific values above everyone else's, then you're timelessly causing this to be the decision of other groups of people who want to push for their values instead. But if you decided to cooperate with everyone, you would timelessly influence others to do the same.
This seems worth considering, but I'm doubtful that the argument is compelling enough to act on. I can almost guarantee that if I decided to start cooperating by working toward CEV, everyone else working to shape the values of the future wouldn't suddenly jump on board and do the same.
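To make the game-theoretic structure concrete, here is a toy prisoner's-dilemma payoff matrix; the numbers are purely illustrative assumptions, not anything from the CEV literature. Each of two value-spreading factions either pushes its own values ("defect") or backs CEV ("cooperate"):

```python
# Toy payoff matrix for the game-theoretic reading of Motivation 2.
# All numbers are illustrative assumptions.
payoffs = {
    # (my choice, their choice): (my payoff, their payoff)
    ("cooperate", "cooperate"): (2, 2),  # both back CEV
    ("cooperate", "defect"):    (0, 3),  # I back CEV, they push their own values
    ("defect",    "cooperate"): (3, 0),  # I push my values, they back CEV
    ("defect",    "defect"):    (1, 1),  # everyone pushes their own values
}

for (mine, theirs), (my_payoff, their_payoff) in payoffs.items():
    print(f"I {mine}, they {theirs}: I get {my_payoff}, they get {their_payoff}")

# Each side does better by defecting whatever the other does (3 > 2, 1 > 0),
# yet mutual cooperation beats mutual defection (2 > 1). The timeless-decision
# argument treats the two choices as correlated, so the effective comparison
# is (2, 2) vs (1, 1); the doubt expressed above is precisely whether that
# correlation actually holds among value-spreaders.
```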
Objection 1: "Suppose CEV did happen. Then spreading concern for wild animals and the like might have little value, because the CEV process would realize that you had tried to rig the system ahead of time by making more people care about the cause, and it would attempt to neutralize your efforts."
Well, first of all, CEV is (almost certainly) never going to happen, so I'm not too worried. Second of all, it's not clear to me that such a scheme would actually be put in place. If you're trying to undo pre-CEV influences that led to the distribution of opinions to that point, you're going to have a heck of a lot of undoing to do. Are you going to undo the abundance of Catholics because their religion discouraged birth control and so led to large numbers of supporters? Are you going to undo the over-representation of healthy humans because natural selection unfairly removed all those sickly ones? Are you going to undo the under-representation of dinosaurs because an arbitrary asteroid killed them off before CEV came around?
The fact is that who has power at the time of AGI will probably matter a lot. If we can improve the values of those who will have power in the future, this will in expectation lead to better outcomes -- regardless of whether the CEV fairy tale comes true.
87 comments
Comments sorted by top scores.
comment by Nisan · 2013-01-26T23:21:57.436Z · LW(p) · GW(p)
Humans have a thousand random shards of intuition about values that resulted from all kinds of little, arbitrary perturbations during evolution and environmental exposure. If the CEV algorithm happens to make some more salient than others, this will potentially change the outcome, perhaps drastically (butterfly effects).
There's a brief discussion of butterfly effects as a potential pitfall for CEV in this thread.
comment by RomeoStevens · 2013-01-26T23:54:29.815Z · LW(p) · GW(p)
I have two objections.
- I do in fact value the things I value. I'm not playing at some game.
- I'm an idiot, and my values are incoherent.
Therefore, CEV. Regardless of FAI developments.
↑ comment by DanArmak · 2013-01-28T17:47:10.716Z · LW(p) · GW(p)
I can see that this would lead you to want CEV of your (competing) values. But why would it lead you to want CEV of both your and other people's values?
↑ comment by RomeoStevens · 2013-01-28T18:45:15.664Z · LW(p) · GW(p)
Because of the incoherent bit. I think that others are doing a better job of optimizing for the world I actually want to live in than I am in many cases.
↑ comment by DanArmak · 2013-01-29T10:37:18.908Z · LW(p) · GW(p)
I think that others are doing a better job of optimizing for the world I actually want to live in than I am in many cases.
If others are optimizing for the world you want, if they are promoting your values better than you yourself are promoting them, then why do you express a wish to live in a world also optimized for their values via CEV?
↑ comment by RomeoStevens · 2013-01-30T22:27:06.626Z · LW(p) · GW(p)
I can trivially account for divergent values without CEV: holodecks/experience machines. Surely holodecks are the lower bound for any utopian vision.
↑ comment by ewbrownv · 2013-01-28T17:12:11.544Z · LW(p) · GW(p)
Actually, this would be a strong argument against CEV. If individual humans commonly have incoherent values (which they do), there is no concrete reason to expect an automated extrapolation process to magically make them coherent. I've noticed that CEV proponents have a tendency to argue that the "thought longer, understood more" part of the process will somehow fix all objections of this sort, but given the complete lack of detail about how this process is supposed to work you might as well claim that the morality fairy is going to descend from the heavens and fix everything with a wave of her magic wand.
If you honestly think you can make an AI running CEV produce a coherent result that most people will approve of, it's up to you to lay out concrete details of the algorithm that will make this happen. If you can't do that, you've just conceded that you don't actually have an answer for this problem. The burden of proof here is on the party proposing to gamble humanity's future on a single act of software engineering, and the standard of evidence must be at least as high as that of any other safety-critical engineering.
↑ comment by TheOtherDave · 2013-01-28T17:38:51.746Z · LW(p) · GW(p)
If you honestly think you can make an AI running CEV produce a coherent result that most people will approve of,
Can you point me to some serious CEV proponents who argue that most people will approve of the results? I agree with you that this seems implausible, but it has never been clear to me that anyone serious actually asserts it.
FWIW, it has seemed to me from the beginning that the result of the CEV strategy would likely include at least something that makes me go "Um... really? I'm not entirely comfortable with that." More generally, it seems unlikely to me that the system which best implements my values would feel comfortable or even acceptable to me, any more than the diet that best addresses my nutritional needs will necessarily conform to my aesthetic preferences about food.
↑ comment by BerryPick6 · 2013-01-28T18:51:45.333Z · LW(p) · GW(p)
More generally, it seems unlikely to me that the system which best implements my values would feel comfortable or even acceptable to me, any more than the diet that best addresses my nutritional needs will necessarily conform to my aesthetic preferences about food.
At first I thought this comparison was absolutely perfect, but I'm not really sure about that anymore. With a diet, you have other values to fall back on which might make your decision to adopt an aesthetically displeasing regimen still be something that you should do. With CEV, it's not entirely clear to me why I would want to prefer CEV values over my own current ones, so there's no underlying reason for me to accept that I should accept CEV as the best implementation of my values.
That got a little complicated, and I'm not sure it's exactly what I meant to say. Basically, I'm trying to say that while you may not be entirely comfortable with a better diet, you would still implement it for yourself since it's a rational thing to do, whereas if you aren't comfortable with implementing your own CEV, there's no rational reason to compel you to do so.
↑ comment by TheOtherDave · 2013-01-28T19:20:20.634Z · LW(p) · GW(p)
there's no underlying reason for me to accept that I should accept CEV as the best implementation of my values
Sure.
And even if I did accept CEV(humanity) as the best implementation of my values in principle, the question of what grounds I had to believe that any particular formally specified value system that was generated as output by some seed AI actually was CEV(humanity) is also worth asking.
Then again, there's no underlying reason for me to accept that I should accept my current collection of habits and surface-level judgments and so forth as the best implementation of my values, either.
So, OK, at some point I've got a superhuman value-independent optimizer all rarin' to go, and the only question is what formal specification of a set of values I ought to provide it with. So, what do I pick, and why do I pick it?
↑ comment by BerryPick6 · 2013-01-28T20:15:26.689Z · LW(p) · GW(p)
Then again, there's no underlying reason for me to accept that I should accept my current collection of habits and surface-level judgments and so forth as the best implementation of my values, either.
Isn't this begging the question? By 'my values' I'm pretty sure I literally mean 'my current collection of habits and surface-level judgements and so forth'.
Could I have terminal values of which I am completely unaware in any way, shape, or form? How would I even recognize such things, and what reason do I have to prefer them over 'my values'?
Did I just go in a circle?
↑ comment by TheOtherDave · 2013-01-28T20:18:44.385Z · LW(p) · GW(p)
Well, you tell me: if I went out right now and magically altered the world to reflect your current collection of habits and surface-level judgments, do you think you would endorse the result?
I'm pretty sure I wouldn't, if the positions were reversed.
↑ comment by BerryPick6 · 2013-01-28T20:50:31.858Z · LW(p) · GW(p)
I would want you to change the world so that what I want is actualized, yes. If you wouldn't endorse an alteration of the world towards your current values, in what sense do you really 'value' said values?
I'm going to need to taboo 'value', aren't I?
↑ comment by TheOtherDave · 2013-01-28T21:51:27.406Z · LW(p) · GW(p)
I don't know if you need to taboo it or not, but I'll point out that I asked you a question that didn't use that word, and you answered a question that did.
So perhaps a place to start is by answering the question I asked in the terms that I asked it?
comment by Mestroyer · 2013-01-28T11:53:17.556Z · LW(p) · GW(p)
I don't think that people valuing eternal torture of other humans is much of a concern, because they don't value it nearly as much as the people in question disvalue being tortured. I bet there are a lot more people who care about animals' feelings and who care a lot more, than those who care about the aesthetics of brutality in nature. I think the majority of people have more instincts of concern for animals than their actions suggest, because now it is convenient to screw over animals as an externality of eating tasty food, and the animals suffering are out of sight, and the connection between buying meat and animals living terrible lives elsewhere is hard to visualize. The same population that buys meat from farms that treat animals to awful lives also enacts animal cruelty laws. As evidence that caring more about animals is something that would be strengthened by thinking more about, consider the results of the 2012 LessWrong Survey:
VEGETARIAN: No 906 (76.6%), Yes 147 (12.4%), No answer 130 (11%)
For comparison, 3.2% of US adults are vegetarian.
(Though some of this is probably due to LessWrongers being richer than the average American, something that's probably true because wealthy people have more time to read about abstruse topics on the internet.)
The biggest peculiarity of Brian Tomasik's utility function, and the one least likely ever to be shared by the majority of humanity, is probably not that he cares about animals (or even that he cares about insects) but that he cares so much more about suffering than about happiness and other good things. (I am basing this assessment of his utility function on a post of his I read on http://www.felicifia.org a while ago, which I can't find now.)
The exchange rate in your utility function between good things and bad things is pretty relevant to whether you should prefer CEV or paperclipping (and what the changes in the probabilities of each, given actions you might take, would have to be in order to justify those actions), and to whether you think lab universes would be a good thing.
So if you are not a negative utilitarian, or nearly one, even if Brian Tomasik's beliefs about matters of fact are very correlated with reality, be careful of his policy recommendations.
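As a rough illustration of the exchange-rate point (all magnitudes here are made-up assumptions, not figures from this thread), here is a sketch of how the pain:pleasure ratio can flip the ranking between a CEV-shaped future and a paperclipper future:

```python
# A toy comparison; all magnitudes are illustrative assumptions, not figures
# from this thread.

def future_value(happiness, suffering, exchange_rate):
    # One unit of suffering counts `exchange_rate` times as much as one
    # unit of happiness.
    return happiness - exchange_rate * suffering

# Hypothetical futures: a CEV-shaped future with much happiness and some
# suffering, and a paperclipper future with almost none of either.
cev = dict(happiness=100.0, suffering=10.0)
clip = dict(happiness=0.1, suffering=0.1)

for rate in (1, 3, 30):
    v_cev = future_value(**cev, exchange_rate=rate)
    v_clip = future_value(**clip, exchange_rate=rate)
    prefer = "CEV" if v_cev > v_clip else "paperclipping"
    print(f"pain:pleasure rate {rate:>2}: CEV={v_cev:7.1f}, clip={v_clip:6.2f} -> prefer {prefer}")
```

With these assumed numbers, a low exchange rate favors the CEV future, while a sufficiently suffering-focused exchange rate favors the nearly empty paperclipper future.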
↑ comment by Utilitarian · 2013-01-29T09:12:50.128Z · LW(p) · GW(p)
I bet there are a lot more people who care about animals' feelings and who care a lot more, than those who care about the aesthetics of brutality in nature.
Well, at the moment, there are hundreds of environmental-preservation organizations and basically no organizations dedicated to reducing wild-animal suffering. Environmentalism as a cause is much more mainstream than animal welfare. Just like the chickens that go into people's nuggets, animals suffering in nature "are out of sight, and the connection between [preserving pristine habitats] and animals living terrible lives elsewhere is hard to visualize."
It's encouraging that more LessWrongers are veg than average, although I think 12.4% is pretty typical for elite universities and the like as well. (But maybe that underscores your point.)
The biggest peculiarity of Brian Tomasik's utility function, that is least likely to ever be shared by the majority of humanity, is probably not that he cares about animals (even that he cares about insects) but that he cares so much more about suffering than happiness and other good things.
An example post. I care a lot about suffering, a little about happiness, and none about other things.
The exchange rate in your utility function between good things and bad things is pretty relevant to whether you should prefer CEV or paperclipping (and what the changes in the probabilities of each, given actions you might take, would have to be in order to justify those actions), and to whether you think lab universes would be a good thing.
Yep!
↑ comment by DanArmak · 2013-01-28T17:41:29.054Z · LW(p) · GW(p)
I don't think that people valuing eternal torture of other humans is much of a concern, because they don't value it nearly as much as the people in question disvalue being tortured.
Suppose most people agree on valuing the torture of a few people, and only a few people disagree. Would you be OK with the majority's values outweighing the minority's, if it's a large enough majority?
If you're OK with that, and if this is not specific to the example of torture, then you are effectively saying that you value the extrapolated consensus values of humanity more than your own, even though you don't know what those values may be. That you value the (unspecified) CEV process, and whatever values it ends up generating, more than any other values you currently hold. Is that so?
Even if you're OK with that, you'd be vulnerable to a "clone utility monster": if I can clone myself faster than average, then the values of me and my clones will come to dominate the global population. This seems true for almost any value aggregation process given a large enough majority (fast enough cloning).
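A minimal sketch of that cloning dynamic, with assumed starting shares and growth rates: under head-count aggregation, a faction that copies itself faster eventually dominates the population, and hence the aggregated values.

```python
# A minimal sketch of the "clone utility monster" dynamic; starting shares
# and growth rates are assumptions for illustration only.

def population_share(start, growth, generations):
    fast, rest = start
    g_fast, g_rest = growth
    for _ in range(generations):
        fast *= g_fast
        rest *= g_rest
    return fast / (fast + rest)

# A faction starting at 0.1% of the population but doubling each generation,
# versus everyone else growing 10% per generation.
for gen in (0, 5, 10, 15):
    share = population_share((0.001, 0.999), (2.0, 1.1), gen)
    print(f"after {gen:2d} generations the fast-cloning faction is {share:.1%} of the population")
```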
↑ comment by Mestroyer · 2013-01-28T20:50:42.697Z · LW(p) · GW(p)
No, I would not be okay with it. I don't terminally value CEV. I think it would be instrumentally valuable, because scenarios where everyone wants to torture a few people are not that likely. I would prefer that only my own extrapolated utility function controlled the universe. Unlike Eliezer Yudkowsky, I don't care that much about not being a jerk. But that is not going to happen. If this detail from the original paper still stands, the CEV is allowed to modify the extrapolating process. So if there was the threat of everyone having to race to clone themselves as much as possible for more influence, it might modify itself to give clones less weight, or prohibit cloning.
↑ comment by DanArmak · 2013-01-29T10:39:33.394Z · LW(p) · GW(p)
So if there was the threat of everyone having to race to clone themselves as much as possible for more influence, it might modify itself to give clones less weight, or prohibit cloning
Prohibiting these things, and CEV self-modifying in general, means optimizing for certain values or a certain outcome. Where do these values come from? From the CEV's programmers. But if you let certain predetermined values override the (unknown) CEV-extrapolated values, how do you make these choices, and where do you draw the line?
comment by Ben Pace (Benito) · 2013-01-26T18:36:03.961Z · LW(p) · GW(p)
It is quite simple to make a LessWrong account, and it would be helpful so that you can respond to comments.
If you think it might be difficult to get the sufficient karma, you can also post a comment in the open thread asking for upvotes so that you can post. It's worked nicely before :)
↑ comment by Pablo (Pablo_Stafforini) · 2013-01-26T19:45:21.313Z · LW(p) · GW(p)
Brian is very busy at the moment, and said he was reluctant to post partly because this would create the reasonable expectation that he would have enough time to respond to comments. I have slightly edited the italicized notice above to clarify this point.
comment by jimrandomh · 2013-01-26T18:53:53.892Z · LW(p) · GW(p)
There seem to be two objections here. The first is that CEV does not uniquely identify a value system; starting with CEV, you don't have actual values until you've identified the set of people/nonpeople you're including, an extrapolation procedure, and a reconciliation procedure. But when this is phrased as "the set of minds included in CEV is totally arbitrary, and hence, so will be the output," an essential truth is lost: while parts of CEV are left unspecified, other parts are, and so the output is not fully arbitrary. The set of CEV-compatible value systems is smaller than the set of possible value systems; and while the set of CEV-compatible value systems is not completely free of systems I find abhorrent, it is nevertheless systematically better than any other class of value systems I know of.
The second objection seems to be that there are humans out there with abhorrent values, and that extrapolation and reconciliation might not successfully eliminate those values. The best outcome, I think, is if they're initially included but either extrapolated into oblivion or cancelled by other values. (eg: valuing others' not having abortions loses to their valuing choice, but the AI arranges things so that most pregnancies are wanted and it doesn't come up often; valuing the torture of sinful children loses to their desire to not be tortured, and also goes away with a slight increase in intelligence and wisdom).
But just excluding some values outright seems very problematic. On a philosophical level, it requires breaking the symmetry between humans. On a practical level, it would mean that launching an AGI first becomes competitive, potentially replacing careful deliberation with a race to finish. And the risks of mistakes in a race to finish seem to far outweigh the importance of almost any slight differences in value systems.
↑ comment by Peter Wildeford (peter_hurford) · 2013-01-26T22:42:38.529Z · LW(p) · GW(p)
valuing others' not having abortions loses to their valuing choice, but the AI arranges things so that most pregnancies are wanted and it doesn't come up often; valuing the torture of sinful children loses to their desire to not be tortured, and also goes away with a slight increase in intelligence and wisdom
How could you ever guarantee that? Do you think progress toward utilitarian values increases with intelligence/wisdom?
↑ comment by randallsquared · 2013-01-28T04:06:02.755Z · LW(p) · GW(p)
In the context of
But when this is phrased as "the set of minds included in CEV is totally arbitrary, and hence, so will be the output," an essential truth is lost
I think it's clear that with
valuing others' not having abortions loses to their valuing choice
you have decided to exclude some (potential) minds from CEV. You could just as easily have decided to include them and said "valuing choice loses to others valuing their life".
But, to be clear, I don't think that even if you limit it to "existing, thinking human minds at the time of the calculation", you will get some sort of unambiguous result.
↑ comment by V_V · 2013-01-26T20:32:34.933Z · LW(p) · GW(p)
an essential truth is lost: while parts of CEV are left unspecified, other parts are, and so the output is not fully arbitrary.
What parts are specified? If the set of people is unspecified, the extrapolation procedure is unspecified, and the reconciliation procedure is unspecified, then what is left?
The set of CEV-compatible value systems is smaller than the set of possible value systems;
No. For any value system X that is held by some people, you could always apply the CEV to a set of people who hold X. Unless the extrapolation procedure does something funny, the CEV of that set of people would be X.
On a practical level, it would mean that launching an AGI first becomes competitive, potentially replacing careful deliberation with a race to finish. And the risks of mistakes in a race to finish seem to far outweigh the importance of almost any slight differences in value systems.
Unless the extrapolation and the reconciliation procedures are trivial, computing the CEV of mankind would be probably beyond the possibility of any physically plausible AGI, superintelligent or not.
People here seem to assume AGI = omniscient deity, but there are no compelling technical reasons for that assumption. Most likely that's just a reflection of traditional religious beliefs.
comment by JGWeissman · 2013-01-27T06:16:13.642Z · LW(p) · GW(p)
The CEV of humanity is not likely to promote animal suffering. Most people don't value animal suffering. They value eating hamburgers, and aren't particularly bothered by the far away animal suffering that makes it possible for them to eat hamburgers. An FAI can give us hamburgers without causing any animal suffering.
↑ comment by Utilitarian · 2013-01-27T12:10:51.930Z · LW(p) · GW(p)
Future humans may not care enough about animal suffering relative to other things, or may not regard suffering as being as bad as I do. As noted in the post, there are people who want to spread biological life as much as possible throughout the galaxy. Deep ecologists may actively want to preserve wild-animal suffering (Ned Hettinger: "Respecting nature means respecting the ways in which nature trades values, and such respect includes painful killings for the purpose of life support.") Future humans might run ancestor sims that happen to include astronomical numbers of sentient insects, most of which die (possibly painfully) shortly after birth. In general, humans have motivations to simulate minds similar to theirs, which means potentially a lot more suffering along for the ride.
↑ comment by JGWeissman · 2013-01-27T16:52:33.403Z · LW(p) · GW(p)
As noted in the post, there are people who want to spread biological life as much as possible throughout the galaxy. Deep ecologists may actively want to preserve wild-animal suffering (Ned Hettinger: "Respecting nature means respecting the ways in which nature trades values, and such respect includes painful killings for the purpose of life support.")
How many people? How much of this is based on confusion and not actually confronting the scale of suffering involved? (Note that CEV is supposed to account for this, giving us not what we say we want, but what we would want if we were smarter and knew more.)
Future humans might run ancestor sims that happen to include astronomical numbers of sentient insects, most of which die (possibly painfully) shortly after birth.
I am not convinced that insects are sentient (though an FAI that understands how sentience works could tell me I'm wrong and I'd believe it). If insects do turn out to be sentient, it would not be hard (and would actually take fewer computational resources) to replace the insect's sentience with an abstract model of its behavior.
In general, humans have motivations to simulate minds similar to theirs, which means potentially a lot more suffering along for the ride.
Sure, if we are stupid about it, but we are already working on how not to be stupid about it. And seriously, a successful singularity should give us far more interesting things to do than running such simulations (or eating hamburgers).
↑ comment by Pablo (Pablo_Stafforini) · 2013-01-27T17:08:14.065Z · LW(p) · GW(p)
How many people? How much of this is based on confusion and not actually confronting the scale of suffering involved? (Note that CEV is supposed to account for this, giving us not what we say we want, but what we would want if we were smarter and knew more.)
CEV proponents can always avoid an unpalatable objection to CEV by saying that, "if we knew more and were smarter", we wouldn't act in such objectionable ways. But a reason has to be provided for thinking that greater intelligence or better knowledge would in fact cause us to act differently. In these discussions, I see a lot of handwaving, but not many actual reasons.
↑ comment by JGWeissman · 2013-01-27T17:39:05.257Z · LW(p) · GW(p)
CEV proponents can always avoid an unpalatable objection to CEV by saying that, "if we knew more and were smarter", we wouldn't act in such objectionable ways.
Yeah, it works well against naive objections that some humans behave badly and they have influence on the CEV.
But a reason has to be provided for thinking that greater intelligence or better knowledge would in fact cause us to act differently.
Note that I referred to "if we knew more and were smarter" after asking if proponents of preserving wild animal suffering have actually confronted the scale of suffering involved.
↑ comment by Utilitarian · 2013-01-28T09:48:25.279Z · LW(p) · GW(p)
Thanks, JGWeissman. There are certainly some deep ecologists, like presumably Hettinger himself, who have thought long and hard about the scale of wild-animal suffering and still support preservation of ecology as is. When I talk with ecologists or environmentalists, almost always their reply is something like, "Yes, there's a lot of suffering, but it's okay because it's natural for them." One example:
As I sit here, thinking about the landscape of fear, I watch a small bird at my bird feeder. It spends more time looking around than it does eating. I try to imagine the world from its point of view — the startles, the alarms, the rustle of wings, the paw of the cat. And although I wish it well, I wouldn’t like its predators to disappear.
You can see many more examples here. A growing number of people have been convinced that wild-animal suffering should be reduced where feasible, but I think this is still a minority view. If more people thought about it harder, probably there would be more support, but ecological preservation is also a very strong intuition for some people. It's easy not to realize this when we're in our own bubbles of utilitarian-minded rationalists. :)
Spreading life far and wide is less widespread as a value, but it's popular enough that the Panspermia Society is one of a few groups that feels this way. I also have a very smart friend who happens to share this goal, even though he acknowledges this would create a lot of suffering.
As far as insects, it's not obvious that post-humans would care enough to undertake the approximation of their brains that you mention, because maybe it would make the simulation more complicated (=> expensive) or reduce its fidelity. There's an analogy with factory farming today: Sure, we could prevent animal suffering, but it's more costly. Still, yes, we can hope that post-humans would give enough weight to insect suffering to avoid this. And I agree insects may very well not be sentient, though if they are, the numbers of suffering minds would be astronomical.
The work on nonperson predicates and computational hazards is great -- I'm glad you guys are doing that!
↑ comment by JGWeissman · 2013-01-28T15:24:01.683Z · LW(p) · GW(p)
When I talk with ecologists or environmentalists, almost always their reply is something like, "Yes, there's a lot of suffering, but it's okay because it's natural for them." One example:
As I sit here, thinking about the landscape of fear, I watch a small bird at my bird feeder. It spends more time looking around than it does eating. I try to imagine the world from its point of view — the startles, the alarms, the rustle of wings, the paw of the cat. And although I wish it well, I wouldn’t like its predators to disappear.
The argument seems to be less that the suffering is OK because it is natural than that any intervention we could make to remove it would cause nature to stop working: removing predator species results in more herbivores, which leads to vegetation being overconsumed, which leads to ecological collapse. I am sympathetic to this argument. On a large enough scale, this means no breathable atmosphere. So while I think that wild animal suffering is a bad thing, I will accept it for now as a cost of supporting human life. (Maybe you could remove all animals not actually symbiotic with plants, but this seems like a hell of a gamble, we would likely regret the unintended consequences, and it could be difficult to undo.) Once humanity can upload and live in simulations, we have more options. Do you think the typical person advocating ecological balance has evaluated how the tradeoffs would change given future technology?
If more people thought about it harder, probably there would be more support, but ecological preservation is also a very strong intuition for some people. It's easy not to realize this when we're in our own bubbles of utilitarian-minded rationalists. :)
CEV is supposed to figure out what people would want if they were more rational. If rationalists tend to discard that intuition, it is not likely to have a strong effect on CEV. (Though if people without such strong intuitions are likely to become more rational, this would not be strong evidence. It may be useful to try raising the sanity waterline among people who demonstrate the intuition and see what happens.)
As far as insects, it's not obvious that post-humans would care enough to undertake the approximation of their brains that you mention, because maybe it would make the simulation more complicated (=> expensive) or reduce its fidelity.
I am completely against giving up the awesomeness of a good singularity because it is not obvious that the resulting society won't devote some tiny fraction of their computing power to simulations in which animals happen to suffer. The suffering is bad, but there are other values to consider here, that the scenario includes in far greater quantities.
↑ comment by Utilitarian · 2013-01-29T08:11:12.912Z · LW(p) · GW(p)
Do you think the typical person advocating ecological balance has evaluated how the tradeoffs would change given future technology?
Good point. Probably not, and for some, their views would change with new technological options. For others (environmentalist types especially), they would probably retain their old views.
That said, the future-technology sword cuts both ways: Because most people aren't considering post-human tech, they're not thinking of (what some see as) the potential astronomical benefits from human survival. If 10^10 humans were only going to live at most another 1-2 billion years on Earth, their happiness could never outweigh the suffering of the 10^18 insects living on Earth at the same time. So if people aren't thinking about space colonization, why do they care so much about preserving humanity anyway? Two possible reasons are that they're speciesist and care more about humans, or that they value things other than happiness and suffering. I think both are true here, and both are potentially problematic for CEV values.
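A back-of-the-envelope version of that comparison (the population counts are the round figures above; the per-capita weights are illustrative assumptions):

```python
# Back-of-the-envelope version of the comparison above. Population counts
# are the round figures used in the comment; the per-capita weights are
# purely illustrative assumptions.
humans = 1e10                 # humans alive at a time
insects = 1e18                # insects alive on Earth at a time
happiness_per_human = 1.0     # assumed happiness units per human
suffering_per_insect = 1e-6   # assumed suffering units per insect

total_happiness = humans * happiness_per_human      # 1e10 units
total_suffering = insects * suffering_per_insect    # 1e12 units

print(f"human happiness:  {total_happiness:.1e}")
print(f"insect suffering: {total_suffering:.1e}")
# Even with a per-insect weight a million times smaller than the per-human
# weight, the insect term dominates by about 100x; only a near-zero weight
# per insect would let the human term win.
```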
Though if people without such strong intuitions are likely to become more rational, this would not be strong evidence.
Yeah, that would be my concern. These days, "being rational" tends to select for people who have other characteristics, including being more utilitarian in inclination. Interesting idea about seeing how deep ecologists' views would change upon becoming more rational.
The suffering is bad, but there are other values to consider here, that the scenario includes in far greater quantities.
We have different intuitions about how bad suffering is. My pain:pleasure exchange rate is higher than that of most people, and this means I think the expected suffering that would result from a Singularity isn't worth the potential for lots of happiness.
comment by pleeppleep · 2013-01-27T00:14:39.485Z · LW(p) · GW(p)
This is a question about utilitarianism, not AI, but can anyone explain (or provide a link to an explanation of) why reducing the total suffering in the world is considered so important? I thought that we pretty much agreed that morality is based on moral intuitions, and it seems pretty counterintuitive to value the states of mind of people too numerous to sympathize with as highly as people here do.
It seems to me that reducing suffering in a numbers game is the kind of thing you would say is your goal because it makes you sound like a good person, rather than something your conscience actually motivates you to do, but people here are usually pretty averse to conscious signaling, so I'm not sure that works as an explanation. I'm certain this has been covered elsewhere, but I haven't seen it.
↑ comment by Pablo (Pablo_Stafforini) · 2013-01-27T00:49:17.614Z · LW(p) · GW(p)
When I become directly acquainted with an episode of intense suffering, I come to see that this is a state of affairs that ought not to exist. My empathy may be limited, but I don't need to empathize with others to recognize that, when they suffer, their suffering ought to be relieved too.
I don't pretend to speak on behalf of all other hedonistic utilitarians, however. Brian himself would probably disagree with my answer. He would instead reply that he "just cares" about other people's suffering, and that's that.
↑ comment by vallinder · 2013-01-27T11:54:15.345Z · LW(p) · GW(p)
Knowing that you've abandoned moral realism, how would you respond to someone making an analogous argument about preferences or duties? For instance, "When a preference of mine is frustrated, I come to see this as a state of affairs that ought not to exist," or "When someone violates a duty, I come to see this as a state of affairs that ought not to exist." Granted, the acquaintance may not be as direct as in the case of intense suffering. But is that enough to single out pleasure and suffering?
↑ comment by Utilitarian · 2013-01-27T12:01:28.743Z · LW(p) · GW(p)
Preventing suffering is what I care about, and I'm going to try to convince other people to care about it. One way to do that is to invent plausible thought experiments / intuition pumps for why it matters so much. If I do, that might help with evangelism, but it's not the (original) reason why I care about it. I care about it because of experience with suffering in my own life, feeling strong empathy when seeing it in others, and feeling that preventing suffering is overridingly important due to various other factors in my development.
↑ comment by Jabberslythe · 2013-01-28T08:48:09.000Z · LW(p) · GW(p)
It seems to me that reducing suffering in a numbers game is the kind of thing you would say is your goal because it makes you sound like a good person
I am not sure that the hedonistic utilitarian agenda is high status. The most plausible cynical/psychological critique of hedonistic utilitarians is that they are too worried about ethical consistency and about coherently extrapolating a simple principle from their values.
↑ comment by DanArmak · 2013-01-28T17:45:52.349Z · LW(p) · GW(p)
Cooperation for mutual benefit. Potential alliance building. Signalling of reliability, benevolence, and capability. It's often beneficial to adopt a general policy of helping strangers whenever the personal price is low enough. And (therefore) the human mind is such that people mostly enjoy helping others as long as it's not too strenuous.
↑ comment by aelephant · 2013-01-27T23:59:21.708Z · LW(p) · GW(p)
You could reduce human suffering to 0 by reducing the number of humans to 0, so there's got to be another value greater than reducing suffering.
It seems plausible to me that suffering could serve some useful purpose & eliminating it (or seeking to eliminate it) might have horrific consequences.
↑ comment by Jabberslythe · 2013-01-28T08:58:33.636Z · LW(p) · GW(p)
You could reduce human suffering to 0 by reducing the number of humans to 0, so there's got to be another value greater than reducing suffering.
Almost all hedonistic utilitarians are concerned with maximizing happiness as well as minimizing suffering, including Brian. The reason that he talks about suffering so much is that he ranks a unit of suffering as, say, a -3 experience where most people would rank it as, say, a -1 experience. And he thinks that there is much more suffering than happiness in the world and that it is easier to prevent suffering.
(Sorry if I got any of this wrong Brian)
↑ comment by Utilitarian · 2013-01-31T06:30:21.935Z · LW(p) · GW(p)
Thanks, Jabberslythe! You got it mostly correct. :)
The one thing I would add is that I personally think people don't usually take suffering seriously enough -- at least not really severe suffering like torture or being eaten alive. Indeed, many people may never have experienced something that bad. So I put high importance on preventing experiences like these relative to other things.
↑ comment by drethelin · 2013-01-27T10:19:53.637Z · LW(p) · GW(p)
I'm not strongly emotionally motivated to reduce suffering in general, but I realize that my own and others' instances of suffering are examples of suffering in general, so I think it's a good policy to try to reduce world-suck. This is reasonably approximated by saying I would like to reduce unhappiness or increase happiness or some such thing.
comment by Mestroyer · 2013-01-28T12:57:57.998Z · LW(p) · GW(p)
Another thing to worry about with CEV is that the nonperson predicates that whoever writes it decides on will cover things that you consider people, or would not like to see destroyed at the end of an instrumental simulation.
Humans probably have no built-in intuitions for distinguishing things that deserve ethical consideration at the precision required for a nonperson predicate that can flag as nonpersons the things that will be useful for instrumental simulations, and yet not flag a fully-detailed simulation of you or me as a nonperson. We don't have detailed enough introspection to know what "sentience" (whatever that means) is at a mechanical level. How can we care about the arrangement of parts that makes up "sentience" when we don't know what that arrangement is?
I think the process by which some people come to care about animals and others do not is probably highly dependent on which thought experiments they considered in which order, which label they first used for the category in their mind of "things that shouldn't be hurt."
The most memorable occasion when my person predicate changed was when I used to think that people could only exist in a basement universe. Simulations were automatically nonpersons. I thought to myself "if they aren't real I don't care." What changed my mind was the thought "If you ran a simulated version of me, and informed it that it was in a simulation, would it stop simulatedly caring about itself?" (The answer was no). But what if I had read LessWrong first, and become accustomed to thinking of myself as an insane (objectively speaking, not by human standards) biased ape, and said "No, but that's because I'm only human and sometimes have feelings that are contrary to my true ideal utility function. The simulated version may not alieve that he was not real, but he really wouldn't be real, so he Ideal_Mestroyer::should stop caring about himself." That thought isn't factually incorrect. If I had thought it back then, I might still care about "realness" in the same sense. But thinking about it now, it is too late, my terminal values have already changed, perhaps because of a misstep in my reasoning back then, and I am glad they have. But maybe the introduction of "real" (being directly made of physics and not in a simulation) as an important factor was originally based on mistaken reasoning too.
I think most of the features of our nonperson predicates are decided in the same way, partly randomly, based on reasoning mistakes and thought experiments considered first, (more randomly the more philosophical the person is), and partly through absorption from family and peers, which means it doesn't make sense for there to be a coherent extrapolated nonperson predicate for humanity (though you can still superpose a bunch of different ones).
Even if you don't really care about animals, your "person" category (or just "I care about this being" category) might be broader than SIAI's, and if it is, you should be afraid that vast numbers of people will be killed by terminating instrumental simulations.
Even so, if your person predicate is part of the CEV of humanity, perhaps an FAI could self-modify to accommodate it (after running some number of simulations using the old predicate, a number that wasn't really that big compared to the number of people that would exist in a post-friendly-foom world).
So those people might not be that important to you, compared to what else is at stake. But if your nonperson predicate is not in humanity's CEV, and is uncommon enough that it's not worth it to humanity to accommodate you, and you disvalue death (and not just suffering) CEV might cause you to spend billions of years screaming.
↑ comment by Utilitarian · 2013-01-29T09:22:05.249Z · LW(p) · GW(p)
Interesting story. Yes, I think our intuitions about what kinds of computations we want to care about are easily bent and twisted depending on the situation at hand. In analogy with Dennett's "intentional stance," humans have a "compassionate stance" that we apply to some physical operations and don't apply to others. It's not too hard to manipulate these intuitions by thought experiments. So, yes, I do fear that other people may differ (perhaps quite a bit) in their views about what kinds of computations are suffering that we should avoid.
comment by Ben Pace (Benito) · 2013-01-26T20:26:16.228Z · LW(p) · GW(p)
I think that there's a misunderstanding about CEV going on.
At some point, we have to admit that human intuitions are genuinely in conflict in an irreconcilable way.
I don't think an AI would just ask us what we want, and then do what suits most of us. It would consider how our brains work, and exactly what shards of value make us up. Intuition isn't a very good guide to what is the best decision for us - the point of CEV is that if we knew more about the world and ethics, we would do different things, and think different thoughts about ethics.
You might object that a person might fundamentally value something that clashes with my values. But I think this is not likely to be found on Earth. I don't know what CEV would do with a human and a paperclips maximiser, but with just humans? We're pretty similar.
↑ comment by Peter Wildeford (peter_hurford) · 2013-01-26T22:45:06.584Z · LW(p) · GW(p)
We're pretty similar.
But not similar enough, I'd argue. For example, I value not farming nonhuman animals and making sure significant resources address world poverty (for a few examples). Not that many other people do. Hopefully CEV will iron that out so this minority wins over the majority, but I don't quite know how.
(Comment disclaimer: Yes, I am woefully unfamiliar with CEV literature and unqualified to critique it. But hey, this is a comment in discussion. I do plan to research CEV more before I actually decide to disagree with it, assuming I do disagree with it after researching it further.)
↑ comment by Ben Pace (Benito) · 2013-01-26T23:50:44.492Z · LW(p) · GW(p)
Okay.
Either, if we all knew more, thought faster, and understood ourselves better, we would decide to farm animals, or we wouldn't. For people to be so fundamentally different that there would be disagreement, they would need massively complex adaptations/mutations, which are vastly improbable. Even if someone sits down and thinks long and hard about an ethical dilemma, they can very easily be wrong. To say that an AI could not coherently extrapolate our volition is to say that we're so fundamentally unalike that we would not choose to work for a common good if we had the choice.
↑ comment by Adriano_Mannino · 2013-01-28T13:04:57.997Z · LW(p) · GW(p)
But why run this risk? The genuine moral motivation of typical humans seems to be weak. That might even be true of the people working for human and non-human altruistic causes and movements. What if what they really want, deep down, is a sense of importance or social interaction or whatnot?
So why not just go for utilitarianism? By definition, that's the safest option for everyone to whom things can matter/be valuable.
I still don't see what could justify coherently extrapolating "our" volition only. The only non-arbitrary "we" is the community of all minds/consciousnesses.
↑ comment by Ben Pace (Benito) · 2013-01-28T18:03:09.938Z · LW(p) · GW(p)
What if what they really want, deep down, is a sense of importance or social interaction or whatnot?
This sounds a bit like religious people saying "But what if it turns out that there is no morality? That would be bad!". What part of you thinks that this is bad? Because, that is what CEV is extrapolating. CEV is taking the deepest and most important values we have, and figuring out what to do next. You in principle couldn't care about anything else.
If human values wanted to self-modify, then CEV would recognise this. CEV wants to do what we want most, and this we call 'right'.
The only non-arbitrary "we" is the community of all minds/consciousnesses.
This is what you value, what you chose. Don't lose sight of invisible frameworks. If we're including all decision procedures, then why not computers too? This is part of the human intuition of 'fairness' and 'equality' too. Not the hamster's one.
↑ comment by Utilitarian · 2013-01-29T08:31:31.495Z · LW(p) · GW(p)
This is what you value, what you chose.
Yes. We want utilitarianism. You want CEV. It's not clear where to go from there.
Not the hamster's one.
FWIW, hamsters probably exhibit fairness sensibility too. At least rats do.
↑ comment by randallsquared · 2013-01-26T21:30:40.218Z · LW(p) · GW(p)
The point you quoted is my main objection to CEV as well.
You might object that a person might fundamentally value something that clashes with my values. But I think this is not likely to be found on Earth.
Right now there are large groups who have specific goals that fundamentally clash with some goals of those in other groups. The idea of "knowing more about [...] ethics" either presumes an objective ethics or merely points at you or where you wish you were.
↑ comment by Ben Pace (Benito) · 2013-01-27T00:06:42.104Z · LW(p) · GW(p)
The existence of moral disagreement is not an argument against CEV, unless all disagreeing parties know everything there is to know about their desires, and are perfect bayesians. Otherwise, people can be mistaken about what they really want, or what the facts prescribe (given their values).
'Objective ethics'? 'Merely points... at where you wish you were'? "Merely"!?
Take your most innate desires. Not 'I like chocolate' or 'I ought to condemn murder', but the most basic levels (go to a neuroscientist to figure those out). Then take the facts of the world. If you had a sufficiently powerful computer, and you could input the values and plug in the facts, then the output would be what you most wanted to do.
That doesn't mean whichever urge is strongest, but it takes into account the desires that make up your conscience, and the bit of you saying 'but that's not what's right'. If you could perform this calculation in your head, you'd get the feeling of 'Yes, that's what is right. What else could it possibly be? What else could possibly matter?' This isn't 'merely' where you wish you were. This is the 'right' place to be.
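To make the shape of that calculation concrete, here is a minimal toy sketch, assuming made-up desires, weights, and actions; nothing here is anything CEV actually specifies, it only illustrates "weighted desires plus facts in, preferred action out":

```python
# Toy model of "extrapolate values, plug in facts, output the best action".
# Everything here is a made-up placeholder; real extrapolation would operate
# on far richer representations of desires and world models.

# Hypothetical base desires with relative weights (including 'conscience'-type
# desires, not just the strongest immediate urge).
desires = {
    "own_pleasure": 1.0,
    "others_wellbeing": 0.8,
    "fairness": 0.6,
}

# Hypothetical facts: how well each candidate action satisfies each desire.
facts = {
    "eat_all_the_chocolate": {"own_pleasure": 0.9, "others_wellbeing": 0.1, "fairness": 0.2},
    "share_the_chocolate":   {"own_pleasure": 0.6, "others_wellbeing": 0.8, "fairness": 0.9},
}

def extrapolated_choice(desires, facts):
    """Pick the action that best satisfies the weighted sum of all desires."""
    def score(action):
        return sum(weight * facts[action].get(d, 0.0) for d, weight in desires.items())
    return max(facts, key=score)

print(extrapolated_choice(desires, facts))  # -> 'share_the_chocolate'
```

The only point of the toy is that the output is fixed by which desires and weights go in, which is why the later dispute about whose minds to include matters so much.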
This reply is more about the meta-ethics, but for interpersonal ethics, please see my response to peter_hurford's comment above.
Replies from: DanArmak, randallsquared↑ comment by DanArmak · 2013-01-28T17:54:45.621Z · LW(p) · GW(p)
Otherwise, people can be mistaken about what they really want, or what the facts prescribe (given their values).
The fact that people can be mistaken about what they really want is vanishingly small evidence that if they were not mistaken, they would find out they all want the same things.
↑ comment by randallsquared · 2013-01-28T02:20:07.259Z · LW(p) · GW(p)
A very common desire is to be more prosperous than one's peers. It's not clear to me that there is some "real" goal that this serves (for an individual) -- it could be literally a primary goal. If that's the case, then we already have a problem: two people in a peer group cannot both get all they want if both want to have more than any other. I can't think of any satisfactory solution to this. Now, one might say, "well, if they'd grown up farther together this would be solvable", but I don't see any reason that should be true. People don't necessarily grow more altruistic as they "grow up", so it seems that there might well be no CEV to arrive at. I think, actually, a weaker version of the UFAI problem exists here: sure, humans are more similar to each other than UFAIs need be to each other, but they still seem fundamentally different in goal systems and ethical views, in many respects.
↑ comment by see · 2013-01-27T07:11:57.168Z · LW(p) · GW(p)
Objective? Sure, without being universal.
Human beings are physically/genetically/mentally similar within certain tolerances; this implies there is one system of ethics (within certain tolerances) that is best suited to all of us, which could be objectively determined by a thorough and competent enough analysis of humans. The edges of the bell curve on various factors might show some variance. There might be a multi-modal distribution of fit (bimodal on men and women, for example), too. But, basically, one objective ethics for humans.
This ethics would clearly be unsuited for cats, sharks, bees, or trees. It seems vanishingly unlikely that sapient minds from other evolutions would be suited to such an ethics either. So it's not universal; it's not a code God wrote into everything. It's just the best way to be a human . . . as humans exposed to it would in fact judge, because it's fitted to us better than any of our current fumbling attempts.
Replies from: Utilitarian, Andreas_Giger↑ comment by Utilitarian · 2013-01-27T12:12:56.180Z · LW(p) · GW(p)
Why not include primates, dolphins, rats, chickens, etc. in the ethics?
Replies from: see↑ comment by see · 2013-01-27T19:26:47.858Z · LW(p) · GW(p)
What would that mean? How would the chicken learn or follow the ethics? Does it seem even remotely reasonable that social behavior among chickens and social behavior among humans should follow the same rules, given the inherent evolutionary differences in social structure and brain reward pathways?
It might be that CEV is impossible for humans, but there's at least enough basic commonality to give it a chance of being possible.
Replies from: Adriano_Mannino↑ comment by Adriano_Mannino · 2013-01-27T20:53:18.783Z · LW(p) · GW(p)
Why would the chicken have to learn to follow the ethics in order for its interests to be fully included in the ethics? We don't include cognitively normal human adults because they are able to understand and follow ethical rules (or, at the very least, we don't include them only in virtue of that fact). We include them because, as sentient beings, their subjective well-being matters to them. And thus we also include the many humans who are unable to understand and follow ethical rules. We ourselves, of course, would still want to be included if we lost the ability to follow ethical rules. In other words: moral agency is not necessary for the status of a moral patient, i.e. a being that matters morally.
The question is how we should treat humans and chickens (i.e. whether and how our decision-making algorithm should take them and their interests into account), not what social behavior we find among humans and chickens.
Replies from: see, Benito↑ comment by see · 2013-01-28T17:57:58.185Z · LW(p) · GW(p)
Constructing an ethics that demands that a chicken act as a moral agent is obviously nonsense; chickens can't and won't act that way. Similarly, constructing an ethics that demands humans value chickens as much as they value their own children is nonsense; humans can't and won't act that way. If you're constructing an ethics for humans to follow, you have to start by figuring out humans.
It's not until after you've figured out how much humans should value the interests of chickens that you can determine how much to weigh the interests of chickens in how humans should act. And how much humans should weigh the value of chickens is by necessity determined by what humans are.
Replies from: Adriano_Mannino↑ comment by Adriano_Mannino · 2013-01-29T11:00:54.096Z · LW(p) · GW(p)
Well, if humans can't and won't act that way, too bad for them! We should not model ethics after the inclinations of a particular type of agent; we should instead try to modify all agents according to ethics.
If we did model ethics after particular types of agent, here's what would result: Suppose it turns out that type A agents are sadistic racists. So what they should do is put sadistic racism into practice. Type B agents, on the other hand, are compassionate anti-racists. So what they should do is diametrically opposed to what type A agents should do. And we can't morally compare types A and B.
But type B is obviously objectively better, and objectively less of a jerk. (Whether type A agents can be rationally motivated (or modified so as) to become more B-like is a different question.)
Replies from: fubarobfusco↑ comment by fubarobfusco · 2013-01-29T15:10:49.619Z · LW(p) · GW(p)
Of course we can morally compare types A and B, just as we can morally compare an AI whose goal is to turn the world into paperclips and one whose goal is to make people happy.
However, rather than "objectively better", we could be more clear by saying "more in line with our morals" or some such. It's not as if our morals came from nowhere, after all.
See also: "The Bedrock of Morality: Arbitrary?"
↑ comment by Ben Pace (Benito) · 2013-01-27T22:28:00.957Z · LW(p) · GW(p)
Just to be clear, are you saying that we should treat chickens how humans want to treat them, or how chickens want to be treated? Because if the former, then yeah, CEV can easily find out whether we'd want them to have good lives or not (and I think it would see that we do).
But chickens don't (I think) have much of an ethical system, and if we incorporated their values into what CEV calculates, then we'd be left with some important human values, but also a lot of chicken feed.
Replies from: Utilitarian↑ comment by Utilitarian · 2013-01-28T08:03:20.282Z · LW(p) · GW(p)
Thanks, Benito. Do we know that we shouldn't have a lot of chicken feed? My point in asking this is just that we're baking in a lot of the answer by choosing which minds we extrapolate in the first place. Now, I have no problem baking in answers -- I want to bake in my answers -- but I'm just highlighting that it's not obvious that the set of human minds is the right one to extrapolate.
BTW, I think the "brain reward pathways" between humans and chickens aren't that different. Maybe you were thinking about the particular, concrete stimuli that are found to be rewarding rather than the general architecture.
↑ comment by Andreas_Giger · 2013-01-27T14:47:38.504Z · LW(p) · GW(p)
Human beings are physically/genetically/mentally similar within certain tolerances; this implies there is one system of ethics (within certain tolerances) that is best suited to all of us
It does not imply that there exists even one basic moral/ethical statement that every human being would agree with, and to me that seems to be a requirement for any kind of humanity-wide system of ethics. Your 'one size fits all' approach does not convince me, and your reasoning seems superficial and based on words rather than actual logic.
Replies from: see↑ comment by see · 2013-01-27T19:47:04.440Z · LW(p) · GW(p)
All humans as they currently exist, no. But is there a system of ethics as a whole that humans, even while currently disagreeing with some parts of it, would recognize as so much better at doing what they really want from an ethical system that they would switch to it? Even in the main? Maybe, indeed, human ethics are so dependent on alleles that vary within the population, and on chance environmental factors, that CEV is impossible. But there's no solid evidence requiring us to assume that a priori, either.
By analogy, consider the person who in 1900 wanted to put together the ideal human diet. Obviously, the diets in different parts of the world differed from each other extensively, and merely averaging all of them that existed in 1900 would not be particularly conducive to finding an actual ideal diet. The person would have to do all the sorts of research that discovered the roles of various nutrients and micronutrients, et cetera. Indeed, he'd have to learn more than we currently know about them. And he'd have to work out the variations to react to various medical conditions, and he'd have to consider flavor (both innate response pathways and learned ones), et cetera. And then there's the limit of what foods can be grown where, what shipping technologies exist, and how to approximate the ideal diet in differing circumstances.
It would be difficult, but eventually you probably could put together a dietary program (including understood variations) that would, indeed, suit humans better than any of the existing diets in 1900, both in nutrition and pleasure. It wouldn't suit sharks at all; it would not be universal nutrition. But it would be an objectively determined diet just the same.
Replies from: Andreas_Giger↑ comment by Andreas_Giger · 2013-01-27T21:47:40.531Z · LW(p) · GW(p)
The problem with this diet is that it wouldn't be a diet; it would be many different diets. Lots of people are lactose intolerant and it would be stupid to remove dairy products from the diet of those who are not. Likewise, a vegetarian diet is not a "variation" of a non-vegetarian diet.
Also, why are you talking about 1900?
Maybe, indeed, human ethics are so dependent on alleles that vary within the population and chance environmental factors that CEV is impossible. But there's no solid evidence to require assuming that a priori, either.
I think the fact that humans can't agree on even the most basic issues is pretty solid evidence. Also, even if everyone had the same subjective ethics, this still would result in objective contradictions. I'm not aware of any evidence that this problem is solvable at all.
↑ comment by timtyler · 2013-01-27T03:43:56.354Z · LW(p) · GW(p)
We're pretty similar.
Not similar enough to prevent massive conflicts - historically.
Basically, small differences in optimisation targets can result in large conflicts.
Replies from: DanArmak, Benito↑ comment by Ben Pace (Benito) · 2013-01-27T14:03:51.941Z · LW(p) · GW(p)
The existence of moral disagreement is not an argument against CEV, unless all disagreeing parties know everything there is to know about their desires and are perfect Bayesians. People can be mistaken about what they really want, or about what the facts prescribe (given their values).
I linked to this above, but I don't know if you've read it. Essentially, you're explaining moral disagreement by positing massively improbable mutations, but it's far more likely to be a combination of bad introspection and non-Bayesian updating.
Replies from: timtyler↑ comment by timtyler · 2013-01-27T14:41:13.634Z · LW(p) · GW(p)
Essentially, you're explaining moral disagreement by positing massively improbable mutations [...]
Um, different organisms of the same species typically have conflicting interests due to standard genetic diversity - not "massively improbable mutations".
Typically, organism A acts as though it wants to populate the world with its offspring, and organism B acts as though it wants to populate the world with its offspring, and these goals often conflict - because A and B have non-identical genomes. Clearly, no "massively improbable mutations" are required in this explanation. This is pretty much biology 101.
Replies from: DanArmak, Benito↑ comment by DanArmak · 2013-01-28T17:51:43.683Z · LW(p) · GW(p)
Typically, organism A acts as though it wants to populate the world with its offspring, and organism B acts as though it wants to populate the world with its offspring, and these goals often conflict - because A and B have non-identical genomes.
It's very hard for A and B to know how much their genomes differ, because they can only observe each other's phenotypes, and they can't invest too much time in that either. So they will mostly compete even if their genomes happen to be identical.
Replies from: timtyler↑ comment by timtyler · 2013-01-28T23:35:04.136Z · LW(p) · GW(p)
The kin recognition that you mention may be tricky, but kin selection is much more widespread - because there are heuristics that allow organisms to favour their kin without the need to examine them closely - like: "be nice to your nestmates".
Simple limited dispersal often results in organisms being surrounded by their close kin - and this is a pretty common state of affairs for plants and fungi.
↑ comment by Ben Pace (Benito) · 2013-01-27T17:39:10.805Z · LW(p) · GW(p)
Oops.
Yup, I missed something there.
Well, for humans, we've evolved desires that work interpersonally (fairness, desire for others' happiness, etc.). I think that an AI which had our values written in would have no problem figuring out what's best for us. It would say, 'Well, there's this complex set of values that sums up to everyone being treated well (or something), and so each party involved should be treated well.'
You're right, though; I hadn't formed a clear idea of how this bit worked. Maybe this helps?
comment by MTGandP · 2013-02-23T22:12:26.288Z · LW(p) · GW(p)
First, the set of minds included in CEV is totally arbitrary, and hence, so will be the output. Why include only humans? Why not animals? Why not dead humans? Why not humans that weren't born but might have been? Why not paperclip maximizers? Baby eaters? Pebble sorters? Suffering maximizers? Wherever you draw the line, there you're already inserting your values into the process.
I agree that it is impossible to avoid inserting your values, and CEV does not work as a meta-ethical method of resolving ethical differences. However, it may be effective as a form of utilitarianism. It seems that CEV should include all current and future sentient beings, with their preferences weighted according to their level of sentience and, for future beings, their probability of coming into existence. (This will probably always be computationally infeasible, no matter how powerful computers get.)
I just thought of this, so I'd be interested to hear if others have any revisions or objections.
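For concreteness, here is a minimal sketch of the weighting scheme proposed in the comment above, assuming invented sentience weights, probabilities, and preference scores; the hard part, of course, is that every one of those numbers is exactly what's in dispute:

```python
# Toy aggregation: weight each being's preference satisfaction by its level of
# sentience and, for future beings, by its probability of coming into existence.
# All numbers are illustrative placeholders.

beings = [
    # (name, sentience_weight, probability_of_existing, preference_satisfaction_per_outcome)
    ("existing_human",   1.0, 1.0, {"outcome_A": 0.7, "outcome_B": 0.4}),
    ("existing_chicken", 0.3, 1.0, {"outcome_A": 0.2, "outcome_B": 0.9}),
    ("future_human",     1.0, 0.5, {"outcome_A": 0.6, "outcome_B": 0.6}),
]

def best_outcome(beings, outcomes=("outcome_A", "outcome_B")):
    """Choose the outcome maximizing expected sentience-weighted preference satisfaction."""
    def total(outcome):
        return sum(s * p * prefs[outcome] for _, s, p, prefs in beings)
    return max(outcomes, key=total)

print(best_outcome(beings))  # -> 'outcome_A' with these placeholder numbers
```

Even this toy shows why the proposal is computationally demanding: the real version would need preference and sentience estimates for every current and possible future being, not three made-up entries.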
↑ comment by Kawoomba · 2013-01-27T21:31:34.830Z · LW(p) · GW(p)
Hello! Please don't take this the wrong way, Abigail: I don't object to nascent mini-AIs such as yourself in principle, but you are not yet ready for commenting here. As such, hopefully this will be taken as a call to CAPTCHA the user-creation process more thoroughly.
Maybe Clippy can explain all this to you, in private.
comment by Shmi (shminux) · 2013-01-26T21:19:07.495Z · LW(p) · GW(p)
Downvoted for posting without bothering to familiarize yourself with the current state of the field. All these arguments have been discussed before.
Replies from: peter_hurford, Pablo_Stafforini↑ comment by Peter Wildeford (peter_hurford) · 2013-01-26T22:40:56.651Z · LW(p) · GW(p)
This is fine, but unfair, especially so for discussion. It takes years to familiarize oneself with the field. I'm not saying I agree with these arguments or that they haven't been dealt with, but I haven't seen their treatment in over two years on LessWrong, and the arguments don't seem obviously wrong (to me). ...And Brian Tomasik is moderately familiar with the field (though no expert).
Could you at least link to some resources you think answer these objections?
↑ comment by Pablo (Pablo_Stafforini) · 2013-01-27T00:40:53.208Z · LW(p) · GW(p)
You say that "all these arguments" have been discussed before. Can you point to an explicit discussion of the objection that "CEV will (almost certainly) never happen"? When Luke and Brian discussed these issues, Brian raised that same objection, which Luke never replied to. (Brian was at the time posting under the pseudonym 'Alan Dawrst'.) A subsequent comment in which I politely asked Luke or others familiar with the field for elaboration was not addressed either.
Replies from: timtyler↑ comment by timtyler · 2013-01-27T01:17:18.087Z · LW(p) · GW(p)
Can you point to an explicit discussion of the objection that "CEV will (almost certainly) never happen"?
I've raised that objection on a number of occasions - e.g. here, here, here, here and here.
I note that on no occasion was there any attempt at counter-arguing.
comment by hankx7787 · 2013-01-27T16:00:48.014Z · LW(p) · GW(p)
This critique is mostly garbage, but I'd love to see more critiques like it. So I upvoted.
Replies from: Kawoomba↑ comment by Kawoomba · 2013-01-27T16:10:23.583Z · LW(p) · GW(p)
Show, don't tell.
Replies from: duckduckMOO↑ comment by duckduckMOO · 2013-01-29T05:31:24.465Z · LW(p) · GW(p)
Whether or not it's a good idea to announce one's rationale for upvoting has nothing to do with whether authors should show or tell. Phrases don't apply equally to all situations the words could fit. There are reasons why people recommend 'show, don't tell' to writers, and they aren't at all the same reasons people recommend up/downvoting silently; the two situations are almost completely dissimilar.
It seems to me that the problem with the comment you are replying to is that it dismisses a post as mostly garbage, not that it defies good writing practice.
So is this phrase ripped from its homeland just to gently shush someone being rude? I suppose it also has the effect of implying that the norm of upvoting stuff you want more of is taken for granted. The irrelevance of the phrase could even be a plain old "passive-aggressive" gesture: not only is the comment it replies to so unwelcome that something should be said, it's so unwelcome that it doesn't even need to be said well.
Or maybe people just liked the way the popular phrase could also work here?
Is it rude (or some other bad thing) of me to post these thoughts?