Computational Morality (Part 4) - Consequentialism

post by David Cooper · 2018-04-26T23:58:40.689Z · LW · GW · 3 comments

I suggested in part 3 that all the best proposals may be converging on the same destination, and that we might be able to use my method of calculating morality to help unify the best ones (and reject the hopeless ones). So, in the absence of a league table of the best proposals, we'll just have to examine them as they turn up and hope that we encounter all the ones that matter. In the comments under Part 1, a link was provided to a page which provides a good starting point, and I'm going to use it to show where people have been making fundamental errors which have driven them in ill-advised directions: https://plato.stanford.edu/entries/consequentialism/ - it may be worth keeping it open in another tab as you read this.

What do we see on that page? Well, let's start at section 3. In the second paragraph we see an objection to the idea of an unsophisticated game being regarded as just as good as highly intellectual poetry. Now, such poetry is outside of my experience, but we can substitute a play by Shakespeare, such as King Lear, while the unsophisticated game can be darts. A common man who likes spending time in the pub might well turn down an invitation to see this Shakespeare play in favour of a few games of darts with his friend, while a more sophisticated individual might scoff at that and head for the theatre. If both derive exactly the same amount of pleasure from their chosen activity (which includes all the intellectual satisfaction of appreciating the high-quality content of the play, and the intellectual satisfaction of calculating the score for the darts player), who's to say that the Shakespeare play is really the superior option? Another intellectual with no tolerance for cheap manipulation may rate the Shakespeare play very differently from the average intellectual, finding it deeply unsatisfying because of its extreme artificiality: two of the daughters express no love at all for anyone other than their father, while the third is so cold that she makes no attempt to explain that she loves her father just as much as her sisters do (or more). She is so unnaturally cold that she comes across as not caring at all. Right from the opening, the rest of the play can be predicted, all apart from the ending, which will either be happy (like a lightweight fairy tale) or tragic (to pose as profound). This second intellectual may scoff at the play and wish he had gone to play darts instead at the pub, where he could have engaged with real people.

Similarly, children at play can have the happiest time of their lives doing very lowly things which would bore them rigid later in life, but the pleasure is higher because there's so much tied up in it that's new to them. As adults, they may get great pleasure out of discussing philosophical issues and feel as if they're making a difference to the world, but is this really a better kind of pleasure? Later on still, they may realise that all those discussions were just rehashing old ground and that their conclusions were wayward - they had done more good when they were playing as children by making their parents feel happy. In calculating morality, we should not be misled by intellectual snobbery, but weigh up all the actual pleasures and satisfactions, the harm that may have been done in the process, and the useful results of any advances that were made - some philosophy does occasionally lead to big changes for the good (and some ideas can do inordinate amounts of harm). If you have accounted for all of that, there is simply no further adjustment to be made based on qualitative ratings for different kinds of activities.

The paragraphs that follow (on the page I linked to) make it clear that a lot of people tripped over this mistake, for example by considering the sadist's pleasure in whipping someone to be a worthless pleasure: it isn't worthless in itself, but is more than cancelled out by the harm being done in the process, so there's no need to misrepresent the positive feeling as a zero and then have to undo part of that cancellation to avoid introducing a bias.
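To make that bookkeeping concrete, here's a minimal sketch in Python (the numbers are invented purely for illustration and aren't meant to quantify anything real):

```python
# Hypothetical utility bookkeeping for the whipping example. The values are
# made up; the point is the structure of the sum, not the numbers themselves.
sadist_pleasure = 3       # the sadist's enjoyment, counted honestly
victim_harm = -50         # the suffering inflicted on the victim

# The approach criticised above: declare the pleasure "worthless", record it as
# zero, and then need further corrections to undo the bias that introduces.
biased_total = 0 + victim_harm

# The approach argued for here: count every feeling at face value and let the
# harm outweigh the pleasure on its own.
honest_total = sadist_pleasure + victim_harm

print(biased_total)   # -50
print(honest_total)   # -47: still clearly negative, with no zeroing required
```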

The Matrix comes up next (I haven't seen the film, but that shouldn't matter) and the argument is that pleasurable events in a virtual world are worth less than equivalent events in the real world, but the "real world" could be virtual too. You're only going to be disappointed if you come out of a virtual world into one that might be real and realise that a lot of events in that virtual world were not as you'd imagined, at which point a lot of the continuing satisfactions are gone. You could meet the love of your life in a virtual world and have the most magical children imaginable, then return to the "real" world and be left in absolute grief at their loss - you can never get back what you thought you had (unless you can erase your knowledge of the truth and jump back in). But you could live an entire life in a virtual world that's more satisfying than a real one. Indeed, the future of the universe is going to be cold and dark, so people will likely spend all their time in virtual worlds that recreate what we once had long ago, back at a time before all the big questions had been answered and when there were still things worth saying that might be new. It would be possible for everyone to live great lives where they believe they are a big hero that has solved some major problem, like designing AGI, and every stupid person in existence could live through that kind of story and get enormous pleasure out of doing so.

It doesn't matter if it's real or not. What matters is the experienced pleasure in all its forms. If you know that you're in a virtual universe, much of that pleasure will evaporate away and you may prefer to leave it in the hope that you can do something more real that generates greater satisfaction, but if you don't find that in reality, you may yet prefer to re-enter the virtual world. Children who are brought up with no freedom willingly take refuge in worlds that they know full well to be fake, because it's more satisfying than the reality of their captive lives. And would you really be happier to come out of such a game to find that reality is a cold, dark universe where all the stars have burned out and where people only survive in underground bunkers, clinging on to existence with the help of nuclear power? Of course, you would still be able to leave and re-enter the game between lives and switch in and out different memories as you do so, thereby maintaining your ability to make your own decisions about which reality to live in while always getting full satisfaction out of the virtual, but after doing that a few times, you may decide that you never need to know the truth again. The odds are that that's what we're already doing.

Even if this universe isn't virtual, we've still done the experiment to a large extent. You can live other people's lives in your imagination by reading books and watching films, and many people are obsessive about doing just that. Others dedicate their lives to making up stories, getting enormous pleasure and satisfaction out of things that aren't real. We keep returning to reality, and then keep stepping back into the fiction because pleasure from the fake feels as good as the same amount of pleasure from the real. Lies are harmful if the person lied to finds out that what they were led to believe to be real is actually fake, and we don't want to live a lie without at some point being able to know and to give permission, but if it's in our best interests to live a lie without ever knowing it's fake, then it is not immoral for us to be placed into that lie.

If you are looking at all the lives involved in a situation where some players live in virtual worlds (either knowing it or living a lie) while others live in a real world, and if you are going to have to live all those lives in turn, you can work out whether any decision related to what happens is right or wrong. If you can have a better time by making those lives more real and less virtual, you will make that decision, but just as you might enjoy reading books or watching films, you will likely be happy to spend a lot of your time living lies. You'll likely always want to be able to spend some of the time knowing the truth too, and unless that truth is too hard to bear, it would be immoral for others not to make that knowledge available to you.

Let's move on.

We soon encounter a bit about irresolvable moral dilemmas, but why clutter up the pursuit of morality with massive diversions based on errors of approach? If you make up lots of unnecessary rules which are wrong because they merely approximate parts of morality and you then wonder why they conflict, it's because they're wrong. If you apply my method to any such case, it can still leave you with the same dilemmas, but without any conflict of rules, and where you have such a dilemma you can simply make a random decision without it being wrong. These errors appear to have been made because many philosophers have been misled by such feelings as guilt, regret and remorse - they've been allowing an imperfect, evolved morality-guidance system to contaminate the search for absolute morality by trying to base rules upon such feelings. If someone feels guilty after running a child down with a car in a situation where the child shot out onto the road ahead on an out-of-control sledge, that guilt is the result of a response which evolved to inhibit the repetition of behaviours that might endanger relatives, but that response is blind to who is at fault and should not be part of any moral rule (although it does need to be included in the set of feelings classed as harm).

Then we have discussion of rules about it being wrong to lie and break promises, but we know straight away that these rules are bogus. It is fully moral to lie to a kidnapper and promise you won't try to escape. Even in a situation where you make a promise to help someone carry out some dangerous task knowing full well that you won't turn up to do so (and in the full knowledge that they'll start the task without you in the belief that you'll be there when needed, and that there's no turning back for them once they've started), it isn't immoral to break that promise if you know that a second person in a similar position depends on you to help with something more dangerous and has every expectation that you'll be there to help them (even if no actual promise has been made). What is immoral is not the breaking of the promise, but the making of it. We keep seeing philosophers making mistakes like this which lead to thousands of papers being published and argued about for decades (or centuries) as they build upon those errors. Again though, for those of us looking for moral control of AGI, this is good news because it repeatedly slashes the workload as we hunt for correct answers - we can simply ditch a whole host of proposed solutions whenever we see a mistake of this kind being made.

Next, here's a bit I need to quote:-

"Compare one outcome where most people are destitute but a few lucky people have extremely large amounts of goods with another outcome that contains slightly less total goods but where every person has nearly the same amount of goods. Egalitarian critics of classical utilitarianism argue that the latter outcome is better, so more than the total amount of good matters."

This is correct. Just counting the amount of goods doesn't tell you the whole story - you have to weigh up the suffering as well, and when you have inequality with no basis for some people having a lot more than others other than luck, there's harm there. (If anyone takes issue with that, or anything else, I will gladly expand on this in the comments.) You don't need to bolt egalitarianism onto utilitarianism to handle this correctly - you just need to do utilitarianism properly. This is another mistake that people have made time and time again, multiplying the number of proposed solutions for no good reason. We can prune out everything that's superfluous.
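As a rough illustration of why nothing needs to be bolted on, suppose (purely as an assumption for this sketch) that wellbeing rises with goods at a diminishing rate, so extra goods matter less to someone who already has plenty:

```python
import math

# Assumed, illustrative mapping from goods to wellbeing: logarithmic, so each
# extra unit of goods adds less wellbeing the more a person already has.
def wellbeing(goods):
    return math.log(goods)

# Outcome 1: a few lucky people have a great deal while most are destitute.
unequal = [100] * 5 + [1] * 95

# Outcome 2: slightly fewer goods in total, shared almost evenly.
total_goods = sum(unequal) * 0.95
equal = [total_goods / 100] * 100

print(round(sum(unequal), 1), round(sum(map(wellbeing, unequal)), 1))   # more goods, far less total wellbeing
print(round(sum(equal), 1), round(sum(map(wellbeing, equal)), 1))       # fewer goods, far more total wellbeing
```

Doing the sums over actual wellbeing rather than over goods is all that's needed to get the egalitarian-looking answer out of plain utilitarianism.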

Next we have a section about a rise in population leading to greater net utility despite quality of life going down for each individual, but is net utility really going up as the quality of life goes down? If the quality of life goes down a little, it may not be enough to notice, but the harm caused by any significant fall in quality of life for many people must at some point outweigh the rising positive component of utility relating to the increase in numbers of people, so where exactly is that point?

If you judge this through negative utilitarianism, you measure the harm and call a halt to the growth in population when the harm starts to go up for the average individual. Up to a point, the rise in population is not harmful as it creates more people to have fun interactions with and more ability to specialise in work to reduce costs and increase quantity and range of possessions, so the harm from being in a community that's too small is actually going down as the population rises. Beyond a certain level though, this begins to reverse, and the harm goes up, so it's easy to identify the point to stop. Note though that even if we don't give suffering greater weight than pleasure (and thereby switch to average utilitarianism), the point to stop population growth must be in exactly the same place because that's where the average component of change that can be classed as a gain starts to be outgunned by the average component of change that's classed as a loss.

But what about classic utilitarianism? Well, up to the point where everyone's life is improved by the addition of a new member of the population, no harm is being done to them by that addition that isn't cancelled out by the gains, but as soon as you reach the point where their quality of life begins to decline, you have that reduction multiplied by the size of the population acting as harm (which wasn't the case when the previous person was added to the population), but it isn't clear that this harm is initially sufficient to outweigh the extra pleasure in the system from the existence of the extra member. We may well have a conflict between these theories (and indeed it looks as if the harm to all the rest of the population has to go up quite a bit before it negates the extra happiness of the new individual), so how do we decide which is right and which is wrong if that's the case?
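To see how the two stopping points can differ, here's a rough numerical sketch; the quality-of-life curve is entirely an assumption of mine, chosen only so that quality rises while a bigger community adds benefits and then falls as overcrowding sets in:

```python
# Assumed, illustrative quality-of-life curve: rises with community size, peaks,
# then falls (eventually going negative) as resources per head are stretched.
def quality(n):
    x = n / 1000
    return 4 * x - x ** 3

def average_utility(n):
    return quality(n)          # average utilitarianism: per-person quality of life

def total_utility(n):
    return n * quality(n)      # classic utilitarianism: quality times headcount

best_for_average = max(range(1, 3000), key=average_utility)
best_for_total = max(range(1, 3000), key=total_utility)

print(best_for_average)   # about 1155 with this curve: stop where per-person quality peaks
print(best_for_total)     # about 1414: keep adding people despite falling per-person quality
```

With this particular made-up curve, classic utilitarianism keeps adding people well past the point where average quality starts to fall, which is exactly the tension described above.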

Well, the answer is to see how they compare against my method of computational morality. My approach must provide the right answer because it uses a method which guarantees that if it's better for me to live two thousand lives in poverty than a thousand lives in comfort, the right answer will be to live the two thousand lives (though clearly it won't be, because poverty drags down quality of life much too far for the extra lives to outweigh the losses). My method necessarily includes all factors and is guaranteed not to be hampered by any incorrect rules that are built into the foundations of the other approaches. The only thing that's currently missing from my method is the actual maths part of it, but that will be resolved by mathematicians who specialise in game theory rather than by philosophers. There will certainly either be a best outcome or a set of equally good best outcomes for you if you are going to be all the players involved, and mathematicians should be able to identify it.
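For what it's worth, the comparison at the heart of that test can be sketched roughly as follows - a hypothetical outline with invented scores, treating the combined experience of all the lives as a simple sum purely for illustration, which is exactly the part the game theorists will need to get right:

```python
# Two candidate outcomes, scored by the total experience of every life you would
# have to live in turn. The per-life scores are invented for illustration only.
outcomes = {
    "a thousand lives in comfort": [60] * 1000,
    "two thousand lives in poverty": [20] * 2000,
}

def combined_experience(lives):
    # Illustrative stand-in for "the best time out of all the lives you will live".
    return sum(lives)

best = max(outcomes, key=lambda name: combined_experience(outcomes[name]))
print(best)   # with these made-up scores, the thousand comfortable lives win
```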

I'll return to this in a moment with a similar case, but first I must comment on something else because we're reading down through the page I linked to and we need to get past this inadequate bit: "Unfortunately, negative utilitarianism also seems to imply that the government should painlessly kill everyone it can, since dead people feel no pain". That's an unfair dismissal, because any serious kind of negative utilitarianism recognises that some kinds of suffering can be outweighed by the pleasure that becomes available in return for being exposed to that suffering, so it's only the other kind of suffering that should be minimised.

So, with that out of the way, we now reach the most interesting bit of section 4, and that's the mention of the Mere Addition Paradox. This gives us something clearer to pick apart (and this is one of the big issues that I should have been pointed to right at the start - it's clear that just as we need a league table of proposals, we also need a league table of problems to tackle with the hardest ones at the top so that AGI system builders can get straight to the meat). As always, my system automatically handles this correctly, but let's take a proper look to see what's going on. We'll go by Wikipedia's entry on this ( https://en.wikipedia.org/wiki/Mere_addition_paradox ), though in case the key parts of it are edited into a different form in future, I'll give a description here:-

Population A might have 1000 people in it with a quality of life of 8, which we'll call Q8. Population A+ is a combination of 1000 people at Q8 (population A) plus another 1000 people at Q4 (population A', but note that I've made up that name for them myself and you won't find it in use elsewhere). Population B- is a combination of two lots of 1000 people which are both at Q7. Population B is 2000 people at Q7.

The distinction between group B and B- is that B- keeps the two lots of 1000 people apart, which should reduce their happiness a bit as they have fewer options for friends, but we're supposed to imagine that they're equally happy whether they're kept apart (as in B-) or merged (in B).
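For reference, here are the totals and averages those four populations give (a quick calculation from the numbers above):

```python
# The four populations from the Mere Addition Paradox, using the quality-of-life
# figures given above.
populations = {
    "A":  [8] * 1000,
    "A+": [8] * 1000 + [4] * 1000,
    "B-": [7] * 1000 + [7] * 1000,
    "B":  [7] * 2000,
}

for name, people in populations.items():
    print(name, sum(people), sum(people) / len(people))
# A:  total 8000,  average 8.0
# A+: total 12000, average 6.0
# B-: total 14000, average 7.0
# B:  total 14000, average 7.0
```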

"Parfit observes that i) A+ seems no worse than A. This is because the people in A are no worse-off in A+, while the additional people who exist in A+ are better off in A+ compared to A [where they simply wouldn't exist] (if it is stipulated that their lives are good enough that living them is better than not existing)."

Straight away we see a problem here though, because we can imagine that the extra 1000 people in A+ have fewer resources available to them, which is why their standard of living is lower. Their quality of life is only half that of the people in A, which means that the resources available to them must be greater than half of the amount available to the people in A. It's important to understand here that halving resources and reduction in quality of life do not change in proportion to each other - we know this because when it gets to the point where people have insufficient food to survive, quality of life becomes negative while resources remain positive. If you take the true relationship between resources and Q into account, you realise that there is an optimal population size for A (A being the original 1000), and an optimal population size for A' (this being the new 1000 members in A+), and an optimal population size for A+ (this being the 2000). Those optimal sizes could be worked out using my method where you have to live the lives of all the players in that population and get the best time out of all of them. If A is at that optimal population size, A' cannot be optimal. It's overpopulated. A is clearly better than A', and it's also important to realise that a fully integrated A+ (all 2000) with a fair sharing out of resources would be much better than an unintegrated A+ where the resources are not shared fairly. If you were to share the resources fairly for A+, quality of life for the 2000 might be the same as it is for B, i.e. Q7. Given that A is optimal, B cannot be as it is overpopulated, so B is worse than A, but note that this only applies if A does not have access to the resources of A'. If A has access to the resources of A' and the people of A' don't exist, then A is not optimal - it should grow its population to maximise quality of life.
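To put some (entirely made-up) numbers on the resources-versus-quality point, here's a sketch in which the mapping from resources per head to quality of life is an assumption of mine, chosen only because it lets quality go negative while resources stay positive, and in which "the best time out of all the lives" is again approximated by a simple product for illustration:

```python
import math

# Assumed, illustrative mapping: quality of life falls below zero once resources
# per person drop under a subsistence level, even though resources stay positive.
def quality_of_life(resources_per_person):
    subsistence = 2.0
    return 8 * math.log(resources_per_person / subsistence)

total_resources = 4000.0

def per_person_quality(n):
    return quality_of_life(total_resources / n)

# Score each candidate population size by people times quality of life, as a
# rough stand-in for living all the lives and adding up the experience.
best_n = max(range(100, 3000), key=lambda n: n * per_person_quality(n))
print(best_n, round(per_person_quality(best_n), 2))
# with this curve the optimum lands near 736 people at a quality of about 8
```

With this curve, reaching Q8 takes about 5.4 units of resources per person while reaching Q4 takes about 3.3 - more than half - which is the non-proportionality the argument above relies on. The particular numbers don't matter; the point is simply that an optimal population size exists for whatever resources are available.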

"Next, Parfit suggests that ii) B− seems better than A+. This is because B− has greater total and average happiness than A+."

Yes - that bit agrees with what I just said...

"Then, he notes that iii) B seems equally as good as B−, as the only difference between B− and B is that the two groups in B− are merged to form one group in B."

...and that's (more or less) right too.

"Together, these three comparisons entail that B is better than A. However, Parfit observes that when we directly compare A (a population with high average happiness) and B (a population with lower average happiness, but more total happiness because of its larger population), it may seem that B can be worse than A."

Well, the mistake was made in the first part of his analysis: the entire paradox is broken. Yet again we have seen a philosopher making a basic error which isn't picked up on by his opponents, and they all pile in to produce avalanches of papers based upon the error without any of them identifying the obvious fault.

I'll continue my commentary in the next post - there is a lot more on the page that I linked to which needs to be discussed and debugged.

3 comments


comment by TheWakalix · 2018-05-02T12:44:04.056Z · LW(p) · GW(p)
"Note though that even if we don't give suffering greater weight than pleasure (and thereby switch to average utilitarianism), the point to stop population growth must be in exactly the same place because that's where the average component of change that can be classed as a gain starts to be outgunned by the average component of change that's classed as a loss."

That's not how the math works out, actually. If you have found a point where the Benefit and Suffering curves (as determined by average utilitarianism) are such that the derivative of the sum of the curves is zero (or in other words, the derivative of one curve is equal to negative the derivative of the other curve), then multiplying one of the curves by some quantity not equal to one will make the derivatives no longer equal (assuming that the derivatives are not equal by virtue of being zero). This is because the derivative of a function times a constant is equal to the same product times the derivative of the function.

comment by TheWakalix · 2018-05-02T12:43:20.995Z · LW(p) · GW(p)
"Note though that even if we don't give suffering greater weight than pleasure (and thereby switch to average utilitarianism), the point to stop population growth must be in exactly the same place because that's where the average component of change that can be classed as a gain starts to be outgunned by the average component of change that's classed as a loss."

That's not how the math works out, actually. If you multiply the Suffering curve by a quantity, you also multiply its derivative by the same quantity. Let B(n) and S(n) be the benefit and suffering in the world when there are n people, according to average utilitarianism. As you have described it, negative utilitarianism multiplies S(n) by a quantity k greater than one, giving B(n) - k*S(n). Then d/dn(B(n) - k*S(n)) = B'(n) - k*S'(n), which is zero where B'(n) = k*S'(n) rather than where B'(n) = S'(n) (ignoring the trivial case where both derivatives are zero, which doesn't fit with your qualitative description of the curves), so the two stopping points will not in general be in the same place.

comment by TheWakalix · 2018-05-02T00:24:42.311Z · LW(p) · GW(p)
"And would you really be happier to come out of such a game to find that reality is a cold, dark universe where all the stars have burned out and where people only survive in underground bunkers, clinging on to existence with the help of nuclear power?"

Are you claiming that this is a likely outcome, or just an example?