Comments

Comment by Relenzo on Ethical Injunctions · 2016-11-04T17:09:26.222Z · LW · GW

I understand why the notions exist--I was trying to address the question of which explainable moral intuitions we should keep as terminal values, and how we tell them apart from the ones we shouldn't.

But your first sentence is taken very much to heart, sir.

Maybe I'm being silly here, in hindsight. Certain intuitive desires are reducible to others, and some, like 'love/happiness/fun/etc.', are probably not. It feels like most people should immediately see that: yes, they want a given ethical injunction to be obeyed, but not as a fundamental/terminal value.

Then again--there are Catholic moralists, including, I think, some Catholics I know personally, who firmly believe that (for example) stealing is wrong because stealing is wrong. Not for any other reason. Not because it brings harm to the person being stolen from. If you bring up exceptions--'what about an orphan who will starve if they don't steal that bread?'--they argue that this doesn't count as stealing, not that it 'proves that stealing isn't really wrong.' For them, every exception is simply to be included as another fundamental rule. At least, that's the mindset, as far as I can tell. I saw the specific argument above being formulated for use against moral relativists, who were apparently out to destroy society by showing that different things were right for different people.

Even though this article is about AI, and even though we should not trust ourselves to know when we should be excepted from an injunction--this seems like a belief that might eventually have some negative real-world consequences. Consider, potentially, 'homosexuality is wrong because homosexuality is wrong'.

If I tried to tell any of these people how ethical injunctions can be explained as heuristics for achieving higher terminal values, I can already feel myself being accused of shuffling things around--of trying to convert goods into other, incompatible goods in order to justify some sinister, contradictory worldview.

If I brought up reductionism, it seems almost trivial--while I'm simulating their minds--for them to point out that no one has ever provably applied reduction to morals.

So maybe let me rephrase: is there any way I could talk them out of it?

Comment by Relenzo on Ethical Injunctions · 2016-11-02T21:50:59.401Z · LW · GW

I've been working my way through the Sequences--and I'm wondering a lot about this essay, in light of the previously-introduced notion of 'how do you decide what values, given to you by natural selection, you are going to keep?'

Could someone use the stances you develop here, EY, to argue for something like Aristotelian ethics? (Which, admittedly, I may not fully understand, but my basic idea is:)

'You chose to keep human life, human happiness, love, and learning as values in YOUR utility function,' says the objector, 'even though you know where they came from. You decided that you wanted them anyway. You did this because you had to start somewhere, and you claim that if you stripped away everything provided by natural selection you wouldn't be left with anything. Under the same logic, why can't I keep all the ethical injunctions as terminal values?

'Your explanation of where 'the ends do not justify the means' comes from is very clever and all. Your explanation of 'thou shalt not kill' is very clever. But so what if we know where they came from? If we know why nature selected for them, in our specific case? I'm no more obligated to dispose of them than I am to dispose of 'human happiness is good'.'

Is the counter-argument simply that this leads to a utility function you would call inconsistent?

Oh, and...sorry for commenting on all these dead threads...it's a pity I got here so late.

Comment by Relenzo on Zut Allais! · 2016-11-02T21:43:36.711Z · LW · GW

This appears to be (to my limited knowledge of what science knows) a well-known bias. But like most biases, I think I can imagine occasions when it serves as a heuristic.

The thought occurred to me because I play miniatures and card games--I see other commenters have also mentioned some games.

Let's say, for example, I have a pair of cards that both give me X of something--say each deals a certain amount of damage, for those familiar with these games. One card gives me 4 of that something. The other gives me 1-8 on a uniform random distribution--maybe a die roll.

Experienced players of these games will tell you that unless the random card gives you a considerably higher expected value, you should play the certain card. And empirical evidence would seem to suggest that they know what they're talking about, because these are the players who win games. What do they say if you ask them why? They say you can plan around the certain gain.

I think that notion is important here. If I have a gain that is certain, at least in any of these games, I can exploit it to its fullest potential--for a high final utility. I can lure my opponent into a trap because I know I can beat them; I can make an aggressive move that only works if I deal at least four damage--heck, the mere ability to trim down my informal minimax tree is no small gain in a situation like this.

Dealing 4 damage without exploiting it has a much smaller end payoff. And sure, I could try to exploit the random effect in just the same way--I'll get the same payoff if I win my roll. But if I TRY to exploit that gain and FAIL, I'll be punished severely. If you add in these values, it skews the decision matrix quite a bit. A rough sketch of what I mean is below.
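Here's a minimal sketch of that decision matrix in Python. The exploit payoffs (a +6 bonus for a trap that works, a -10 penalty for one that backfires) are numbers I made up purely for illustration, not values from any actual game:

```python
# A minimal sketch of the decision matrix above. The exploit payoffs are
# hypothetical numbers, chosen only to illustrate the shape of the argument.
from fractions import Fraction

def expected_value(outcomes):
    """Expected value of a list of (probability, payoff) pairs."""
    return sum(p * v for p, v in outcomes)

# Raw damage: the certain card deals 4; the random card deals 1-8 uniformly.
certain_raw = Fraction(4)
random_raw = expected_value([(Fraction(1, 8), d) for d in range(1, 9)])

# The 'exploit' plan: a line of play that only works if we deal at least 4
# damage. Suppose success is worth +6 extra utility and a failed attempt
# costs -10 (the trap backfires on us).
EXPLOIT_BONUS, EXPLOIT_PENALTY = 6, -10

certain_exploit = certain_raw + EXPLOIT_BONUS        # 4 damage is guaranteed
p_success = Fraction(5, 8)                           # rolls of 4-8 succeed
random_exploit = (random_raw + p_success * EXPLOIT_BONUS
                  + (1 - p_success) * EXPLOIT_PENALTY)

print(f"raw EV: certain={float(certain_raw)}, random={float(random_raw)}")
print(f"exploit EV: certain={float(certain_exploit)}, random={float(random_exploit)}")
# raw EV: certain=4.0, random=4.5 -- the random card looks better in isolation
# exploit EV: certain=10.0, random=4.5 -- certainty wins once you plan around it
```

On these particular numbers the random card's extra upside is exactly cancelled by the risk of a failed exploit--that's the 'hidden' downside skewing the matrix.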

And none of this is to say that the gambling outcomes being used as examples above aren't what they seem to be. But I'm wondering if humans are bad at these decisions partly because the ancestral environment contained many examples of situations like the one I've described. Trying to exploit a hunting technique that MIGHT work could get you eaten by a bear--a high negative utility hidden in that matrix. And this could lead, after natural selection, to humans who account for such 'hidden' downsides even when they don't exist.

Comment by Relenzo on Pascal's Mugging: Tiny Probabilities of Vast Utilities · 2016-10-25T01:37:30.121Z · LW · GW

I think this answer contains something important--

Not so much an answer to the problem, but a clue to the reason WHY we intuitively, as humans, know to respond in a way that seems un-mathematical.

It seems like a Game Theory problem to me. Here, we're calling the opponent's bluff. If we make the decision that SEEMINGLY MAXIMIZES OUR UTILITY, then according to game theory we're setting ourselves up for a world of hurt: an indefinite supply of situations where we can be taken advantage of. Game theory already contains lots of situations where there are reasons to take actions that seemingly do not maximize your own utility. A toy illustration of the exploitability point is below.
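Here is a toy Python sketch of that exploitability intuition--not a solution to Pascal's Mugging. The cost, the claimed payoff, and the probability floor are all numbers I invented for illustration:

```python
# A toy model of why 'pay whenever the stated EV is positive' is exploitable.
# All numbers here (cost, claimed payoff, probability floor) are hypothetical.

def naive_agent(cost, claimed_payoff, claimed_prob):
    """Pays whenever the mugger's stated expected value exceeds the cost."""
    return claimed_prob * claimed_payoff > cost

def bluff_calling_agent(cost, claimed_payoff, claimed_prob, floor=0.01):
    """Refuses any claim whose probability it can't distinguish from a bluff.
    The fixed floor is a crude stand-in for whatever the right policy is."""
    return claimed_prob >= floor and claimed_prob * claimed_payoff > cost

# A mugger can always invent a payoff large enough to swamp any probability,
# so the bluff costs nothing to make -- and word gets around.
muggings = [(5, 3 ** 100, 1e-20)] * 1000   # (cost, claimed payoff, probability)

naive_loss = sum(c for c, v, p in muggings if naive_agent(c, v, p))
wary_loss = sum(c for c, v, p in muggings if bluff_calling_agent(c, v, p))

print(naive_loss, wary_loss)   # 5000 0 -- the naive agent gets mugged every time
```

The point isn't the particular floor; it's that an agent whose policy is known to pay out on unverifiable claims invites an endless supply of muggers--the game-theoretic 'world of hurt' above.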