Is moral duty/blame irrational because a person does only what they must?
post by benn@4efix.com · 2021-10-16T17:00:14.012Z · LW · GW · 6 comments
This is a question post.
I should note a few things from the start. I understand that there is much prewritten work available here, namely the Sequences, the Codex, and my favorite fanfic ever, HPMOR. I have tried to find and understand where any of these or any other prewritten works associated with LessWrong.com might have already addressed these questions. I am writing this, however, because either I did not find the answers I was looking for or I have not recognized them; either way, I ask for assistance.
Also, full disclosure, while I have spent the majority of the past three and a half decades (of my 53 total years) on my own exploring applied rationality and discussing it face to face with others in my life’s orbit, as of the last three years or so I have become drawn to the idea of making a book out of my conclusions so far, on the off chance that doing so could be helpful to even a single other person who comes across it. My time on this planet is limited; I’d like to leave some collected thoughts, the fruits of my time to date, to survive me – again, on the off chance that they can be helpful.
One of the realizations I’ve had more recently (largely because it was not an area of thought that interested me until recently) is the question of whether we have the option of making any other choices than the ones we actually do. This by some is called the question of free will, but I am choosing to largely avoid that phrase here in order to attempt to be much more specific, to wit:
When a sentient being (such as I or you) makes a decision, is it ever possible that they could have made a decision different from the one that they actually made? (Or, if speaking of a future decision, will make.)
I am imagining that my chain of thinking on this topic may well seem simplistic, even naïve to the minds found here, but here it is nonetheless, the best effort I have been able to make in applying reason to this question so far, in abbreviated form:
- In order to seek explanations for the patterns we see in reality, we must first embrace the idea that even when we don’t know the reasons why something has happened, there are still reasons why it happened. We must begin with the idea that explanations are possible in order to seek them at all, even if some turn out to be statistical in nature. Even when we cannot determine how something occurred, we still hold that it had a reason for occurring.
- Thus, we embrace that whatever happens in reality happens because of reality – because of the specific details and configuration of reality in the moment.
- Human decisions and choices are also things that happen in reality.
- Thus, human decisions and choices also happen due to the specific details and configuration of reality: both the reality of the environment that the person is confronted with when making the choice, and the reality of the full nature of the person making the choice at the time of doing so.
- Therefore, a person who makes choice A had to make that choice, for to have made any other choice the person would have either had to be different than they actually were, or the situation/environment would have had to be different than it actually was. Given the specific reality of both the person and the situation they are in when they make a choice, it stands to reason that the choice they do make is the choice they must make, else they would be making a different choice instead.
The above seems inescapably, even tautologically, true, despite my best efforts to find otherwise. When a sentient being makes any choice of any kind, the “rules of reality”, whatever they may be, dictate what that choice will be. This is not to say we know the rules well enough to predict the choice, nor that we will necessarily ever learn them well enough to make such a prediction. But we don’t need to be able to predict a future choice to know that the choice that will be made will be the one that reality dictates must be made.
It therefore becomes irrational to insist that people “ought” to have made any choice other than the one that they did, if it is indeed true that the choice a person actually makes in any circumstance is the only choice they are capable of making, given whatever the rules of reality actually are. How does it make sense to blame a person for not doing what they are incapable of doing?
If human choice works this way, as I think the above demonstrates, we cannot reasonably be any more upset with people for their “wrong” choices than we can be upset at a car that doesn’t function: neither has any choice in its response; both simply work how they must.
There was an illustration I read somewhere with this exact comparison: that holding people responsible for their choices is no different from Basil Fawlty warning his car that it better stop misbehaving and start working, and then giving his car a “well-deserved” thrashing with an umbrella when it failed to shape up.
Note: I am not saying that we should not visit consequences on those who do things that we dislike, merely that those consequences make more sense with a view to changing the future than to punishing the past. Imprisoning someone who enjoys murdering people makes sense if doing so reduces future murders, assuming that is one of our goals – regardless of whether or not we “blame” the murderer.
So this is where I am at. It seems to me that no one can have the capacity to do anything other than the rules of reality demand, and as such, whenever we make a decision or choice, it was the one we had to make given all the circumstances. Since we had no option of making any other decision, we cannot reasonably have any moral duty to do what we cannot do. Thus no person, no matter what they choose to do, can be reasonably told that they “ought” to have done any different, since they simply did not have that option.
And so I have concluded that there is no basis on which to judge others as “to blame” or as “morally wrong”. Thus punishment makes no sense in terms of being “deserved”, and neither does vengeance (except possibly in terms of the vengeance seeker seeking their own emotional pleasure/relief/closure).
The silver lining, if I am right, is I think threefold:
- This would more clearly delineate the difference between punishment and justice: whereas punishment makes no rational sense from a moral stance, justice still makes sense because justice is about influencing the future using the past only as data, but not living in the past.
- We would have to stop blaming both others and ourselves, and instead start asking better questions, like “if I don’t like what the other person did, what do I want to do about it?” Hopefully less raging and more pragmatics.
- We would have to finally embrace a practical morality. A preacher I discourse with asked me, given that I cannot be held to blame for any of my actions, what’s to stop me from pushing my own mother down the stairs. My answer was simple: I don’t do such things because that is not who I am, because doing things like that would distress me. And that was also the reason I didn’t do any of those things before I reached this conclusion.
(For people who don’t have such natural instincts, that is one reason we do need consequences/justice: both to disincentivize unwanted behaviors and to remove the ability of those who commit such acts to do so again in the future.)
One of the main reasons I am putting this all here is it seemed to me that the sequence on Free Will, as best as I could understand it, may have been disputing some of the elements above – although much of it was either not about these items directly or over my head (or both, perhaps). For all I know, the Free Will sequence is about something entirely different than what is above.
For instance, it seems to me that the Free Will sequence was focused a lot more on the question of “why does it seem to us that we have freedom of choice” rather than examining whether we actually do have it or not. And I didn’t see anything in that sequence that addressed any of my thoughts above – either because it doesn’t or because it went over my head and I may need help making it not do that.
So, where do we go from here? As far as I can see at this moment, it seems unavoidably and even factually correct to state that no being in reality can possibly have freedom to act in any way other than the rules of reality require, and those rules must thus determine what choice they must make in any given moment. Without a capability to have made any choices differently given the circumstances, we cannot rationally assign any moral duty or blame.
If this thinking is not correct, then please demonstrate to me why. I am, as always, trying to become less wrong.
Answers
There are lots of different ways to approach this question, but they do in fact all end up in the same place: it does make sense to assign blame for actions. This is not the same as saying that people should be punished for those actions, so let's set aside punishment, rehabilitation, deterrence, and other surrounding factors.
How does it make sense to blame a person for not doing what they are incapable of doing?
If I have an oven with a thermostat that stops working without warning one day, I can certainly blame the oven for burning my meal. Ovens of its model, make, and age are in general capable of maintaining the correct temperature, but this one was faulty. If it had functioned correctly, then my meal would not have been burned. It had no choice to do otherwise, but that doesn't stop it from being faulty. To a large extent, blaming people just means the same thing: proclaiming that their decision-making was faulty. If they did something immoral (or failed to even attempt to perform a moral duty), then it is their moral decision-making that has been demonstrated to be faulty.
If fixing this fault were as easy and as free of side effects as replacing the thermostat in an oven, then we probably would just do that. Failing that, being able to say "this person's moral decision-making was faulty", and consequently treating them differently from those who have not demonstrated faulty moral decision-making, makes sense.
If human behaviour is fully determined by the laws of the universe, then you have no choice in whether you assign moral blame or not, so it doesn't make sense to discuss whether we should or shouldn't do that.
↑ comment by Charlie Steiner · 2021-10-17T22:05:33.227Z · LW(p) · GW(p)
Yes, but the discussion is part of the way that we do it. Not only do we have to have the discussion, but the discussion makes sense the same way that Newton's laws make sense for billiard balls.
You wouldn't say "These billiard balls are just going to move around deterministically anyhow, so it doesn't make much sense for them to conserve momentum." :P
Reality has no duty to be a fair puzzle. In order to extract the understandable parts we might need to assume that reasons exist, but that doesn't mean that every part is understandable even in principle.
At the level of quantum mechanics it is not so clear cut that there are definite courses of events. At the level where the classical approximation is appropriate, the logic starts to work. But reality is not required to be classical.
If we have malfunctioning cars, we take them out of traffic and send them to the repair shop. We don't say "oh, you can't blame the cars" and let them stay in traffic to break down in the middle of the road. Deciding which cars should be recalled doesn't claim that a specific car should or would have worked differently, but it does make a call on what the road-legal standards are.
When we do take the attitude of steering the future and we have a problematic individual or circumstance, the question arises of what character/state we should steer it toward, i.e. what it ought to be (or, if it helps, "what it shall be"). If we held the stance that everything ought to do exactly what it does do, then we would never grow or develop any system. So one could try to translate "ought to X" statements as "I am going to modify you to do X".
First of all, notice how all the talk about predestination and fate doesn't change anything in our decision making process.
- Your honour, I may have killed all these kids, but I was bound to do it by the laws of the universe! It's unfair to punish me!
- Be that as it may, I'm bound to sentence you to life in prison by those same laws of the universe. It's useless to nag about it.
- But I was predestined to nag about it, so it's useless to ask me not to nag!
- And I'm fated to ask you to shut up. Also, we've already exceeded 3 levels of recursion, so take this man to prison!
I think this is a good sign that we are trying to answer the wrong question. Can we correct it?
If you've read the answer to free will, then you know how "couldness" is reduced in a deterministic framework. When we make a decision, we mark some outcomes as primitively reachable from our current position. Feeding forward through the decision tree, we mark more and more outcomes as reachable, due to their preconditions being reachable, and so on.
We can reduce "shouldness" in a similar way. We start from the state of the world that corresponds to our goals and values and mark its causes as leading to it. Feeding backward through the decision tree until we reach our current position, we get a chain of things that lead to our preferred world state.
These "couldness" and "shouldness" properties are an essential part of our decision-making algorithm, without which it wouldn't be able to work, and they are meaningful under determinism. They are real in the same sense that our decision making is real. People tend to mix this decision-making couldness and shouldness with metaphysical couldness and shouldness, and that's where a lot of the confusion comes from. But as soon as we understand that these are two completely different things, it all becomes clear.
If we didn't have this decision-making couldness and shouldness, then indeed it would be unreasonable to apply ethical categories like guilt or blame to us; it would be similar to blaming a rock. A rock can't make decisions: it doesn't execute any decision-making algorithm. However, we can still notice when a rock doesn't satisfy our needs or causes something we don't like.
But why not treat people's behaviour the same way? Why do we need to talk about blame, or shouldness at all, when we can just talk about causes and effects? Because it captures our values and allows us to dramatically improve our decision making with regard to them, saving a lot of computing power.
- Your honour, I may have killed all these kids, but their parents caused their deaths too! If they hadn't let their children play in this particular playground, I wouldn't have killed them! If they hadn't given birth to these children in the first place, no one could have killed them at all! If anything, it's the parents who should be judged here! They had many more opportunities to prevent the deaths of their children than I did!
- Be that as it may, our society values the right of people to have children, as well as the freedom of movement of its citizens. But it doesn't value the killing of random children for no reason. That's why it's considered a crime, that's why you are guilty of it, and that's why I'm sentencing you to life in prison.
In order to seek explanations for the patterns we see in reality, we must first embrace the idea that even when we don’t know the reasons why something has happened, we still hold that there are reasons why it happened.
That's a much weaker claim than a claim to strict determinism.
Yet you do in fact end with a claim of strict determinism.
Therefore, a person who makes choice A had to make that choice, for to have made any other choice the person would have either had to be different than they actually were, or the situation/environment would have had to be different than it actually was
By a kind of semantic drift ... from patterns to reasons to causes to deterministic causes.
Causal determinism is a form of causality, clearly enough. But not all causality is deterministic, since indeterministic causality can be coherently defined. For instance: "An indeterministic cause raises the probability of its effect, but doesn't raise it to certainty". Far from being novel or exotic, this is a familiar way of looking at causality. We all know that smoking causes cancer, and we all know that you can smoke without getting cancer... so the "causes" in "smoking causes cancer" must mean "increases the risk of".
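A toy numerical illustration of that definition, with made-up probabilities (mine, not from the answer):

```python
# Hypothetical probabilities: an indeterministic cause raises the
# probability of its effect without raising it to certainty.
p_cancer_given_smoking = 0.15
p_cancer_given_not_smoking = 0.01

assert p_cancer_given_smoking > p_cancer_given_not_smoking  # "causes" = raises the probability
assert p_cancer_given_smoking < 1.0                         # but you can smoke and not get cancer
```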
Vengeance is a form of precommitment. If you are the type of person who would take vengeance when a bad thing is done to you, that discourages others from doing bad things to you. But this only works if you are unable to change your mind once the bad thing is done. So you have to precommit.
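A minimal sketch of that precommitment logic, with made-up payoff numbers (my illustration, not from the answer): the would-be attacker compares expected payoffs, and only a credible, pre-existing disposition to take vengeance changes the comparison.

```python
# Hypothetical payoffs for a one-shot attack decision.
ATTACK_GAIN = 5        # attacker's gain from an unpunished attack
REVENGE_DAMAGE = -10   # damage vengeance inflicts on the attacker
REVENGE_COST = -2      # cost to the avenger of carrying out vengeance

def attacker_attacks(victim_precommitted: bool) -> bool:
    """The attacker attacks only if the expected payoff is positive."""
    if victim_precommitted:
        # Vengeance is certain, so the attacker prices it in: 5 - 10 = -5.
        expected_payoff = ATTACK_GAIN + REVENGE_DAMAGE
    else:
        # After the fact, retaliation only costs the victim (REVENGE_COST),
        # so a purely forward-looking victim won't retaliate and the
        # attacker knows it: the expected payoff is just the gain.
        expected_payoff = ATTACK_GAIN
    return expected_payoff > 0

print(attacker_attacks(victim_precommitted=False))  # True: the attack is worth it
print(attacker_attacks(victim_precommitted=True))   # False: the precommitment deters
```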
And so I have concluded that there is no basis on which to judge others as “to blame” or as “morally wrong”. Thus punishment makes no sense in terms of being “deserved”, and neither does vengeance (except possibly in terms of the vengeance seeker seeking their own emotional pleasure/relief/closure).
I think this is a pretty commonly held view on this site, and Yudkowsky would probably agree, especially since you clarified that you still see utility in justice for practical reasons, rather than for its own sake.
I unfortunately can't find where right now, but I think Yudkowsky has said somewhere that he wants AI to create a utopia for everybody, and not punish criminals, no matter how horrible they were as people. After all, once an FAI has taken power, those criminals are no longer a threat to society, because the FAI can always stop them from doing evil things, and thus there is no reason to try to rein them in with punishment, except for what is essentially spite/arbitrary emotional pleasure.
As for free will, I think it does exist, but not in the sense that libertarians want it to. It exists in a more practical sense: it's more something like "if we made a law against this thing it might actually deter people" than "people are metaphysically able to make different choices in a way which is somehow neither deterministic nor random".
↑ comment by ChristianKl · 2021-10-21T10:01:20.706Z · LW(p) · GW(p)
If you follow timeless decision theory, which is what Yudkowsky advocated in the sequences, there are many times when you want to punish people for defecting from cooperation. The word "deserve" seems to me perfectly fine for speaking about that dynamic.
↑ comment by andrew sauer (andrew-sauer) · 2021-10-21T15:00:27.516Z · LW(p) · GW(p)
Sure, that would fall into the category of "justice for practical reasons rather than for its own sake".
Follow-up protocol question: in responding to these answers and replies, which method is more appropriate for this venue, creating one comment per reply, or creating one massive comment that addresses everything? The former is probably easier for me. The latter would likely be a ginormous wall of text.
You have given me much to think about. I will digest and think before I reply further. But thank you for the very high quality level of engagement and feedback.
6 comments
comment by Vladimir_Nesov · 2021-10-16T18:40:26.334Z · LW(p) · GW(p)
Saying that someone should've acted differently than they actually did is a bug report. If a calculator already answered "10" to the question of "7+4=?", it should've given a different answer. This formulates a task of figuring out what has gone wrong, what should be fixed. It's useful even if the calculator is never posed that exact question again; any error in reasoning has many effects.
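A minimal sketch of this "blame as bug report" framing (my own illustration; the calculator is hypothetical):

```python
def buggy_calculator(a: int, b: int) -> int:
    # Deterministic: given these inputs and this code, it cannot return anything else.
    return a + b - 1  # off-by-one bug

answer = buggy_calculator(7, 4)
if answer != 7 + 4:
    # The calculator could not have answered otherwise, yet "it should have
    # returned 11" is still meaningful: it is a bug report, a pointer at
    # something worth diagnosing and fixing.
    print(f"should have returned 11, got {answer}")
```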
Another point is that decisions make use of predictions of hypothetical consequences. If blame or vengeance is a predicted consequence of some course of action, that already acts as deterrence, even if the hypothetical future blame is never actually meted out, precisely because the prediction of it was effective.
comment by Gunnar_Zarncke · 2021-10-16T22:25:32.024Z · LW(p) · GW(p)
I have tried to find and understand where any of these or any other prewritten works associated with LessWrong.com might have already addressed these questions.
The sequence on free will can be found under the tag Free Will [? · GW]. Posts building up to a solution step by step can be found under Free Will Solution [? · GW].
I'm not super happy with how the dissolution is presented, but I would say all the relevant parts are there. For example, I would say your point
Therefore, a person who makes choice A had to make that choice, for to have made any other choice the person would have either had to be different than they actually were, or the situation/environment would have had to be different than it actually was.
is addressed under Couldness [LW · GW]. I would also count your post as a successful dissolution of free will.
You may have trouble connecting what's on LessWrong to your own thoughts and writing. I think that is mainly due to your having spent many years building up your own island of rationality. Words on such a Private Language island tend to get used slightly differently than elsewhere. It may take time to connect.
comment by Gunnar_Zarncke · 2021-10-16T22:31:04.427Z · LW(p) · GW(p)
I may agree with your conclusion that
- justice still makes sense because justice is about influencing the future using the past only as data, but not living in the past.
- We would have to stop blaming both others and ourselves
- We would have to finally embrace a practical morality.
For me, the more immediate difficulty seems to be how to act in the intermediate world where only a few understand this and could act accordingly but the majority doesn't.
comment by Shmi (shminux) · 2021-10-17T01:55:34.153Z · LW(p) · GW(p)
You might find a recent book review useful: https://www.lesswrong.com/posts/jr3sxQb6HDS87ve3m. [? · GW]
I commented on the topic there [LW · GW].
Also, see my favorite quote by Ambrose Bierce on the subject, from another post [LW · GW]:
"There's no free will," says the philosopher; "To hang is most unjust."
"There is no free will," assents the officer; "We hang because we must."
Basically, all an embedded agent does is discover what the world is really like. There is no ability to steer the world in a desired direction, but also no ability not to steer it. We are all NPCs, but most of us think we are PCs.