Ethical Choice under Uncertainty
post by Anders_H · 2014-08-10T22:13:38.756Z · LW · GW · Legacy · 2 comments
Most discussions about utilitarian ethics are attempts to determine the goodness of an outcome. For instance, discussions may focus on whether it would be ethical to increase total utility by increasing the total number of individuals while reducing their average utility. Or, one could argue about whether we should give more weight to those who are worst off when we aggregate utility over individuals.
These are all important questions. However, even if they were answered to everyone's satisfaction, the answers would not be sufficient to guide the choices of agents acting under uncertainty. To elaborate, I believe textbook versions of utilitarianism are unsatisfactory for the following reasons:
- Ethical theories that don't account for the agent's beliefs have absurd consequences, such as claiming that it is unethical to rescue a drowning child if the child goes on to become Hitler. Clearly, if we are interested in judging whether the agent is acting ethically, the only relevant considerations are his beliefs about the consequences at the time the choice is made. If we define "ethics" so that it requires him to act on information from the future, it becomes impossible in principle to act ethically.
- In real life, there will be many situations where the agent makes a bad choice because he has incorrect beliefs about the consequences of his actions. Most people, if asked to judge the morality of a person who has pushed a fat man to his death, will want to know whether he believed that doing so would save the lives of five children. Whether that belief was correct is not ethically relevant: there is a difference between stupidity and immorality.
- Real choices are never of the type "If you choose A, the fat man dies with probability 1, whereas if you choose B, the five children die with probability 1". Rather, they are of the type "If you choose A, the fat man dies with probability 0.5, the children die with probability 0.25, and they all die with probability 0.25". Choosing between such options requires a formalization of the concept of risk aversion as an integral component of the ethical theory.
I will attempt to fix this by providing the following definition of ethical choice, which is based on the same setup as von Neumann-Morgenstern expected utility theory:
An agent is making a decision, and can choose from a choice set A, with elements (a1, a2, ..., an). The possible outcome states of the world are contained in the set W, with elements (w1, w2, ..., wm). The agent is uncertain about the consequences of his choice; he is not able to perfectly predict whether choosing a1 will lead to state w1, w2, ..., or wm. In other words, for every element of the choice set, he has a separate subjective probability distribution ("prior") over W.
He also has a cardinal social welfare function f over possible states of the world. The social welfare function may have properties such as risk aversion or risk neutrality over attributes of W. Since the choice made by the agent is one aspect of the state of the world, the social welfare function may include terms for A.
We define that the agent is acting "ethically" if he chooses the element of the choice set that maximizes the expected value of the social welfare function, under the agent's beliefs about the probability of each possible state of the world that could arise under that action:
max_{a ∈ A} Σ_{w ∈ W} Pr(w | a) · f(w, a)
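As a minimal sketch of this maximization (the actions, probabilities, and welfare values below are invented for illustration and are not part of the argument):

```python
# Toy instance of the maximization above: two actions, three world states.
# All probabilities and welfare values are invented for illustration.

actions = ["push", "dont_push"]
states = ["fat_man_dies", "children_die", "all_die"]

# The agent's subjective probability distribution over states, one per action.
prior = {
    "push":      {"fat_man_dies": 0.50, "children_die": 0.25, "all_die": 0.25},
    "dont_push": {"fat_man_dies": 0.05, "children_die": 0.70, "all_die": 0.25},
}

# Cardinal social welfare of each (state, action) pair; here it only counts deaths.
def welfare(state, action):
    deaths = {"fat_man_dies": 1, "children_die": 5, "all_die": 6}[state]
    return -deaths

def expected_welfare(action):
    return sum(prior[action][s] * welfare(s, action) for s in states)

# The "ethical" choice under this definition: the action with maximal expected welfare.
ethical_choice = max(actions, key=expected_welfare)
print(ethical_choice, {a: expected_welfare(a) for a in actions})
```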
Note here that "risk aversion" corresponds to the curvature (second derivative) of the social welfare function: a concave welfare function is risk averse over the corresponding attribute of W. For details, I will unfortunately have to refer the reader to a textbook on decision theory, such as Notes on the Theory of Choice.
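To make that concrete, here is a small invented example in which a risk-neutral welfare function and a concave (risk-averse) one rank the same two lotteries differently:

```python
# Invented example: a "safe" action and a "risky" action over lives saved.
# Each lottery is a list of (probability, lives_saved) pairs.
safe = [(1.0, 3)]
risky = [(0.5, 0), (0.5, 7)]

def expected_welfare(lottery, f):
    return sum(p * f(x) for p, x in lottery)

linear = lambda x: x          # risk neutral
concave = lambda x: x ** 0.5  # risk averse: diminishing marginal welfare

# Risk neutral: risky wins (3.5 > 3.0).
# Risk averse: safe wins (sqrt(3) ~ 1.73 > 0.5 * sqrt(7) ~ 1.32).
for name, f in [("risk neutral", linear), ("risk averse", concave)]:
    print(name, expected_welfare(safe, f), expected_welfare(risky, f))
```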
The advantage of this setup is that it allows us to define the ethical choice precisely, in terms of the intentions and beliefs of the agent. For example, if an individual makes a bad choice because he honestly has a bad prior about the consequences of his choice, we interpret him as acting stupidly, but not unethically. However, ignorance is not a complete get-out-of-jail-free card: one element of the choice set is always "seek more information / update your prior". If your current prior implies that expected social welfare is maximized by first seeking more information, then the ethical choice is to seek it (this is analogous to the decision-theoretic concept of value of information).
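A rough sketch of that calculation, again with invented numbers: if choosing after observing a (here free and perfectly informative) signal has higher expected welfare than choosing immediately, the ethical choice is to seek the information first.

```python
# Toy value-of-information calculation with invented numbers:
# two actions, two states, and a free, perfectly informative signal.

prior = {"state_good": 0.5, "state_bad": 0.5}

# Welfare of each action in each state (invented for illustration).
welfare = {
    ("A", "state_good"): 10, ("A", "state_bad"): -10,
    ("B", "state_good"):  2, ("B", "state_bad"):   2,
}

def expected_welfare(action, beliefs):
    return sum(p * welfare[(action, s)] for s, p in beliefs.items())

# Expected welfare of the best action chosen on the current prior.
best_now = max(expected_welfare(a, prior) for a in ("A", "B"))

# Expected welfare if we first observe the true state, then choose the best
# action in each case, weighted by the prior probability of each state.
best_after_info = sum(
    prior[s] * max(welfare[(a, s)] for a in ("A", "B")) for s in prior
)

value_of_information = best_after_info - best_now  # here 6 - 2 = 4 > 0
print(best_now, best_after_info, value_of_information)
```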
At this stage, the "social welfare function" is completely unspecified. Therefore, this definition places only minor constraints on what we mean by the word "ethics". Some ethical theories are special cases of this definition of ethical choice. For example, deontology is the special case where the social welfare function f(W, A) is independent of the state of the world and can be simplified to f(A). (If the social welfare function is constant over W, the summation over the prior cancels out.)
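To spell out that cancellation: if f does not depend on the state of the world, then Σ_{w ∈ W} Pr(w | a) · f(a) = f(a) · Σ_{w ∈ W} Pr(w | a) = f(a), since the prior over W sums to one, so the criterion reduces to choosing the a that maximizes f(a) directly.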
One thing that is ruled out by this definition is outcome-based consequentialism, where an agent is defined as acting ethically if his actions lead to good realized outcomes. Note that under this type of consequentialism, at the time a decision is made it is impossible for an agent to know what the correct choice is, because the ethical choice depends on random events that have not yet taken place. This definition of ethics excludes strategies that cannot be followed by a rational agent acting solely on information from the past. This is a feature, not a bug.
We now have a definition of acting ethically. However, it is not yet very useful: We have no way of knowing what the social welfare function looks like. The model simply rules out some pathological ethical theories that are not usable as decision theories, and gives us an appealing definition of ethical choice that allows us to distinguish "ignorance/stupidity" from "immorality".
If nobody points out any errors that invalidate my reasoning, I will write another installment with some more speculative ideas about how we can attempt to determine what the social welfare function f(W, A) looks like.
--
I have no expertise in ethics, and most of my ideas will be obvious to anyone who has spent time thinking about decision theory. From my understanding of Cake or Death, it looks like similar ideas have been explored here previously, but with additional complications that are not necessary for my argument. I am puzzled by the fact that this line of thinking is not a central component of most ethical discussions, because I don't believe it is possible for a non-Omega agent to follow an ethical theory that does not explicitly account for uncertainty. My intuition is that unless there is a flaw in my reasoning, this is a neglected point that it would be important to draw people's attention to, in a simple form with as few complications as possible. Hence this post.
This is a work in progress, and I would very much appreciate feedback on where it needs more work.
Some thoughts on where this idea needs more work:
- While agents who have bad priors about the consequences of their actions are defined as acting stupidly rather than unethically, I am currently unclear about how to interpret the actions of agents who have incorrect beliefs about the social welfare function.
- I am also unsure whether this setup excludes some reasonable forms of ethics, such as a scenario where we model the agent as simultaneously trying to optimize the social welfare function and his own utility function. In such a setup, we may want a definition of ethics that involves the rate of substitution between the two things he is optimizing. However, it is possible that this can be handled within my model by finding the right social welfare function.
2 comments
comment by Agathodaimon · 2014-08-11T01:08:52.409Z · LW(p) · GW(p)
Is there a word analogous to prior for actions decided on the basis of one's matrix of possible foreseen futures?