Agents of No Moral Value: Constrained Cognition?
post by Vladimir_Nesov · 2010-11-21T16:41:10.603Z
Thought experiments involving multiple agents usually postulate that the agents have no moral value, so that the explicitly specified payoff from the choice of actions can be considered in isolation, as both the sole reason for and the sole evaluation criterion of the agents' decisions. But is it really possible to require that an opposing agent have no moral value, without constraining what it's allowed to think about?
If agent B is not a person, how do we know it can't decide to become a person for the sole purpose of gaming the problem, manipulating agent A (since B doesn't care about personhood, becoming a person costs B nothing, but A does care)? If non-personhood is stipulated as part of the problem statement, it seems that B's cognition is restricted, and the most rational course of action is prohibited from consideration for no reason accessible to B from within the thought experiment.
It's not enough to require that the other agent be inhuman in the sense of not being a person and not holding human values; our agent must also not care about the other agent. And once both agents don't care about each other's cognition, the requirement that they not be persons or valuable becomes extraneous.
Thus, instead of requiring that the other agent is not a person, the correct way of setting up the problem is to require that our agent is indifferent to whether the other agent is a person (and conversely).
(This is not a very substantive observation; I would've posted it with less polish in an open thread if not for the discussion section.)
3 comments
comment by Perplexed · 2010-11-21T20:00:25.561Z
the correct way of setting up the problem is to require that our agent is indifferent to whether the other agent is a person (and conversely).
Some people may find it difficult to satisfy that requirement. In fact, most people are not indifferent.
A better approach, IMHO, is to stipulate that the published payoff matrix already 'factors in' any benevolence owed to the other agent on ethical grounds.
One objection to my approach might be that for a true utilitarian, there is no possible assignment of selfish utilities to outcomes that would result in the published payoff matrix as the post-ethical-reflection result. But, to my mind, this is just one more argument against utilitarianism as a coherent ethical theory.
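Here is a minimal sketch of what "factoring in" benevolence could look like, and of the utilitarian objection. The Prisoner's Dilemma payoff numbers and the linear weighting of the other agent's payoff are illustrative assumptions, not anything specified above.

```python
# Minimal sketch (illustrative numbers): selfish Prisoner's Dilemma payoffs,
# given as (row player, column player) for each pair of actions.
selfish = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def factor_in_benevolence(payoffs, weight):
    """Post-ethical-reflection matrix: each agent's payoff is its selfish
    payoff plus `weight` times the other agent's selfish payoff."""
    return {
        outcome: (a + weight * b, b + weight * a)
        for outcome, (a, b) in payoffs.items()
    }

# A "true utilitarian" weights the other agent's payoff fully (weight 1),
# so both agents' post-reflection payoffs coincide at every outcome:
print(factor_in_benevolence(selfish, 1))
# {('C', 'C'): (6, 6), ('C', 'D'): (5, 5), ('D', 'C'): (5, 5), ('D', 'D'): (2, 2)}
```

Since a weight-1 adjustment always gives both agents equal payoffs at every outcome, no choice of selfish utilities can produce an asymmetric published matrix (such as the PD's (0, 5) at (C, D)) as the post-reflection result, which is the objection anticipated above.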
comment by Vladimir_Nesov · 2010-11-21T20:15:40.766Z
Some people may find it difficult to satisfy that requirement. In fact, most people are not indifferent.
All people are not indifferent. (And they're not meant to qualify.)