comment by Donald Hobson (donald-hobson) · 2020-04-12T12:17:27.157Z · LW(p) · GW(p)
'I cannot (it is usually assumed) coherently will promise-breaking to be a universalisable maxim.'
This move hides a lot of work inside your similarity-clustering algorithm. "Promise keeping" and "promise breaking" each describe a wide set of different actions taken in a wide set of situations. Within the Kantian imperative scheme, you are forced to make a single decision covering all of these different situations. So what chose this set of actions, and why this set rather than some other?
Suppose a particularly nasty gang whose members all have gang tattoos and whose operations run on promises to kill people, while nice people promise to do nice things. The maxim "if you have a gang tattoo, break your promises; otherwise keep them" might have nicer consequences than everyone always breaking their promises, or everyone always keeping them. But then introduce a gang that doesn't have tattoos, and a few reformed gang members promising to do nice things. Soon the ideal maxim becomes an enumeration of the ethical action in every conceivable situation. You get a giant lookup table of ethics, and while you can express ethics in the form of a giant lookup table, you can express anything in that form. Saying "the decisions this agent makes can be described by a giant lookup table over all conceivable situations" is true of every agent, so it doesn't distinguish any particular subset of agents.
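To make the lookup-table point concrete, here is a minimal illustrative sketch (the function names and toy situations are invented for the example, not taken from the post): any agent that maps situations to actions can be compiled into such a table, so "describable as a lookup table" is true of every agent and picks out nothing.

```python
from typing import Callable, Dict, Iterable

Situation = str
Action = str

def compile_to_lookup_table(
    agent: Callable[[Situation], Action],
    situations: Iterable[Situation],
) -> Dict[Situation, Action]:
    """Enumerate the agent's chosen action in every (finitely many) situation."""
    return {s: agent(s) for s in situations}

def naive_kantian(situation: Situation) -> Action:
    # Always keeps promises, regardless of the situation.
    return "keep promise"

def gang_aware(situation: Situation) -> Action:
    # Breaks promises made to tattooed gang members, keeps all others.
    return "break promise" if "gang tattoo" in situation else "keep promise"

situations = [
    "ordinary promise to a nice person",
    "promise extracted by someone with a gang tattoo",
]

# Both agents reduce to a table over the same situations; the table format
# itself tells us nothing about which (if either) is the Kantian one.
print(compile_to_lookup_table(naive_kantian, situations))
print(compile_to_lookup_table(gang_aware, situations))
```

All the work is in deciding which situations get clustered together under one maxim; the table itself does none of it.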
I think that actual human Kantians are offloading a lot of work to the brain's invisible black boxes. To properly say what a Kantian agent is, you need to figure out what those black boxes are doing. (This is a problem of coming up with a sensible technical definition that is similar to common usage.)
↑ comment by Sublation · 2020-04-12T12:54:59.498Z · LW(p) · GW(p)
I agree with you that choosing the appropriate set of actions is a non-trivial task, and I've said nothing here about how Kantians would choose an appropriate class of actions.
I am unclear on the point of your gang examples. You point out that the ideal maxim changes depending on features of the world. The Kantian claim, as I understand it, says that we should implement a particular decision-theoretic strategy, by focusing on maxims rather than acts. This is a distinctively normative claim. The fact that, as we gain more information, the maxims might become increasingly specific seems true, but unproblematic. Likewise, I think it's true that we can describe any agent's decisions in terms of a lookup table over all conceivable situations. However, this just seems to indicate that we are looking at the wrong level of resolution. It's also true that I can describe all agents' behaviour (in principle) in terms of fundamental physics, but this isn't to say that there are no useful higher-level descriptions of different agents.
When you say that actual human Kantians offload work to invisible black boxes, do you mean that Kantians, when choosing an appropriate set of actions to make into a maxim, are offloading that clustering of acts onto a black box? If so, then I think I agree, and would also like a more formal account of what's going on in this case. However, I think a good first step towards such a formal account is looking at more qualitative instances of behaviour from Kantians, so we know what it is we're trying to capture more formally.
comment by Donald Hobson (donald-hobson) · 2020-04-12T12:29:54.771Z · LW(p) · GW(p)
'in worlds where acausal decision theorists are more consequentialist, we have an increased ability to enter into multiverse-wide acausal trades which are beneficial from the perspective of both parties. We should thus increase the number of consequentialists, so that more trades of this kind are made.'
This only holds to the extent that creating consequentialists has no other downsides, and that they are trading for things we want.
Suppose Omega told me that there are gazillions of powerful agents in other universes who are willing to fill their universes with paperclips in exchange for one small staple being made in this universe. This would not encourage me to build a paperclip maximizer. A paperclip maximizer in this universe would be able to gain enormous numbers of paperclips from multiversal cooperation, but I don't particularly want paperclips, so while the trade benefits both parties, it doesn't benefit me.
If we are making a friendly AI, we might prefer it to be able to partake in multiverse-wide trades.
↑ comment by Sublation · 2020-04-12T13:11:21.731Z · LW(p) · GW(p)
This was my reconstruction of Caspar's argument, which may be wrong. But I took the argument to be that we should promote consequentialism in the world as we find it now, where Omega (fingers crossed!) isn't going to tell me claims of this sort, and where people do not, in general, explicitly optimise for things we greatly disvalue. In this world, if people are more consequentialist, then there is a greater potential for positive-sum trades with other agents in the multiverse. Since agents in this world have some overlap with our values, we should encourage consequentialism: consequentialist agents we can causally interact with will get more of what they want, and so we get more of what we want.