A (paraconsistent) logic to deal with inconsistent preferences
post by B Jacobs (Bob Jacobs) · 2024-07-14T11:17:45.426Z · LW · GW · 2 comments
This is a link post for https://bobjacobs.substack.com/p/a-logic-to-deal-with-inconsistent
One problem in AI/policy/ethics is that it seems like we sometimes have inconsistent preferences.[1] For example, I want to have a painting in my office, and simultaneously don’t want to have a painting in my office.[2] I’d both prefer and not prefer φ. This is a problem because classical logic can’t really deal with contradictions.
The standard way to resolve this is to deny that we have inconsistent preferences.[3] But what if we accept their existence? Could there be a way to deal with them by making a non-classical preference logic?
Before we can make a logic we need to know what the operators are. We consider preference statements of the form “I prefer φ to not-φ” and denote them as “Pref φ”, where “Pref” is a new modal operator representing preference. Let us also use the natural inference rule (N):
if we have Pref ¬φ (meaning “I prefer ¬φ to φ”)
then we can infer ¬Pref φ (meaning “I do not prefer φ to ¬φ”)
Aka: If I want to not have a painting, we can infer I don’t want a painting[4]
Okay now that we have our operators and our rule, let’s see if we can make a preference logic.
One possible objection to the existence of inconsistent preferences is something which I’ll call a ‘preference explosion’. I will first write out the argument formally, and then in prose:
1. Pref φ & Pref ¬φ (assumption)
2. Pref φ (from 1, conjunction elimination)
3. Pref φ ∨ Pref ψ (from 2, disjunction introduction)
4. Pref ¬φ (from 1, conjunction elimination)
5. ¬Pref φ (from 4, using rule (N))
6. Pref ψ (from 3 & 5, disjunctive syllogism)
(and ψ can be any possible preference, hence preference explosion)
In prose: I both want a painting and want to not have a painting, from which we can logically infer that I either want a painting or want to kill a puppy. Since from my wanting to not have a painting we can infer that I don’t want a painting, it follows that I want to kill a puppy.
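To see the explosion at the level of semantics rather than proof, here is a minimal sketch (my own illustration, not from the post) that treats “Pref φ”, “Pref ¬φ” and “Pref ψ” as ordinary two-valued atoms and builds rule (N) in as a side constraint. Because no admissible valuation makes both premises true, classical entailment holds vacuously for any conclusion whatsoever:

```python
# Brute-force check of classical entailment, with rule (N) as a side constraint.
# Atoms: "Pref phi", "Pref not-phi", "Pref psi" (each simply True or False).
from itertools import product

def entails(premises, conclusion, constraints):
    """Classical entailment: every constraint-respecting valuation that makes
    all premises true also makes the conclusion true."""
    for pref_phi, pref_not_phi, pref_psi in product([True, False], repeat=3):
        v = {"Pref phi": pref_phi, "Pref not-phi": pref_not_phi, "Pref psi": pref_psi}
        if not all(c(v) for c in constraints):
            continue  # valuation violates rule (N), so ignore it
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

rule_N = lambda v: not v["Pref not-phi"] or not v["Pref phi"]  # Pref ¬φ implies ¬Pref φ

print(entails(
    premises=[lambda v: v["Pref phi"], lambda v: v["Pref not-phi"]],
    conclusion=lambda v: v["Pref psi"],
    constraints=[rule_N],
))  # True: no admissible valuation satisfies both premises, so ψ "follows" vacuously
```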
This seems strange, right? Yet rule (N) is quite natural and intuitive, and all the other inferences are valid in classical logic. If we take inconsistent preferences seriously, we have to give up at least one of the following:
rule (N): Pref ¬φ ⊢ ¬Pref φ
(If I want to not have a painting, we can infer I don’t want a painting)
conjunction elimination: φ & ¬φ ⊢ φ
(If someone has a preference for ‘not having a painting’, and also a preference for ‘having a painting’ we can infer someone has a preference for ‘having a painting’.)
disjunction introduction: φ ⊢ φ ∨ ψ
(If someone has a preference for ‘having a painting’ we can infer that they have a preference for either ‘having a painting’ or ‘killing a puppy’)
disjunctive syllogism: ¬φ, φ ∨ ψ ⊢ ψ
(From the fact that someone has a preference for either ‘having a painting’ or ‘killing a puppy’, and the fact that they also have a preference for ‘not having a painting’, we can infer they want to kill a puppy)
I think that last one, disjunctive syllogism (DS), is the one to reject. If we believe in inconsistent preferences we can no longer see it as a logically valid rule.
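The post doesn’t commit to a particular paraconsistent semantics, but one standard option is Graham Priest’s three-valued Logic of Paradox (LP), which adds a value B for “both true and false” and counts both T and B as designated. A rough sketch, assuming LP as the semantics, showing that disjunction introduction survives while disjunctive syllogism gains a counterexample exactly when φ takes the value B:

```python
# A minimal sketch (my own, assuming Priest's Logic of Paradox, LP) of why
# disjunctive syllogism fails once a "both true and false" value exists.
from itertools import product

VALUES = ["T", "B", "F"]          # true only, both, false only
DESIGNATED = {"T", "B"}           # values that count as "holding"
ORDER = {"F": 0, "B": 1, "T": 2}

def neg(a):
    return {"T": "F", "B": "B", "F": "T"}[a]

def disj(a, b):
    return max(a, b, key=lambda x: ORDER[x])  # v(p ∨ q) = max of the two values

def lp_valid(premises, conclusion):
    """LP-valid: every valuation giving all premises a designated value
    also gives the conclusion a designated value."""
    for p_val, q_val in product(VALUES, repeat=2):
        if all(prem(p_val, q_val) in DESIGNATED for prem in premises):
            if conclusion(p_val, q_val) not in DESIGNATED:
                return False, (p_val, q_val)   # counterexample found
    return True, None

# Disjunction introduction (φ ⊢ φ ∨ ψ) is still LP-valid:
print(lp_valid([lambda p, q: p], lambda p, q: disj(p, q)))                # (True, None)

# Disjunctive syllogism (¬φ, φ ∨ ψ ⊢ ψ) is not: φ = B, ψ = F is a counterexample.
print(lp_valid([lambda p, q: neg(p), lambda p, q: disj(p, q)], lambda p, q: q))  # (False, ('B', 'F'))
```

DS fails here because ¬φ and φ ∨ ψ can both take the “both true and false” value while ψ is plainly false, which mirrors the inconsistent-preference case.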
So DS is invalid, but that does not mean DS is a bad argument. DS is legitimate with consistent preferences: in a situation where someone just wants a painting (and doesn’t simultaneously not want one), DS is reliable, and we can treat it as if it were deductively valid. DS is therefore inductively strong: even if we are a little uncertain about whether inconsistent preferences are involved, it still makes the conclusion likely, since in most situations the inference goes through and only in rare situations does it fail.
Our belief that DS is valid might have arisen because inconsistent situations are rare, so DS works in most situations. Compare the inference of DS with the inference:
A is a proper subset of B, so A is smaller than B.
(E.g. Ants are a type of Bug, so there are fewer Ants than Bugs)
This inference almost always works; it only breaks down in cases of infinities. If there are infinitely many Ants, there are also infinitely many Bugs, so the two sets could both be equally (infinitely) big.
This ‘subset inference’ is inductively strong, but not deductively valid, because sometimes you are talking about infinities. Perhaps DS is similar: it almost always works, just not when a special situation is involved (infinities and inconsistencies, respectively).[5]
So the “subset inference” can’t be deductively valid because it treats infinities and non-infinities the same way. Is there something similar going on with “preference explosion”? The proof of “preference explosion” might mistakenly equate two slightly different ways of understanding the disjunction “φ ∨ ψ”:
- “φ ∨ ψ” follows from φ alone. In this sense, because I know φ, I can infer the weaker claim “φ ∨ ψ”. Because I know that someone prefers “having a painting”, I can therefore infer that they either prefer “having a painting” or “killing a puppy”
which is slightly different from
- “φ ∨ ψ” is roughly equivalent to the material conditional “if ¬φ then ψ”, in the sense that if it turned out that ¬φ, we would know that ψ. This is the interpretation we have in mind when we know that at least one of φ and ψ is true, but we don’t know which one. We don’t know whether someone prefers ‘having a painting’ or ‘killing a puppy’ and we’re trying to figure it out, so if it turns out it’s not the first, it must be the second.
In the proof of preference explosion, we infer φ ∨ ψ in the first sense (we know which one), but we need the second sense to perform the disjunctive syllogism (we don’t know).[6] This distinction between the different interpretations of a disjunction is not something that is captured in classical logic, but it might be something we want to capture. Maybe the material conditionals in classical logic just don’t capture the semantics of intuitive conditionals.
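One rough way to see the difference (my own illustration, not the post’s formalism) is to think of what an agent knows as a set of possible worlds. In the first sense, the disjunction adds nothing beyond φ, and “learning ¬φ” would simply wipe out the information state; in the second sense, the disjunction is all the agent has, and learning ¬φ genuinely narrows things down to ψ:

```python
# Sketch: model what an agent knows as a set of possible worlds,
# each world being a pair (prefers painting, prefers puppy-killing).
from itertools import product

WORLDS = list(product([True, False], repeat=2))

def entails(worlds, prop):
    """A proposition follows from an information state if it holds in every remaining world."""
    return all(prop(w) for w in worlds)

painting = lambda w: w[0]
puppy    = lambda w: w[1]
either   = lambda w: w[0] or w[1]

# Sense 1: the disjunction is inferred from already knowing the first disjunct.
knows_painting = [w for w in WORLDS if painting(w)]
print(entails(knows_painting, either))                 # True, but the disjunction adds no information
print([w for w in knows_painting if not painting(w)])  # []: "learning ¬painting" empties the state

# Sense 2: the disjunction is all we know; here disjunctive syllogism is genuinely informative.
knows_either = [w for w in WORLDS if either(w)]
after_not_painting = [w for w in knows_either if not painting(w)]
print(entails(after_not_painting, puppy))              # True: ruling out "painting" leaves only "puppy"
```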
Special thanks to Jobst Heitzig for greatly improving this post, and to Daan Vernieuwe for slightly improving this post. And special thanks to the people who provide free secondary literature on the topic. You guys are the reason I understand these topics and are much more deserving of my tuition money.
1. ^ In this post when I say someone "prefers A", it means they "prefer A to not-A".
2. ^ Some inconsistencies might be surface-level, and upon deeper reflection you might discover a third option that resolves them. For the sake of this post, let’s assume there is at least one person with one unresolvable inconsistency, whether for aesthetic, emotional or other subjective reasons.
3. ^ One way is with Expected Utility Theory; however, there are numerous reasons one might reject it. Another way is to say we are not one but multiple agents, and they sometimes disagree. If so, how can we detect how many agents we are, and why do we perceive ourselves as one agent?
4. ^ This may seem obvious, but for the logicians among you, here’s a justification. Let’s assume that “I prefer X to Y” means: “If I had to choose between X and Y and no other option, I would certainly choose X.” Then “I prefer ¬X to X” means: “If I had to choose between X and ¬X and no other option, I would certainly choose ¬X.” Let’s further assume it is imaginable that I have to choose between X and ¬X and no other option. In that case we can make the following inference. From
If I had to choose between X and ¬X and no other option, I would certainly choose ¬X
It is imaginable that I have to choose between X and ¬X and no other option
we can infer
¬(If I had to choose between X and ¬X and no other option, I would not certainly choose ¬X)
from which we can infer
¬(If I had to choose between X and ¬X and no other option, I might choose X)
from which we can infer
¬(If I had to choose between X and ¬X and no other option, I would certainly choose X)
which, given the first assumption, is just ¬Pref X.
5. ^ This analogy was dreamt up by philosopher Graham Priest.
6. ^ Inspired by Stephen Read.
2 comments
comment by Gordon Seidoh Worley (gworley) · 2024-07-14T22:26:48.081Z · LW(p) · GW(p)
Maybe I'm missing something, but this theory seems to leave out considerations of what's usually the most important aspect of preference models, which is what things are preferred to what. Considering only X > ~X leaves out the many obvious cases of X > Y that we'd like to model.
The usual problem is that we are not time and context insensitive the way simple models are, such that we might feel X > Y under conditions Z, but Y > X under conditions W, and that this is sufficient to explain our seemingly inconsistent preferences because they only look inconsistent on the assumption that we should have the same preferences at all times and under all circumstances. The inclusion of context, such as by adding a time variable to all preference relations, is probably sufficient to rescue the standard preference model: our preferences are consistent at each moment in time, but are not necessarily consistent across different moments because the conditions of each moment are different and thus change what we prefer.
Replies from: Bob Jacobs
↑ comment by B Jacobs (Bob Jacobs) · 2024-07-15T20:52:18.340Z · LW(p) · GW(p)
Hmmm, I don't know if that works. There have definitely been times where I (phenomenologically) felt inconsistent preferences at the same time, e.g. I simultaneously want to hang a painting there and not hang a painting there. I do get this a lot more with aesthetic preferences than with other preferences for some reason. I think the proposed solution that we're multiple agents is quite plausible, but it does have some problems, so that's why I proposed this solution as a possible alternative.