The prior probability of justification for war?
post by NealW · 2010-10-27T20:52:20.850Z · LW · GW · Legacy · 15 comments
Could you use Bayes Theorem to figure out whether or not a given war is just?
If so, I was wondering how one would go about estimating the prior probability that a war is just.
Thanks for any help you can offer.
15 comments
Comments sorted by top scores.
comment by Emile · 2010-10-27T21:01:58.488Z · LW(p) · GW(p)
The difficulty of figuring out whether or not a given war is just doesn't reside in "probabilistic" uncertainty (is there a good term for this?), but in uncertainty about what exactly is meant by a "just war".
Those are two very different kinds of uncertainty. Probability theory only helps us for the first.
comment by magfrump · 2010-10-28T00:51:44.685Z · LW(p) · GW(p)
This is a poorly defined and inflammatory question.
It also seems to be a question with the obvious answer of "yes" from almost any perspective.
Once you decide on a moral system and a capacity for judgment--i.e., "vengeful general" or "concerned liberal citizen"--you could use standard brands of decision and probability theory to determine optimal actions--i.e., "go to war" or "vote against war"--but basically the only claim this entails is that "it is theoretically possible to think clearly when it comes to politics and wars" which is (hopefully) not a controversial (or, really, interesting) claim.
comment by NealW · 2010-10-29T01:58:57.381Z · LW(p) · GW(p)
Thanks for the comments everyone! Here are some of my initial thoughts that prompted my question.
I was thinking that I could come to an estimate of the prior probability in this way:
For every given war there is a side that is justified in violence and a side that is not justified in violence.
OR
Both sides are unjustified in violence.
It can't be the case that both sides are justified. Violent conflicts can't happen between two justified parties.
Given the above realization, I should expect to randomly find myself in the justified country no more than half the time a violent conflict arises. Actually, the prior probability that I live in the justified country is most likely less than .5, since many wars have probably involved both parties sharing guilt in the conflict.
That's as far as I got and then I posed the question here.
Thoughts?
comment by NancyLebovitz · 2010-10-28T12:59:08.731Z · LW(p) · GW(p)
There is at least one part which might be interestingly subject to Bayesian analysis: "Arms may not be used in a futile cause or in a case where disproportionate measures are required to achieve success".
More generally, a utilitarian analysis of how to tell whether a war is worth fighting could be very valuable.
comment by Emile · 2010-10-28T06:31:57.603Z · LW(p) · GW(p)
Could you use Bayes Theorem to figure out whether or not a given war is just?
If so, I was wondering how one would go about estimating the prior probability that a war is just.
From my understanding of Bayesian reasoning, you'd just put a prior of 0.5; you're not supposed to do any reasoning or look at any evidence in order to determine your prior. You then use all the other evidence (have other wars been just? has the invading country allowed free access to evidence or does it act as if it's been hiding things? does it profit from the war? etc.) to update your probability.
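A minimal sketch of that procedure, with invented likelihoods purely to show the mechanics (none of these numbers are real estimates):

```python
# Start from an uninformative 0.5 prior on H = "this war is just" and fold in
# pieces of evidence one at a time via Bayes' rule. All likelihoods below are
# invented placeholders for illustration.

def update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H|E) from prior P(H) and the likelihoods of the evidence."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

p = 0.5  # starting point with no evidence considered
# "The country acts as if it's been hiding things": assumed more likely if unjust.
p = update(p, p_e_given_h=0.2, p_e_given_not_h=0.6)
# "The country profits from the war": likewise assumed more likely if unjust.
p = update(p, p_e_given_h=0.3, p_e_given_not_h=0.7)
print(f"P(just | evidence) = {p:.3f}")  # 0.125 under these made-up numbers
```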
So your use of "prior" seems technically incorrect here (though I'd be glad to be corrected if I'm wrong!)
Replies from: humpolec
↑ comment by humpolec · 2010-10-28T15:00:11.818Z · LW(p) · GW(p)
Prior probability is what you can infer from what you know before considering a given piece of data.
If your overall information is I, and new data is D, then P(H|I) is your prior probability and P(H|DI) posterior probability for hypothesis H.
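In this notation, Bayes' theorem connects the two: P(H|DI) = P(D|HI) · P(H|I) / P(D|I), so the posterior is just the prior reweighted by how well H predicts the new data D.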
No one says you have to put exactly 0.5 as a prior (this would be especially absurd for absurd-sounding hypotheses like "the lady next door is a witch, she did it".)
Replies from: Emile
↑ comment by Emile · 2010-10-28T15:47:47.187Z · LW(p) · GW(p)
If we distinguish between "previous information" and "new information", then yes. In this case, the OP made no such distinction, so I can only assume his use of "prior" means "prior to all information we know" (I is nothing - an uninformative prior).
By the way, I don't really see a problem with starting with a prior of 0.5 for "the lady next door is a witch" (which could be formalized as "the lady next door has powers of action at a distance that break the laws of physics as we know them") - more generally, it's reasonable to have a prior of 50% for "the lady next door is a flebboogy", and then update that probability based on what information you have on flebboogies (for example, if despite intensive search, nobody has been able to prove the existence of a single flebboogy, your probability will fall quite low).
However, taking a 50% probability of "she did it" (assuming only one person did "it") wouldn't be a good prior; a better one would be a probability of 1/N for each human, where N is the number of humans on earth. Again, this estimate will vary wildly as you take more information into account, going up if the person that did "it" must have been nearby (and not a teenager in Sri Lanka), going down if she couldn't possibly have done "it" without supernatural powers.
Anyway, I don't think the OP was really asking about priors, he probably meant "how do we estimate the probability that a given war is just".
Replies from: humpolec
↑ comment by humpolec · 2010-10-29T05:38:32.546Z · LW(p) · GW(p)
I was referring to the idea that complex propositions should have lower prior probability.
Of course you don't have to make use of it, you can use any numbers you want, but you can't assign a prior of 0.5 to every proposition without ending up with inconsistency. To take an example that is more detached from reality - there is a natural number N you know nothing about. You can construct whatever prior probability distribution you want for it. However, you can't just assign 0.5 to every possible property of N (for example, P(N < 10) = 0.5).
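A quick check of that inconsistency (the cutoffs 10 and 100 are arbitrary; any nested properties would do):

```python
# If every property of an unknown natural number N were assigned prior 0.5,
# additivity would break: "N < 10" and "10 <= N < 100" are disjoint events
# whose union is "N < 100", so all three can't have probability 0.5.
p_lt_10 = 0.5    # P(N < 10)
p_mid = 0.5      # P(10 <= N < 100)
p_lt_100 = 0.5   # P(N < 100)

# Additivity for disjoint events requires
# P(N < 100) = P(N < 10) + P(10 <= N < 100), but 0.5 + 0.5 = 1.0, not 0.5.
assert p_lt_10 + p_mid != p_lt_100
print("Assigning 0.5 to all three properties violates additivity.")
```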
Replies from: Emile
↑ comment by Emile · 2010-10-29T08:24:12.678Z · LW(p) · GW(p)
On the other hand it has been argued that the prior of a hypothesis does not depend on its complexity.
There can also be problems with using priors based on complexity; for example, the predicates "the number, executed as a computer program, will halt" and "the number, executed as a computer program, will not halt" are both quite complex, but they are mutually exclusive, so priors of 50% for each seem reasonable.
Assigning 0.5 to any possible property of N is reasonable as long as you don't know anything else about those properties - if in addition you know some are mutually exclusive (as in your example), you can update your probabilities accordingly. But in any case, the complexity of the description of the property can't help us choose a prior.
comment by Perplexed · 2010-10-28T04:09:30.082Z · LW(p) · GW(p)
Could you use Bayes Theorem to figure out whether or not a given war is just? If so, I was wondering how one would go about estimating the prior probability that a war is just.
I think it is a fascinating question. After all, if we claim to be Bayesians, we ought to be interested in applying our art to useful public issues. But coming up with a prior, P(H), is only part of the problem. We also need to specify our evidence, E; the probability of encountering that evidence in any old war, P(E); the probability of encountering that evidence if the hypothesis that the war is just is true, P(E|H); and, most importantly, the specification of the hypothesis, H.
Specifying the hypothesis is the easy part, if you are Roman Catholic. Here is what the Catholic catechism says about just war:
In this regard Just War doctrine gives certain conditions for the legitimate exercise of force, all of which must be met:
H1. the damage inflicted by the aggressor on the nation or community of nations must be lasting, grave, and certain;
H2. all other means of putting an end to it must have been shown to be impractical or ineffective;
H3. there must be serious prospects of success;
H4. the use of arms must not produce evils and disorders graver than the evil to be eliminated. The power of modern means of destruction weighs very heavily in evaluating this condition.
Well, as compared to a typical law passed by the US Congress, that seems pretty clearcut. So I guess we can just define H ::= H1 & H2 & H3 & H4. Note that it is assumed that for a war to be just from the standpoint of one side, the other side must be the aggressor. It is not easy to be just - you need to clear some legal hurdles to even defend yourself from aggression justly.
So, to see whether Bayesianism can help us here, let us try to estimate P(H3|E) where E is taken to be "We lost the first three battles in this war". Well, that certainly seems to be evidence that our prospects of success are not good, and hence that our war is not just. But just how strong is this evidence, and how severely does it depress our estimate of P(H3)?
Well, I suppose we could look at statistics from past wars - how often did a side that lost the first three battles eventually "succeed"? Of course H3 doesn't require that we actually succeed, only that we have "serious prospects of success". If we arbitrarily define "serious prospects" as a probability > 20%, and our historical statistics tell us that the defender succeeds against the aggressor 40% of the time, but only 25% after losing the first three battles, do we have the information we need to compute P(H3|E)?
To be honest, I don't know. Perhaps better Bayesians than me can help us out. But I'm certainly beginning to see that applying Bayes to real-world questions can get difficult very quickly.
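One way to make the missing step concrete is a toy model: put a hypothetical prior distribution over the defender's true prospects, assume a likelihood of losing the first three battles at each prospect level, and turn the crank. Every number below is invented for illustration; the point is only that computing P(H3|E) needs a distribution over prospects, not just the two summary frequencies.

```python
# Toy model: latent variable s = the defender's true chance of eventual
# success ("prospects"). H3 is "s > 0.20"; E is "lost the first three
# battles". All numbers are hypothetical.

# Invented prior over a few discrete prospect levels (sums to 1).
prior = {0.10: 0.40, 0.30: 0.30, 0.50: 0.20, 0.70: 0.10}

def p_lose_three(s: float) -> float:
    """Crude assumption: per-battle loss chance is (1 - s), battles independent."""
    return (1.0 - s) ** 3

# Bayes' theorem: P(s|E) is proportional to P(E|s) * P(s).
joint = {s: p_lose_three(s) * p for s, p in prior.items()}
p_evidence = sum(joint.values())
posterior = {s: j / p_evidence for s, j in joint.items()}

p_h3_prior = sum(p for s, p in prior.items() if s > 0.20)          # 0.60
p_h3_posterior = sum(p for s, p in posterior.items() if s > 0.20)  # ~0.31
print(f"P(H3) = {p_h3_prior:.2f}, P(H3|E) = {p_h3_posterior:.2f}")
```

Under these made-up numbers the prior P(H3) of 0.60 falls to roughly 0.31 after the three lost battles: the evidence does real work, but only once the extra modeling assumptions are on the table.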
comment by NealW · 2010-10-27T21:08:14.618Z · LW(p) · GW(p)
Emile, yeah, what "justification" is, is certainly a question, but what if we just assume a definition of just war and work from there? Could you not then look at the evidence of the other country's wrongs and so forth and come up with a probability?
Replies from: Relsqui, Emile↑ comment by Relsqui · 2010-10-27T22:04:53.546Z · LW(p) · GW(p)
FYI, you can reply to a specific comment, rather than just adding a new comment to the post. (Emile won't have gotten this comment in his inbox.)
But you haven't answered his concern. To assign probabilities, you need to be able to assign values to the various wrongs done by countries--at least relative values--and there is no agreement about even the relative value of wrongs. Never mind that different people have different ideas about what's wrong at all, in most cases.
↑ comment by Emile · 2010-10-28T06:38:43.578Z · LW(p) · GW(p)
Good point, so I added a second answer.
You could come up with a probability (I think the use of 'prior' in the OP is incorrect), but if anybody disagrees, they're much more likely to be questioning your definition of 'just war' (or how to interpret various facts in light of that definition) than your use of Bayes' Theorem.