The law is the product of many individuals, each with different subjective axioms, separately trying to maximize their particular utility functions during the legislative process. As a result, the law as written and implemented has, at best, an extremely tenuous link to any individual's morality, let alone the morality of society at large. Murder is illegal because, for biological reasons, the vast majority of people assign a large negative value to murder, so the result of the legislators' minimax procedure is that murder is illegal in most cases.
But if an individual did not assign negative value (for whatever reason) to murder, how would you convince them that they are wrong to do such a thing? It should be easy if morality is objective. If you can't handle the extreme cases, then how can you hope to address real moral issues, which are significantly more nuanced? This is the real question you need to answer here, since your original claim is that morality is objective. I hope you'll not quote-snipe around it again.
I'm not arguing general rules from exceptional ones; I'm not proposing any rules at all. I am proposing an analytic system that is productive rather than arbitrarily exclusionary.
There are many cases where people are jailed arbitrarily or unfairly. At no point in a legal case is the jury asked to consider whether it is moral to jail the defendant, only whether the law says they should be jailed. At best, the only moral leeway the legal system has is the judge's ability to vary the magnitude of a sentence (which in many countries is severely hampered by mandatory minimums).
An individual's morality may occasionally line up with the law (especially if one's subjective axiom is 'Don't break the law'), but this alignment is rarely, if ever, deliberate; it is a coincidence.
As to who likes being in jail: many a person has purposely committed crimes and handed themselves in to the police because they prefer being in jail to being homeless, or prefer the lifestyle of living and surviving in jail to having to engage in the rat race, and so on.
Their utility function rates the loss of utility from jail's lack of freedom as smaller than the gain in utility from avoiding the rat race, living on the street, etc.
Their morality can be objectively analyzed through science: by simulating their utility function, we can objectively determine which actions are likely to lead to high expected utility. But their particular choice of a function that values being in jail over the harm done to others via crime is completely subjective.
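The separation can be sketched in code. Everything here is hypothetical (the actions, outcomes, probabilities, and weights are illustrative placeholders, not real data): the expected-utility computation is the same objective procedure for every agent, while the weight vector is the subjective axiom that varies from person to person.

```python
# Hypothetical sketch: the objective part (computing expected utility) is
# identical for all agents; only the subjective weights differ.

# Possible outcomes of each action, with assumed probabilities.
OUTCOMES = {
    "commit_crime": [({"freedom": 0, "shelter": 1}, 0.9),   # caught: jailed but housed
                     ({"freedom": 1, "shelter": 0}, 0.1)],  # not caught: free, unhoused
    "stay_on_street": [({"freedom": 1, "shelter": 0}, 1.0)],
}

def expected_utility(action, weights):
    """Objective step: expectation over outcomes, given subjective weights."""
    return sum(p * sum(weights[k] * v for k, v in outcome.items())
               for outcome, p in OUTCOMES[action])

# Subjective step: two different (hypothetical) axioms.
values_shelter = {"freedom": 1.0, "shelter": 3.0}  # prefers shelter
values_freedom = {"freedom": 3.0, "shelter": 1.0}  # prefers freedom

for weights in (values_shelter, values_freedom):
    best = max(OUTCOMES, key=lambda a: expected_utility(a, weights))
    print(best)
# → commit_crime
# → stay_on_street
```

The same objective machinery yields opposite prescriptions depending on which subjective weights are plugged in, which is the point: science settles the `expected_utility` step, not the choice of weights.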
It is beneficial to be able to separate these two components, because there may be (and likely are) many cases in which someone is objectively poor at maximizing their own utility function, and it would be helpful to steer them in the right direction without getting bogged down in their choice of axiom.
None of this is about 'avoiding being punished for breaking arbitrary rules', it is about maximizing expected utility, where the definition of utility is subjective.
There are many evolutionary and biological reasons why people might have similar subjective axioms, but there is no justifying a particular subjective axiom. If you claim there is, then take the example of a psychopath who claims "I do not value life, humanity, community, empathy, or any other typically recognizable human 'goods'. I enjoy torturing and murdering people, and do not enjoy jail or capital punishment, therefore it is good if I get to torture and murder with no repercussions", and demonstrate how you can objectively prove that most extreme of example statements false. And if you find that task easy, feel free to try it on any more realistic and nuanced set of justifications for various actions.
Even if that were true (it isn't, since laws do not map to morality), it wouldn't really have anything to do with the is-ought problem unless you presume that the entity implements a utility function which values not being jailed (which is exactly the subjective axiom that allows the bridging of is and ought in my analysis above).
Moral oughts are no different from any other kind of ought statement. Almost all of my post is formulated in terms of a generic policy and utility function anyway, so you can replace it with a moral or amoral ought as you wish. If you dislike the ice cream example, the same point is trivially made with any other moral ought statement.
I also feel like this conundrum is pretty easily solved, but I have a different take on it: one which analyses both situations you've presented identically, although it ultimately reduces to 'there is an is-ought problem'.
The primary thrust of my view on this is: All 'ought' statements are convolutions of objective and subjective components. The objective components can be dealt with purely scientifically and the subjective components can be variously criticised. There is no need to deal with them together.
The minimal subjective component of an ought statement is simply a goal, or utility metric, against which to measure your ought statement. The syllogism thus becomes: if the policy scores highest on the utility metric, and if one subscribes to that utility metric, implement that policy. The first clause is completely objective and addressable by science to the fullest extent. Whether or not one subscribes to the utility function is also completely objective. But which utility function one chooses is completely subjective. The conclusion then follows directly.
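The structure of the syllogism can be made concrete with a rough sketch (all policy names and scores below are invented placeholders): finding the highest-scoring policy is an objective computation, while the scoring function itself is the subjective axiom supplied from outside.

```python
# Hypothetical sketch of the syllogism's two clauses: the argmax is
# objective; the utility metric passed in is the subjective axiom.

POLICIES = ["ban_x", "tax_x", "ignore_x"]

def best_policy(policies, utility):
    """Objective clause: purely a computation once `utility` is fixed."""
    return max(policies, key=utility)

# Subjective clause: two made-up axioms, expressed as score tables.
scores_a = {"ban_x": 1, "tax_x": 5, "ignore_x": 2}
scores_b = {"ban_x": 4, "tax_x": 0, "ignore_x": 3}

print(best_policy(POLICIES, scores_a.get))  # → tax_x  (under axiom A)
print(best_policy(POLICIES, scores_b.get))  # → ban_x  (under axiom B)
```

Nothing in `best_policy` can adjudicate between `scores_a` and `scores_b`; the choice of metric has to come from somewhere outside the computation, which is where the subjectivity lives.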
The objective components can be addressed objectively, through science and evidence. We can only hope that the subjective component (the choice of utility function) is well constrained by human biology (and there is objective evidence that it is), but we cannot justify any particular choice.
If we apply this to the logical approach described above, the chosen utility metric/function is just an axiom, and the rest follows objectively and logically. If we apply this to the dialectical approach, then we have not removed the axiom, only moved it.
When you argue with me about how creamy the ice cream is, and how great the chocolate chips are, you are appealing to *my* axiomatic utility metric. So even from the dialectical point of view you've still not solved the is-ought problem; you've just pushed the responsibility of connecting the is to the ought onto the victim of your rhetoric.
Essentially this dialectical approach performs the two easy bits of the computation: objectively determine that your victim maximises X, objectively determine how to maximise X, then prescribe the action to your victim. But at no point has the ought been bridged; an existing, arbitrarily chosen, non-scientifically-justified ought has merely been exploited.
After your prescription, correctly made, the victim, rather than you, says *"Oh yes, I ought do that"*, and while you might never need to implement anything unscientific to reach this resolution, there is no doubt that your victim didn't bridge that gap with science or logic.
As an aside, I think it is equivocation to talk about this kind of probability as being the same kind of probability that quantum mechanics leads to. No, hidden variable theories are not really worth considering.
But projectivism has been written about for quite a long time (since at least the 1700s), and is very well known so I find it hard to believe that there are any significant proponents of 'frequentism' (as you call it).
To those who've not thought about it, everyday projectivism comes naturally, but it falls apart at the slightest consideration.
When it comes to Hempel's raven, though, even those who understand projectivism can have difficulty coming to terms with the probabilistic reality.