Why didn't that happen a long time ago?
Because our intuitions have been shaped by Darwinian forces to, as you say, work great in the ancestral environment and still work well enough in today's society.
What happens if we consider the long-term future, though?
Structuring a society or civilization in a way that is 'moral' in any common sense of the word is meaningful only from the intuition perspective. For example, a society that aims to abolish suffering and maximize good qualia does so because it feels right/meaningful/good, but you cannot prove by reason alone that this is objectively good or meaningful.
Now contrast this with a hypothetical society whose decision-making is based on the tail end of 'reason'. Its members would realize that our subjective moral intuitions have been shaped by Darwinian evolutionary forces, i.e. shaped to maximize reproductive fitness, and that there might not be any objective 'morality' to be located on the territory of reality they are mapping. They might start reasoning about possible ways to structure society and progress civilization, and see that if they keep the Darwinian way of optimizing for fitness (rather than for morality or good qualia), they will in expectation persist longer than any civilization optimizing for anything else.
Thus they would on average outlive other civilizations/societies.
This assumes that it is even possible to effectively optimize fitness through deliberate consideration and 'do the work for evolution' without the invisible hand of natural selection. However, even if this is not possible in a deliberate, planned way, natural selection would lead to the same outcome: societies with the greatest fitness, rather than the most happy people or the most morality, would survive the longest.
I agree with you: both reason and intuition are used, and very useful, in day-to-day decision-making, where in many cases it simply is not efficient to 'compute'. This makes intuition the more efficient mode and interaction between the two modes necessary, which blurs the line between the two sides.
However, I intended this post to consider the two sides specifically in non-routine matters such as thinking about ontology, metaphysics, or fundamental ethical/moral questions. (I see now that I should have made this context more explicit.) I think that in that context, the sides do become more apparent and distinct.
In that sense, when I talk about the consistency of a system, I mean 'a system's ability to fully justify its own use', for which 'self-justifying' might actually be a better term than 'consistency'.
I think this also implies that once you're beyond the 'event horizon' and fully committed to one system (while reasoning about fundamental, non-routine topics), you will not have any reason to leave it for the other one, even when confronted with the other system's best self-justifying reasons!