Death, Existentialism, and AI

post by BereczFereng · 2015-03-01T17:13:27.770Z · LW · GW · Legacy · 0 comments
A while back, I posited two questions, listed below, and tried to expand upon them. I am presenting this text as 'quote text' because I realize the writing and thinking are likely flawed in places. (For example, the assertion of something like life-bias: I have no articles or research to cite here.)

I am posting them on LW because I have not resolved either of the two questions or found reason to dismiss them. If they are worthy of pursuit, I am uncertain how to go about pursuing them. So, here I seek some constructive input.


(1) Do physical and metaphysical conditions ensure certainty of death? If so, what type(s) of death?
(2) Will Friendly AI renovations of existential epistemology behave in ways that deflect nihilism?

I assume that (1) is clear enough. I am asking whether death is a certainty, and whether it is something we must accept. Answering it requires work in physics, biology, metaphysics, and the like. It also requires considering entropy and possible cosmological theories, like the heat death of the universe. I believe some respond by invoking Many Worlds: claiming that other selves live on in other universes, and choosing to identify with those other selves. There must be many more responses to this question. I just have not spent enough time considering them all to commit to a position.

Question (2) is vague, and it is unclear what is meant by existential epistemology. I believe that humans suffer from something I will call life-bias: the tendency to maximize life-span or otherwise minimize existential psychological tension (by believing in unscientific higher purposes, like God). When this bias is exposed, and death is accepted as an inevitable outcome, nihilistic attitudes seem only natural.

I am wondering whether modifications to these basic conditions (of life and death), especially in the possession of super-intelligence, will affect the existential cognition that has led to certain values. For example, notions like 'objectivity' and 'truth' may not appear valuable to such a super-intelligence, because there would be no life-death problem for these notions to help solve. The quest for objectivity and truth has enabled human scientific progress, as well as the corresponding technological and medical advancements.

It seems that intelligence augments cognitive sensitivity, and this includes existential cognitive sensitivity. More intelligent people are more likely to commit suicide, and gifted youth, for example, have a greater disposition toward existential depression.
 
If (1) is a scientific question, then (2) may be a philosophical one. It may be true that death is an inevitable phenomenon, but a superintelligence would still exist under existential conditions different from those of current humans (because of its minimized risk of death). It is hard to predict what extra cognitive sensitivities and greater power will do to the value system of an AI, but it is possible that even an originally Friendly AI will arrive at existential conundrums to which there are, currently, no fulfilling answers. Who knows what can happen then?
 

My inquiries are about ultimates: the absolute nature of reality. This may be ambitious. Many people, I find, are not even sympathetic to nihilism, or they think there are much easier ways of dealing with it. Since I personally am inclined to ask about higher truths, these other approaches have not yet seemed evident to me.
