Musings of a Layman: Technology, AI, and the Human Condition

post by Crimson Liquidity (crimson-liquidity) · 2024-07-15T18:40:28.065Z · LW · GW · 0 comments

Contents

  Is Perpetual Technological Growth Degenerative to Humanity?
  What Does it Mean to be Human?
  Pertinent Philosophical Considerations
  Artificial Intelligence and Transhumanism
  The Consciousness Conundrum
  The “Artificial Soul”
  Is the “Artificial Soul” an Inevitable Consequence of Humanity?
  Conclusion

The following represents broad and general thoughts on the advancement of technology and the development of artificial intelligence. My thoughts on these matters are in their infancy, non-exhaustive, and should not be taken as commentary on the veracity or plausibility of any scientific or technological development. This discussion also does not address religious, spiritual, or other mystic-based beliefs or theories that one could argue impact or resolve the issues presented. Finally, this discussion presupposes that simulation theory and any related theories are inapplicable (because otherwise many of these discussion points would be moot).

 

Is Perpetual Technological Growth Degenerative to Humanity?

At our core, humans seem persistently driven towards growth and exploration (of the tangible and intangible, of material and of ideas). Our emotions are a fundamental feature of our humanity and a primary driving force behind our accomplishments, discoveries, and creations. Some argue that without the downsides of life there cannot be upsides. For example, without sadness, happiness arguably would be less meaningful (what is it to feel good when you have never felt neutral or bad?). Context and perspective are vital to our understanding of the nature, scope, and characterization of all things.

Because of this “humanity,” the potential long-term growth path of technology raises numerous complex issues.

Simply put, while there may be perpetual technological growth in the traditional sense, this growth eventually could come at the cost of the “human experience.”

One could envision reaching a technological point at which eliminating, nullifying, or modifying the aspects of “human nature” seen as negative (the proximate causes of suffering, pain, and other problems commonly associated with the darker side of humanity) inadvertently destroys the positive aspects of humanity as well, leaving a (for lack of a better word) robotic shell with maximum longevity but without the capacity for “meaningful” experiences. This also raises the question: how meaningful is life if you cannot die?

 

What Does it Mean to be Human?

A primary question that invariably will need to be addressed moving forward is “what is the purpose of life?” What does it mean to be a “human,” and what do we want humanity to be? These are likely subjects on which consensus will be extremely difficult to reach.

The answers to these questions likely are fundamental in evaluating what trajectory of technological advancement humanity should embrace.

To better understand these issues, it is helpful to consider various potential goals.

If the goal is simply maximizing longevity of life, then one of the most extreme iterations would arguably require annihilating the seemingly fundamental features of humanity and creating emotionless, pragmatic forms whose sole and absolute function is maintaining the health of the planet and producing and acquiring the resources necessary to maintain and continue life. Taken further, technology could facilitate effective immortality. Even if accidental death remained possible, some people predict that technology may one day facilitate the existence of “back-ups” of yourself such that, in their view, death would lack permanence. This can be analogized to worker bees in a colony, except that here the sole and absolute focus is the maximization of longevity and perpetuation of the human species without the need for individual autonomy.

On the other end of the spectrum, if the goal is some form of maximum enjoyment, happiness, or pleasure, then one of the most extreme iterations would arguably involve technology that allows humans to receive 100% of the emotional reward without any output effort. It is generally understood that the brain inherently trends towards the lowest possible effort exertion for the highest possible reward; our brain seems to prefer laziness. Thus, in this scenario, the technological pinnacle would be establishing a way to cause the brain and body to produce all the chemicals needed for a person to feel the desired positive emotions and/or physical sensations without having to take any meaningful action. In simplistic terms, a device interfaced with the brain and body that allows a person to feel “pure bliss” continuously. Potentially, with enough technological advancement, all of a person’s survival needs (food, water, etc.) could be automated completely, and people could remain inert while experiencing maximized physical and emotional satisfaction.

Both of these cases represent potential results of perpetual technology growth and show how a simple and seemingly beneficial goal could become something much starker when grown perpetually.

Arguably, once we reach the precipice of emotionless immortality or the inert bliss generator, we likely will forestall any additional growth or advancement outside the delineated goal (and perhaps there will be no further growth or advancement at all if we have reached a point where the driving forces of advancement no longer exist).

 

Pertinent Philosophical Considerations

With all of this in mind, one should ask, among other things, the questions taken up in the sections that follow.

 

Artificial Intelligence and Transhumanism

The most extreme examples of the potential consequences of this perpetual technological growth model likely require some form of integration of artificial intelligence with humans. Like perpetual technological growth, human integration with AI also invokes a myriad of complex considerations.

These issues branch into the realms of morality, ethics, politics, economics, and more. For example, one could imagine a capitalist model where “immortal” life is available only to billionaires and the hoarding of wealth, land, and property becomes locked in perpetuity (the rule against perpetuities exists for a reason). Alternatively, perpetual debt instruments could be used to effectively enslave the proletariat into eternal servitude. Moreover, those with access to and control over the most advanced technological developments likely will have ever-increasing power over those without such access and control.

Based on the current state of the world (and arguably the entire recorded history of mankind), there should be concern that technological advancements will be used to increase class, wealth, and power disparity across the world.

 

The Consciousness Conundrum

In discussing the future of technology and artificial intelligence, some have posited that people eventually will be able to duplicate, back-up, transfer and modify the entirety of their being through or into “artificial receptacles” of various form and function. As alluded to above, they argue this as a means, among other things, to enhance the longevity of one’s existence and potentially obtain a form of immortality.

The viability and advisability of this form of maintained existence cannot be discussed without addressing the implications of consciousness.

The following discussion requires distinguishing two perspectives on consciousness. The first can be referred to as “Personal Awareness”: what each person perceives from their own perspective, the unique and individual awareness that is currently reading these words. The second can be referred to as “General Consciousness”: the general idea of consciousness as commonly attributed to human beings. A fundamental aspect of humans is that they have consciousness and awareness. When you see another human, you understand that they have General Consciousness because they are human. You and the other human, however, each have separate and distinct Personal Awareness.

With this distinction in mind, the idea that Personal Awareness can be transferred separate from the biological components comprising a specific person is logically untenable.

In the case of perfect identical duplication (assuming such is possible and one day achieved), the duplicated construct seemingly would have General Consciousness and would have its own unique Personal Awareness that is distinct from the original source human. This proposition arises from the paradox that would be created otherwise. For example, assume a scenario where a perfect identical duplicate is created and stored in an active artificial construct (perhaps some form of robot) in a manner that leaves the human source alive with continued active existence. Would the human source have Personal Awareness as the human and as the artificial construct? If yes, that arguably would create a paradox (i.e., how would someone be simultaneously conscious of two separate existing perspectives?). More likely, the human source maintains and continues with the Personal Awareness of only itself as that specific human, and the duplicate artificial construct has a separate and unique Personal Awareness of its own.

Because of the identical duplication, however, the artificial construct arguably would believe and attest that it is the same continual conscious observer as existed in the human. Moreover, to any third-party observer, the artificial construct would be no different than the source human and it arguably would be impossible to accurately determine whether there was a continuity of consciousness (and for all intents and purposes, to the third-party observer it would be as if there was a continuity of consciousness even if there was not). Thus, the only evidence that Personal Awareness did not transfer from the human to the artificial construct would be the testimony of the human who could verify that they have Personal Awareness as themselves but not through the artificial construct.
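As a loose analogy (an illustration, not an argument), this distinction resembles the difference between equality and identity in programming: a deep copy of an object can be indistinguishable from the original by content at the moment of copying, yet the two are separate entities that diverge independently afterwards. The sketch below uses a plain Python dictionary as a hypothetical stand-in for a person’s encoded state.

```python
import copy

# Hypothetical stand-in for a person's encoded state (memories, traits).
source = {"memories": ["childhood", "first job"], "traits": {"curious": True}}

# A "perfect duplicate": identical in every observable respect when created.
duplicate = copy.deepcopy(source)

print(source == duplicate)   # equal by content ("General Consciousness" indistinguishable)
print(source is duplicate)   # not the same entity (separate "Personal Awareness")

# After duplication, each continues independently; a change to one
# does not appear in the other.
source["memories"].append("post-duplication experience")
print("post-duplication experience" in duplicate["memories"])
```

The third check prints `False`: the duplicate never acquires the source’s post-duplication experiences, mirroring the claim that the two streams of awareness are distinct from the moment of copying.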

The implications of this are extensive and important. First, if duplication or transference requires the organic death of the human source, then there would be no means by which to accurately verify whether a continuity of consciousness occurred. Researchers may be convincingly misled by the assuredness of the artificial construct that Personal Awareness transferred from the human, because the construct likely will truly believe consciousness transferred due to the fact that the construct is otherwise identical to the source human (memories, etc.) and has its own Personal Awareness. It is unclear whether there would be any way by which the artificial construct could accurately confirm whether continuity of consciousness occurred (as it would seem to the artificial construct that at one moment it was the human source and the next it was the artificial construct).

Second, consider duplication or recreation that first occurs after the death of the human source. This would be a “back-up” scenario in which technology is developed to fully encode a person’s entire being digitally or through some other medium; the digitized duplication is created and maintained, and at some future date technology allows the integration of that information into an artificial construct. The same issues discussed above would arise in this scenario. The one difference is that, absent data manipulation, the artificial construct should come into existence understanding its own existence as of the time the requisite data was acquired and stored from the human source. If the human source lived beyond that point and no “update” was provided to the requisite data, the artificial construct should have no memory or knowledge of any post-encoding existence of the human source (which in and of itself would be evidence that continuity of consciousness did not occur). Nevertheless, the artificial construct likely would still adamantly assert and believe that its Personal Awareness is the same Personal Awareness that existed in the source human, for the reasons highlighted above.

Whether continuity of consciousness would occur if the relevant biological elements of the human source were maintained and integrated with the artificial construct is unknown but logically seems much more likely than if no biological elements are integrated.

 

The “Artificial Soul”

From a practical perspective, this means that AI or technology-based duplication or “back-up” of a human is not a viable means for that human to attain extended life or immortality. To a third-party observer, it effectively would appear as if the human source continued living, but the construct would not be a continuation of the Personal Awareness of the human source; it would be a separate conscious entity with its own unique Personal Awareness.

The reasons this is important are extensive. In particular, it would be vitally important for any potential human source to understand that from their perspective they will cease to exist, and they will have no awareness or experience from the perspective of the artificial construct. It is similarly important for the third-party observer to understand that despite the convincing appearance otherwise, the duplicated or replicated artificial construct is not the same conscious observer as existed in the human source, and the existence of the artificial construct does not change the fact that the human source’s Personal Awareness has ceased to exist. From the consciousness perspective, the human source and the artificial construct must be consciously separate as a matter of logic due to the paradox arising from simultaneous existence.

Although this is all merely speculation, it is worth considering what it means for people if they cease to exist to themselves but continue to exist to all others.

 

Is the “Artificial Soul” an Inevitable Consequence of Humanity?

Notwithstanding the foregoing, it is difficult to deny that, in the short-term, advancements in technology and AI have the potential to meaningfully improve quality of life globally and in various ways.

If we believe, however, that absent a cataclysmic event, perpetual technology growth is inevitable, then one cannot help but wonder if the ultimate attainment of certain goals would be the effective elimination of “humanity” and the creation of something else, something fundamentally different from the generally understood persistent historical traits and aspects of humanity (leaving for now any debate on whether that outcome would be “good” or “bad”).

Admittedly, a fair counterargument is that these persistent traits of humanity are the reason humans are where they are currently and also the reason humans eventually would reach a point of technological advancement that effectively eliminates “humanity”; in which case one could argue that such an end-point is not anti-humanity but rather a function and necessary consequence of humanity (and thus, no matter what, inevitable). In such a case, one could further argue that any attempt to curb or impact this outcome is not in the interest of humanity but rather directly opposed to its fundamental nature.

 

Conclusion

What the correct view is, and whether a correct view even exists, remains unclear. Nevertheless, as we continue to exponentially grow our technological capabilities, these and related issues should be carefully and critically considered and discussed with particular appreciation of the complexities and nuance involved.
