Severance and the Ethics of Conscious Agents

post by Crissman · 2025-04-21T02:21:49.202Z · LW · GW · 0 comments

***Severance Spoilers!***

Nick Bostrom discusses coherent extrapolated volition as a goal of AI alignment, specifically to avoid locking in our current moral code, which likely contains many things future generations would find unethical, just as we find much of previous generations' morality unethical. Since reading that, I've been wondering which things we accept today might alter the trajectory of the future.

Another of Bostrom's conjectures is the Simulation Argument, which posits that a technologically mature future civilization would likely run many historical simulations, together containing orders of magnitude more consciousnesses than the original history; if so, we are probably among the simulated consciousnesses.

My personal counterargument is that our morals are likely to evolve against creating large numbers of consciousnesses. The EA movement already includes avoiding causing pain to other creatures, even when their consciousness is questionable ("Save the shrimp!"). Another facet of this is forced labor: is it acceptable to create a consciousness to do work for you?

Eventually, I expect we will solve what consciousness "is" and be able to confirm whether an entity is conscious. After that point, we'll likely be able to design agents with or without consciousness. Having non-conscious entities do our work may also be computationally cheaper, which could be another motivator against creating conscious agents.

The TV series Severance, in which separate consciousnesses are housed within the same body, foreshadows this developing morality around consciousness. In the show, failing to continue providing experiences for an artificially created consciousness, even when the body and the original consciousness survive, is frequently labeled murder. Retiring the consciousnesses of multiple "innies" (work-only consciousnesses) is tantamount to mass murder.

What do you think? Is it ethical now to make a conscious agent to do work for you, and then retire it afterwards? Will it be considered so in the future?
