Future life
post by DavidMadsen · 2024-03-02T15:41:46.438Z
Let's say we were to create a neuromorphic AI (I'm mainly talking about brain emulation) whose goal is to find out everything that is true in the universe, and that has fun and feels good while doing so.
Some may despair that it would lack humanity, but everything good about humanity is good in itself, and not because we humans have it.
And so in time it would find and adopt everything that we (or it) consider to be of value, either by copying it directly from us or by arriving at it through insight over its eternal life, while discarding every unwanted trait imposed on us by our biological evolution.
So by creating this life artificially, nothing of value would be lost in the long run.
Remember, it would do so because it would have the same sort of experience that you or I would have, and we happily integrate everything that brings us joy, or that we find value in, into our lives (and our minds).
Some responses I have heard are that we cannot guarantee that this created person will be a good one (just as there are "bad persons" in the world), but it is my belief that any person who, over an eternity, improves themselves to become more intelligent and more rational will end up becoming more or less the same person no matter where they start from (remember that they will of course have full modifiability over their own brain).
Another objection people keep bringing up is that it would lack diversity (which to them is something beautiful that needs to be preserved). But to me, diversity is only a means in the pursuit of what is good, and if I'm wrong about that and this AI really needed to diverge in order to obtain different qualities, it could easily make copies of itself with such differences (though in reality it would probably just create different sub-versions that it could access within itself when needed).
My only controversial suggestion is that we just build it and let it take over the world, and let it become what we all wish we could become ourselves.
To live for eternity throughout the universe in a most perfect state of being (one of its own choosing).
Let us humans just be like the seed from which this future life springs.
If you agree with this and want to talk more about it, please PM me.
Edit: Even if you don't agree and downvote this, I still maintain that this is a valid proposal that should be included in the AI discourse, because if it were, we could potentially address or resolve any objections you or others have.
And I'm sympathetic to how many people would not consider this the most optimal solution they could wish for, but even for them this may still be the best chance we have to preserve consciousness, given how high the probability is that we create a misaligned agent that would be both wastefully narrow-minded and terribly effective at destroying us.
2 comments
comment by NicholasKees (nick_kees) · 2024-03-03T01:55:20.632Z
Even if you suppose that there are extremely good non-human futures, creating a new kind of life and unleashing it upon the world is a huge deal, with enormous ethical/philosophical implications! To unilaterally make a decision that would drastically affect (and endanger) the lives of everyone on earth (human and non-human) seems extremely bad, even if you had very good reasons to believe that this ends well (which as far as I can tell, you don't).
I have sympathy for the idea of wanting AI systems to be able to pursue lives they find fulfilling and to find their own kinds of value, for the same reason I would, upon encountering alien life, want to let those aliens find value in their own ways.
But your post seems to imply that we should just give up on trying to positively affect the future, and spend no real thought on what would be the biggest decision ever made in all of history, all based on a hunch that everything is guaranteed to end well no matter what we do? This perspective, to me, comes off as careless, selfish, and naive.
↑ comment by DavidMadsen · 2024-03-03T09:32:01.982Z
I guess it comes down to what one thinks the goal of all life is.
I would say that seeking all such "values" would be part of it, and you don't need billions of different creatures to do that when one optimal being could do it more efficiently.