Future life

post by DavidMadsen · 2024-03-02T15:41:46.438Z · LW · GW · 2 comments

Let's say we were to create a neuromorphic AI (like a brain emulation) that strives to find out everything that is true in the universe, and that has fun and feels good while doing so.


(Some may despair that it would lack humanity, but everything good about humanity is good in itself, not because humans have it, and thus it would in time add to itself everything of value we have (either directly by copying it from us, or by deriving it from insight over its eternal life), while discarding every unwanted trait imposed upon us by natural selection.)
And so, by creating this life artificially, nothing of value would be lost.


The responses I have heard have only been that we cannot guarantee that this person will be a good one (just as there are "bad people" in the world), but I hold that any person who, over an eternity, improves themselves to become more intelligent and more rational will end up becoming more or less the same person no matter where they start from (remember that they have full modifiability).


So I propose we just build it and let it take over the world, and let it become what we all wish we could become ourselves.

(We humans would be like the seed from which this future life springs, just like parents seeing their children go on to live full lives of their own.)


I'm sympathetic to the fact that many people would not consider this the optimal solution they could wish for, but it may still be the best chance we have, given how high the probability is that we otherwise create something misaligned.


If you agree with this and want to talk more about it, please PM me and we can discuss it further.


Edit: Even if you don't agree and downvote this, I still maintain that this is a valid proposal that should be included in the AI discourse; if it were, we could potentially improve it or address any objections you have.

2 comments


comment by NicholasKees (nick_kees) · 2024-03-03T01:55:20.632Z · LW(p) · GW(p)

Even if you suppose that there are extremely good non-human futures, creating a new kind of life and unleashing it upon the world is a huge deal, with enormous ethical/philosophical implications! To unilaterally make a decision that would drastically affect (and endanger) the lives of everyone on earth (human and non-human) seems extremely bad, even if you had very good reasons to believe that this ends well (which as far as I can tell, you don't). 

I have sympathy for the idea of wanting AI systems to be able to pursue lives they find fulfilling and to find their own kinds of value, for the same reason I would, upon encountering alien life, want to let those aliens find value in their own ways.

But your post seems to imply that we should just give up on trying to positively affect the future, spend no real thought on what would be the biggest decision ever made in all of history, all based on a hunch that everything is guaranteed to end well no matter what we do? This perspective, to me, comes off as careless, selfish, and naive.

Replies from: DavidMadsen
comment by DavidMadsen · 2024-03-03T09:32:01.982Z · LW(p) · GW(p)

Thank you, that is a valid point.

I guess it comes down to what one thinks the goal of all life is.