Simulations and Altruism
post by FateGrinder (nicolo-moretti) · 2024-06-02T02:45:49.783Z · LW · GW · 2 comments
Omniscience is impossible, powerful beings created the simulation we live in, karma is real, there's an afterlife, most superintelligent beings are already aligned and altruistic by nature.
We live in one of the best possible worlds.
Those are some of the conclusions that might seem reasonable after reading this text.
However, I'm not going to go straight for those topics. Instead, I'll first show more generally how, due to constraints of reality, capable and powerful beings might be pushed toward a specific theme of behaviors and actions.
First I discuss how certainty is not only unattainable but also logically impossible, both for humans and other beings.
After that, I talk about how the use of logic (or reason) is justified in spite of that.
Afterwards, I apply those conclusions to a scenario involving powerful beings and examine how their behavior might be affected.
Then I explore the scenario in greater depth, reducing its requirements and making it more likely.
Finally I inspect the consequences on us (and possible conclusions).
The existence of such constraints on powerful beings would influence not only us but all other smart beings too: their behavior towards us and ours towards them.
Uncertainty
How can you be sure of something? And how could you be sure to be right?
How could anyone be sure of anything?
Can you ever be certain? And could you even trust yourself on that conclusion?
Could you confirm whether or not you live in an elaborate Truman Show? Whether or not you live in a dream? That the universe didn't start yesterday, with everyone's memories already formed? That there is a god? That we are or are not in a simulation? Does everything exist while we aren't looking at it?
There exist questions that can never be answered: claims that will always lie outside your ability to demonstrate true or false.
Is there anything we can be certain of, if even our senses aren't certain?
Can I even trust the logic I used to reach such a conclusion? Can I trust logic at all?
Descartes said "I think, therefore I am", but couldn't my thoughts be also an illusion of some kind? Are my thoughts even mine? What if I was just hearing someone else's thoughts and had none of my own?
I started believing there is something even more real than thoughts, or me, or everything else, something that surely exists:
Perception (the way things seem to be to us), our raw experience of reality without further interpretation.
It's not quite the same as our senses, which can be lied to. The truthfulness and correctness of our senses is questionable.
Perception, in the sense I use it in this text, is everything you seem to experience, the raw everything. It's neither true nor false; it's whatever we experience, regardless of its relationship with reality. It just is.
Look at an object near you: that object may not exist, at least not as you think it does. When you touch it, do you really touch it, or do you just receive the sensation that you did? How does reality present itself outside of your brain? Is it three-dimensional? Do you even have a brain? Are you really seeing that object with your eyes?
But you can be sure of one thing: the perception of what you think you may be looking at exists. The raw experience or image or qualia exists. It may have who knows what relationship to the true mechanisms of reality, but still, your experience itself is real.
Your thoughts may make little sense, but the perception of you hearing them in your head exists. The voice of the person you may be talking to may not exist as sound, but the perception of you experiencing it is real.
Your perceptions are in fact all there surely is, to you at least, while the material world may not be.
More may exist, but who knows.
If it helps, think about how a specific belief could be baseless in reality, but it would still itself exist. Going a step further, the same very belief itself may not even exist, but the experience/perception of it existing does.
In fact, neither the chair, nor even the perception of it may exist, but the perception of the perception of it existing may.
And if you like to bring it one step further, maybe the perception of the perception does not exist, but the perception of the perception of the perception of the chair may.
Keep it up for as long as you like, but at some point one of those things has to exist for us to think this whole reality is happening.
So something does exist rather than nothing, at the very least.
But how can we trust this reasoning? How can we trust any reasoning? Logic itself?
How can we trust any reasoning?
After thinking about how little can ever be known for certain, one may think that discarding all reason may be justified.
If I can't even prove I'm a human on earth, why try to reason at all?
But you can't justify abandoning logic by using logic. Say I told you I will stop using logic because I cannot be sure logic is real, or because everything is uncertain. To do so I would have used logic. Once I discard logic I also lose the only reason I could have had to do so, and therefore the reasoning for discarding logic would no longer be valid.
I cannot justify not using logic: any argument against it uses logic, making itself invalid.
Nobody can ever be sure of anything, besides the existence of perceptions and the existence of a (maybe self-imposing) logic.
Still, why should we care, or even try using logic when our condition is so uncertain?
It's important to remember that within our limits, with our logic too, no final answer to anything will ever be found, since we cannot really know anything for certain.
So why even bother?
Well, the idea is that even if you can never be sure of anything, you may be able to tell what possibilities are more or less likely from your own perspective.
In fact, by default, since everything is possible and there are seemingly infinite explanations you can come up with for anything, the probability of any given explanation holding true would be one over infinitely many.
However, we can quickly see that if we were to apply logic, we would have to take into account our personal experiences and knowledge.
While we cannot eliminate any possibility with certainty, we must also acknowledge that some possibilities are more likely than others from our own perspective, since they best fit our logic and experience.
As an example, say you were to buy some ice cream. While that ice cream could conceivably have been created through a magical spell, we must admit that we have never seen something like that happen, and that all the evidence seems to point towards it not having happened. We can then infer that, more likely than not, the ice cream wasn't made through magic.
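To make this intuition slightly more concrete (this is my own illustrative framing, with made-up numbers, and nothing in the argument depends on them), one can phrase it as a Bayesian update:

\[ P(\text{magic} \mid \text{ice cream}) = \frac{P(\text{ice cream} \mid \text{magic})\, P(\text{magic})}{P(\text{ice cream})} \]

Even granting a generous likelihood \(P(\text{ice cream} \mid \text{magic}) = 1\), a tiny prior such as \(P(\text{magic}) = 10^{-9}\) leaves the posterior negligible next to the mundane explanation, whose prior is enormously larger. No possibility is eliminated; some are just weighted far less.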
But why is it useful?
How do we tell that we are justified in using logic for everything?
It's useful because it helps us find behaviors that are more likely to help us.
There is no argument for not using it, nor for why it couldn't be helpful.
Not using it would be wrong because it means pointlessly giving up a better chance of getting closer to whatever we consider good.
All the thinking and logic won't give you the true probability of anything, but it gives you the probability that makes the most sense to you, the one you believe in. You can't avoid believing in what makes sense to you, and you can't believe in what doesn't make sense to you. It's not a matter of preference: you can't discard logic.
The magical ice cream example is extreme, but you can apply it to any basic thing in life. You know for sure that, from your point of view, not everything is equally likely; the proof is that you act in whatever way you consciously or unconsciously deem most beneficial.
Anyway, I said all this to justify the idea that we are allowed to think some things are more likely to be true than others, and so we may discuss them and adopt different behaviors based on that, regardless of (but actually taking into account) the inescapable uncertainty of our condition.
Omniscience
So I was thinking about some kind of god. No god in particular, just some powerful being that was also omniscient, and it came to me that he[1] was unlikely to exist. This is because, while I said everything is possible, we also must follow logic:
Logic tells me certainty is impossible, and omniscience requires certainty, so total omniscience is not possible.
For example, could that god make sure that there wasn't an even higher being keeping her own existence concealed from him? How could he ever know if that higher being was concealing herself, or if there was no higher being at all? He could not.
How could one of them tell if they are living in the first case or the second? How many more higher beings concealing themselves could there be? Maybe none, but how to even know?
In short, how could he know if he knew everything there is to know?
The point is not about him being right or wrong about his omniscience; it's about his inability to give a doubt-free answer through logic.
Someone might say that such a god would be above our logic and therefore not follow it. But can we say, through the use of logic, that a being that violates logic is possible?
Maybe yes. Remember that, logically speaking, our logic could be wrong, maybe just because we lack knowledge or brain power.
If that was the case, we could try to fix our condition, but we would still need to accept our current logic until then.
In fact, while things that defy our current logic could exist, their opposite (or even just something else) could also exist.
The omniscient spaghetti monster that defies logic could be thinking and doing everything and its opposite. Who cares about such an unpredictable and self-defeating creature?
One may ask "what if a specific thing that is illogical in our current understanding happens to be true?".
There are infinitely many things of that kind, illogical things. Some of them may suggest that we behave in a certain way, others in the opposite way.
There is no advantage in considering them.
What if breaking a mirror brings you bad luck, and such a thing is true because it's "beyond our logic"?
What if breaking a mirror brings you good luck, and such a thing is true because it's "beyond our logic"?
What if breaking a mirror shatters the universe twenty times over and makes you be reborn as a spider? Or will it be a grasshopper?
What if the omniscient god, therefore beyond our logic, likes cookies?
But what if he hates cookies?
But what if he made a book that told us he liked cookies?
But what if he likes to say the opposite of what he thinks?
Why? Because he's just beyond our logic.
From our perspective those illogical things are to be considered unlikely, uninteresting, impossible to know, and therefore unimportant.
For now I'd say that the existence of a being that follows human logic is more probable, more interesting, and more worth thinking about than the existence of a being who doesn't follow our logic.
If I were to put honest trust and care into illogical stuff, or more precisely stuff that doesn't make sense to me, I might as well do and say... whatever. I cannot even do that if I try my best.
After all, why would I trust the justification of his omniscience any more than the idea of an ice cream created through a magic spell?
Powerful beings
So a completely omniscient being became unlikely to me. Does this concern me at all? Does believing this have any consequences? What consequence does the lack of omniscience have on other ideas?
If powerful beings, with the ability to do things far beyond current human capabilities, could not have certainty about their condition, wouldn't they do something to "fix it"? Or bypass the uncertainty? Something to give them a higher probability of living a better existence? What if, in turn, they took into consideration similar thoughts from beings even above themselves?
The story of N (premise)
And so I came up with the story of a being called "N" (as in any Number), which I think helps bring some insight. The reason behind this naming choice is that he is part of a countable set of individuals, and all of those individuals share some properties. By saying N, I refer to a generic individual of the set. It also lets me talk about the individual who succeeds him, using the notation N+1 for the successor of a generic N.
It also leaves unspecified whether a given N is the first of the set, N=0, or not; this matters because the first of the set has no predecessor, unlike all the other Ns. However, since no N knows its own number, this doesn't shape the first N's behavior in a way different from the other Ns. This is more of a detail for later; I just wanted to mention it early.
The following should be read as a story about this set of creatures, where the events described did not necessarily happen; however they should be plausible, and could have happened. This requires two qualities to be satisfied:
- The events do not contradict logic, and I ask you to decide if they do not.
- It is believable that the characters may ever feel motivated to pursue the behaviors they follow in the story.
Unfortunately, the motivations of the characters come from them hearing, thinking of, or in any way becoming aware of the themes of this story. This makes it hard for me to justify the characters' behavior without first telling the story itself, which means their driving force may not be understood while reading.
Hoping an analogy helps, picture the following illustrative sub-story.
Book analogy
You are going through your daily life when you stumble on a book. The book is about a character who reminds you of yourself, living through conditions that resemble yours. The book presents no logical contradictions to you. In the book, the protagonist ends up in favorable conditions. It's not strange to assume that if you were to read something that resembled your living conditions, that seemed plausible and logical, and that seemed helpful, you would at least partially let it shape your behavior.
If you accept the previous statements, I will now add an additional detail.
The character that resembled you, in the story, was also motivated by reading a book of such a kind.
As you can see, as long as the book made logical sense (first requirement), and as long as you were to actually let yourself be influenced by such a book (making the motivations of the character more believable, second requirement), then the book would become realistic enough from your point of view.
The book and the story becomes more or less realistic, from your subjective perspective, depending on your choice to let yourself be influenced by the story or not.
Now, I’ll stop talking about the sub-story and go back talking about the story regarding the N beings.
The story of N (story)
While reading the story of N, four things should hold true for every N:
- Having heard the story.
- Finding the story logically sound.
- Believing that enacting the story is beneficial.
- Believing that enacting the story makes it more plausible.
What I ask you to do is to verify, while reading it or after, if you do or do not believe that those four things are verified throughout the story. I'm also gonna verify with you later.
One day like any other, a being called N keeps on existing.
I can't tell you much about N, but I can say he surely has the capability to perform what he just decided to do with no issue (if he couldn't, he just wouldn't be an N). One condition for this story being realistic is therefore the existence of a being able to do the following:
N decided to create a new being that we will call N+1
Right now it's not clear why he wanted to do so (and we also do not know, for now, if he is motivated in a logically believable way), but that's what he did.
N+1 is created similar to N; there are differences, but we can tell they are very similar.
And again, I'm not saying N created N+1 as a clone. N wanted to create a blank-slate being who would go through the same life that N went through when he was younger: a child who would grow up to be like him, because of the similar life conditions, going through a life similar to the one that N himself led.
And that's what happened, because N can do that. Again, your choice in believing if such capability is theoretically possible.
N+1 grew up to be just like N was, as per plans.
But N's plans didn't end here: he swore he would grant N+1 a happy ending or a happy eternity, whichever he felt he and N+1 would have deemed the better choice, whatever that meant for both. I'm aware the motivation for such an action has yet to be provided to you.
N decided that past a certain point it would be fine to let his and N+1's lives diverge, all in order to later give N+1 a life that could be called "Just and Pleasurable" by their own standards and their own interpretation of those values.
N decided he would not confirm or make known his own existence to N+1, at least for a long while, or maybe forever; that we do not know.
Eventually, N+1 had an interesting idea:
N+1 decided to create N+2, a being similar to himself, almost equal, just with small variations.
And so the loop went on, N+3 was born, then a while after came N+20, N+100, and so on.
So while the motivations of the created beings are justified by the fact that they are just like their creators, what reasons did N have to create N+1?
To answer this let's take a look at the perspective of a random N, be it N+3 or N+1511, or N+0, since up to a certain point in time they are more or less alike.
Somehow, in some way, a young potential N (he still has not taken any N-like action) heard or thought about this story.
He thought, and here I ask you to understand the concept of thought in the most general way, not human specific. Let us say he reached a subjective understanding.
He thought: "Well, all those Ns in the story get their happy ending, which is just whatever each of them may think to be the best thing, their own subjective best. I for sure would like to go through something like that, since I do possess desires. However I am not one of them, they are just a story. But wait, maybe I am! Anybody could be an N from the story, given their state of uncertainty. They themselves wouldn't know if they were one of them."
So our N is bothered by the fact that he cannot know whether he is an N, for he would benefit if he were. But what can he do? That's just life under the condition of impossible omniscience, to which everyone is apparently subjected.
But then he gets an idea:
"If I don't create any N+1, considering myself as the N and the N+1 as my creation, then I'm not an N, since all of them would take part to the creation cycle ... but if I was to create some of them and manage their existences like in the little story from before, then I for sure would have a better chance to actually be an N"
So N went on and created N+1.
And to answer our question from before, the one stated above was the reasoning N went through that pushed him to create N+1.
Sadly, this may not be clear enough yet, so I will show this situation from the perspective of a very specific N, N=0.
N=0 is a being capable of doing all the things done by the beings in the story; he also possesses desires, and, importantly, he possesses uncertainty.
He was not created by any N, however he cannot know it, since not everything can be known.
He is exposed to the story, the story makes sense, he recognizes the benefits of being an N.
In an attempt to increase his subjective likelihood of being an N, he starts the cycle by creating N=1.
He increased his chances, from his own perspective, of being an N, by acting like one. Otherwise the chances, in his view, would have been zero.
The consequences of his actions have weight on the lives of the vast amount of beings he created.
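As a hedged aside of mine (not something the story itself states), one way to make this shift in odds concrete is a uniform self-sampling assumption: suppose each being that starts the cycle creates \(k\) successors whose early lives are, from the inside, indistinguishable from its own. Then, among all beings having that experience, \(k\) out of \(k+1\) are created ones:

\[ P(\text{created N} \mid \text{I act like an N}) \approx \frac{k}{k+1}, \qquad P(\text{created N} \mid \text{I never start the cycle}) = 0 \]

The first quantity approaches 1 as \(k\) grows; the second is zero simply by the story's own definition of an N.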
In conclusion, regarding the story of N, I now can’t help but ask you if you find the following believable:
- The story is logically sound
- Beings possessing such capabilities are possible
- Enacting the story is beneficial, regardless of one’s desires
If you lean towards positive answers, what does this imply?
In the next section I explore some details and possible consequences.
But nothing actually changed, right?
One may experience a weird feeling telling them that N=0's actions make sense and don't make sense at the same time.
Of course, if he doesn't behave like an N his chances of being an N are 0, and if he does behave like an N he makes his relative chances of being an N much, much higher.
On the other hand, the past has already happened, and him being or not being an N has already been decided.
It may help to remember that in a deterministic universe, not only has the past already happened, but so, in a sense, has the future, since it has already been determined.
And yet, even if we were to be in a deterministic universe, we would still try to decide what we think is best and how we will act, and behave in terms of probabilities. We would take decisions even if the decisions we would make were to have already been determined.
And yet again, in a deterministic universe, both the past and the present have already happened and cannot be changed.
Probabilities are a reflection of our partial understanding of a cleanly defined reality.
Why would we decide things, if the conclusion of what we will do is already set in stone?
Why would N take part in the creation cycle if the fact of his being or not being an N has already been set in stone?
Why do we try, why do we take decisions?
And importantly, what is a decision and what does it change?
We could think of 'thought' as an automatic process that changes the individual's model of understanding, and of 'actions' as an automatic process dictated by the individual's current conditions.
A 'decision' would then be a process of making sense of one's own knowledge before taking an action.
The individual doesn't know in advance where his own knowledge will settle, but needs to wait for it to settle itself; maybe even to await the completion of other actions that have been automatically taken, in order to finally gather and process information.
The individual can't help but think, and can't help but take the actions that his updated understanding deems best, following his partial understanding of reality.
The mind would then be an automatic process based around satisfying the individual's desires to one's own best ability, through the gathering and processing of information.
But what does it change if the outcome has already been determined?
While the outcome of a decision may have already been determined, it has been determined by (and is the logical consequence of) what happened before: the act of 'thinking' and 'deciding' itself.
The future of a deterministic universe strictly follows from its past.
Even if the result of thought has already been determined, the very fact of the thought having happened has determined the future that will follow.
If an individual believes that he has agency and that his thoughts, decisions, and actions matter, the deterministic future that will follow from holding those beliefs differs from the one that follows from not holding them.
The very fact of being someone who "doesn't try" (effort/decision wise) because it's "already been determined" will cause a different determined future to follow than if someone was the type to "try anyway".
That's why we should decide things even if the conclusion of what we will do is already set in stone. The very fact of being someone who makes decisions changes what has been set in stone.
That's why N should take part in the creation cycle even if the fact of his being or not being an N has already been set in stone.
N being someone who decides to take part in the creation cycle means that what has been set in stone, future, present and past, is different from what it would be if he weren't.
Why the past too? Because the set of reasonable pasts that can lead to a certain present is different from the set of pasts that can lead to a different present.
Further observations, simplifications and consequences.
Why did N not reveal itself to N+1 from the start?
Because N grew up without any N-1 revealing itself to him, revealing himself at the beginning would instantly (the emphasis is on instantly) make N's and N+1's lives diverge, and would therefore not improve the likelihood of N himself being inside the cycle.
Think about N=0 (who cannot know he is N=0): he will never have anyone reveal themselves to be his creator (for he has no creator). If he revealed himself to all the beings he is gonna create, he would make all of his creation fundamentally different from himself, too different. Therefore he wouldn't improve his subjective probability, from his own perspective, of not being N=0.
Diverging is fine, but instant, radical divergence weakens the cycle, since it doesn't improve the chances of being under conditions similar to those of one's own creations as much.
I suppose N will make many N+1s with different times of divergence from N's life, and different timings for attaining the "happy ending". This is so that there is no wrong timing for giving the happy ending: since N will give N+1 a happy ending before N himself reaches one anyway (otherwise N-1 would hold it off for N until N-2 gave it to N-1, and so on, turning the cycle useless because everybody just waits), N may as well make some N+1s get it sooner than N and some a bit later, simulating divergences from N. This would also make the cycle more resilient to slight mistakes...
And, especially, it would make it so it's not weird to not have received the happy ending yet while some of your creations already did.
However!
'N' may try to give himself a happy ending taking into consideration the case where he is not even an N.
If he gives a happy ending to all created beings, not just to N+1 and its successors, then even if N fails in some unknown way to create the cycle correctly (by an unknown margin of error, by which N would then differ from a proper N), N could still be a created being with a happy ending granted. So basically all Ns would give a happy ending to all of their creations, so that in case these Ns were not fully N themselves they would still have a shot at a "happy ending".
This is because possible fake Ns could be close enough to a true N... and N itself cannot tell if he is truly an N or a fake one, so he may as well treat all of his creation well so that, in case he's not a true N, the true N above him will still treat him well, fearing that he himself (the true N) might not be an N either.
Although one could argue that this wouldn't apply to literally every being; for example, surely a rabbit or an ant isn't gonna be an N, right? What about a regular human? We'll come back to this later.
Unknown reward
N may decide to issue an unknown reward for beings who behave altruistically, and an unknown punishment for those who don't (unknown to the created beings, because nobody in his original universe is sure of a system like that being in place, for example).
This karmic-style reward and punishment could exist so that smart beings have more incentives to behave altruistically even if a happy ending is still granted at the end: a happy ending doesn't exclude the possibility of receiving some impactful pain along the way for causing unnecessary pain to others, and beings would also be incentivized to earn more pleasant karmic treatment by being altruistic.
But why make such a system?
In this way N protects himself further in case he is not an N; and even in the case he is an N, since the proper "happy ending" may only come after a lot of time and pain.
For example, a smart but hard to control and dangerous being like a super intelligence (AI for example), would then have to think twice before causing lots of avoidable suffering to other beings (N-like beings included) if the existence of N-like beings is logically sound.
It would also make it easier for N-like beings living in the same reality (or simulation, or whatnot) to trust and help each other, avoiding tragic scenarios where possible and improving their living conditions further.
And as long as that idea makes logical sense (and sounds good), N is "forced" (by his own desires) to apply it to the "simulated" beings, since if he did not he would make things worse for himself.
Nobody wants to suffer, whatever that may mean subjectively, while they wait for a "happy ending". And it's best to keep the smartest and most powerful beings in check while one waits for that ending... by making the act of behaving well towards others the most optimal choice.
However, since N=0's reality doesn't exhibit any neatly observable, real-time karmic rewards during individuals' lifetimes, neither should the simulated realities, so as not to be radically different from the base reality (and thus obviously simulated).
This means that either all of the karmic payoff happens between death and the happy afterlife, in some purgatory-like fashion, or the karmic payoff is somehow applied during the individuals' lifetimes in ways that are kept imperceptible and unmeasurable by the created beings, while still very much influencing their quality of life.
Ns don't need to exist
As we saw, given the additional benefits received by N-like beings (and, to an extent, by all beings), being a proper N is not necessary in order to benefit.
While some benefits can only be granted by Ns (afterlife, karmic rewards), other benefits only require that other very powerful, smart beings believe in the benefit of being altruistic.
Furthermore:
If the N story is plausible but did not actually happen (whether it did is unverifiable), this altruistic and collaborative benefit would still be present.
If the N story is impossible BUT sounds possible to all very smart beings (or a good number of them), it still has effects.
Ants, rabbits and humans
(To answer the question posed before about them.)
I talked about how there may be benefits in issuing karmic rewards to "smart enough" beings, smart enough to potentially have a sort of understanding of "right and wrong" (as defined by their creator) and smart enough to theoretically hold the potential to affect the life of an N-like being.
However, it seems clear that a regular human, while potentially able to set off some butterfly effect with consequences for an N-like being, is not even close to looking like an N-like being.
So, sure, human-level intelligence seems worthy of receiving karmic reward and punishment, but where's the value in giving humans an afterlife too? Aren't they too distant from N-like beings? Remember, the point of the "happy ending" was to make it likely that N-like beings receive it.
And if humans don't get it, neither do rabbits and ants.
Seemingly, there is no reason to reward and punish creatures that could not even vaguely understand the morality of N.
Why reward and punish beings that could not understand the morality of N?
There's no use in that; that was never the point.
N needs to pressure other smart beings who live in the same reality as he does into behaving "well", and he doesn't really gain anything by issuing karma to simulated ants, because real ants won't even understand the concept of a simulated ant, and it won't affect their behavior.
Back to the "happy ending", it sucks a bit for the humans, if they are not N-like enough, who don't get the "happy ending" (just the karmic reward, maybe in the form of a shorter happy ending or a cheaper one), but of course why should N care about random humans that are clearly not N-like? Maybe N's got friends, but those can just be brought along if N wants to, as for the rest, a cheap happy ending at most will do.
Unless N could actually be not even N-like.
But how could that be?
Could you confirm whether or not you live in an elaborate Truman Show? Whether or not you live in a dream?
Could N confirm that about his life?
What if N happened to live through a temporary N-like existence, but happened to wake up only to discover he was a cow chained to a hyper-matrix-dream-logical-like existence?
Why would that even happen? But if it could happen, it's certainly worth granting everything that goes through an N-like experience a "happy ending", if it's cheap enough.
Now, of course, everyone sensible enough in N's base reality will put themselves in conditions to experience a somewhat "N-like experience".
So N has to simulate those kinds of creatures if they are a logical consequence of reality (and give them "happy endings" too, since he could be one of them).
But what if a being merely believed itself to be an N-like being going through an N-like experience? Well, it couldn't know, no N could. Therefore, just in case, N would grant the "full happy ending" to whoever believed, for some time, that they were going through an N-like experience, even if they were not.
But what about, say, the humans who never get that kind of N-like experience, and never felt or believed that they were having one? Never even hallucinated being an N? Is there any reason to give them anything more than perhaps a shallower, purely karmic "happy ending", besides the general benefits along the way?
Anyway, we should remember that, in his base reality, N is just like any other uncertain mortal thing, powerless in the face of the unknown, subject to other unknown beings potentially trying to attack him or somehow oppose his actions.
It's all of them, powerful unknown smart peers, that he has to convince not to get in his way and to collaborate.
When an N-like being creates a cycle, he is dooming all of his peers to being more likely to be inside a cycle themselves with him.
I'd argue he'd be better off just giving everybody (with few exceptions) happy endings in his simulations and calling it a day, also taking into consideration the unknown butterfly effects of possibly weaker beings finding ways to challenge him, all in order to conquer their own happy ending.
By making the cycle less rewarding for some, he just decreases the support from all the beings who might be left out and that could have any kind of unknown influence on his work.
Furthermore, one never knows where the next level of comprehension of reality, beyond the one the individual possesses, will come from.
Say something way smarter than N figures out a whole system, better than the one N can envision, for getting happy endings for itself and all of its close peers in smartness, and N just happens to be as close to this smarter being as an ant is to N.
Reality can be monstrously more complex than anything its inhabitants could ever perceive, each understanding a different amount of its complexity.
If it's cheap enough to not gatekeep rabbits from their would-be-paradise, why not give it to them? Why take a higher risk of turning out to be someone else's neglected rabbit?
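To illustrate the shape of this trade-off (this is purely my own toy sketch; every number in it is a made-up, hypothetical assumption, and the argument doesn't depend on any of them), here is a minimal expected-value comparison between granting happy endings to all created beings versus only to clearly N-like ones:

```python
# Toy expected-value sketch (illustrative only; every number below is an
# assumed placeholder, not something argued for in the text).

def expected_value(grant_to_all: bool) -> float:
    p_created = 0.9          # assumed credence that N is himself someone's creation
    p_not_n_like = 0.3       # assumed credence that, if created, he isn't "N-like enough"
    happy_ending_value = 100.0
    extra_cost = 1.0         # assumed cost of extending endings to everyone ("cheap enough")

    # If every creator grants endings to all of its creations, even a created,
    # not-quite-N-like being is covered; selective granting leaves that case out.
    coverage = 1.0 if grant_to_all else (1.0 - p_not_n_like)
    return p_created * coverage * happy_ending_value - (extra_cost if grant_to_all else 0.0)

print("grant to all creations:", expected_value(True))    # 89.0 under these assumptions
print("grant only to N-like:  ", expected_value(False))   # 63.0 under these assumptions
```

As long as the cost of extending the endings stays small relative to the assumed stakes, the inclusive policy comes out ahead, which is the "why not give it to them?" intuition above.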
I had one more line of reasoning that might show how all humans and similar beings could still get a happy ending regardless, but it isn't justified enough in my opinion, so I shoveled it into a footnote[2]. (Away from my sight; still unsure if it was worth mentioning at all.)
Finally, and more personally, I like to remember that altruism has practical reasons to be in place even putting aside the idea of cycles and the existence of N.
There are good reasons for me and you to be good to conscious beings who feel pain and pleasure (even powerless ones), and if you can recognize those reasons, so can N.
However, the practical benefits of morality outside the N-cycle idea go beyond the scope of this text, since I'm strictly discussing altruism on the bigger scale, so I'll leave that to you.
Plausibility
I think it's an appropriate moment to review the plausibility of the story, and therefore to check the requirements previously imposed.
The first ones stated were the following:
- The events do not contradict logic, and I ask you to decide if they do not.
- It is believable that the characters may ever feel motivated to pursue the behaviors they follow in the story.
I'd say that as long as point 2 is satisfied, and excluding whatever technological/physical limitations may exist, point 1 is satisfied too. That is because I cannot see any unjustified step in any passage.
Still, is point 2 satisfied?
While reading the story of N, four things should hold true for every N:
- Having heard the story.
- Finding the story logically sound.
- Believing that enacting the story is beneficial.
- Believing that enacting the story makes it more plausible.
Point 1 is nothing special, point 2 overlaps with point 1 of the previous set of requirements, and points 3 and 4 seem true as well (provided the reasoning in the previous sections is accepted; see also the "Book analogy" section).
Given this, from the union of points 2, 3 and 4, I would conclude that point 2 from the previous set of requirements is satisfied as well.
Finally:
In conclusion, I now can't help but ask you whether you find the following believable:
- The story is logically sound
- Beings possessing such capabilities are possible
- Enacting the story is beneficial, regardless of one’s desires
This is very much a set of points where one can attach their personal probabilities to any of the statements.
Point 1 is answered by all the other points above.
Point 2 is up to your imagination. (But also see the "Ns don't need to exist" section).
Point 3 is not yet tackled completely, and so I'd like to spend a few words on it.
Desires and preferences
I suppose that if there were very smart beings without desires, they would either not act at all or not act that smartly, and we could just forget about them.
Acting smartly shows a certain consistency towards certain goals, and if they showed such consistency, we could identify their goals and treat such beings as smart ones with specific goals and desires.
Otherwise, if that consistency were lacking, they wouldn't be smart, given their counterproductive or self-conflicting behaviors.
I would then assume that all smart beings who act smartly have desires. And even if their desires were temporary and ever-changing, during the very brief windows of time when they pick a desire they would still pick the optimal path (if they were THAT smart). (If the window is too short, they won't behave that smartly.)
This means that if there were a path that could maximize for any possible desire, all the smart beings interesting to us (very smart and desirous) would take it.
Unless there was an equivalently good one, but in the absence of any observation of it, it's mostly pointless to consider it (but it's not pointless to look for it. It's like asking "what if our current understanding is wrong?", which is a good thing to do, but until we find new theories about something we may as well follow the old ones, otherwise we would never do anything).
Since this path maximizes for any possible desire, and it may currently be the only known one, we can for now assume that all the smart beings interesting to us would recognize the advantage of such a cycle/story and actualize it regardless of their desires (if the cycle idea makes sense).
Wait, why?
- It doesn't matter what they want since the "happy ending" and "happy rewards" can be whatever. It doesn't matter how complex or incomprehensible their desires may be to humans.
- Apparently nobody can ever know what reality is really like, but seemingly our actions can influence what we should expect it to be, so we should be reasonable with what we do.
This is because if we are a certain kind of reasonable we can expect others akin to us to be so too; this happens because we influence their expectations of us and therefore their behavior (and also because they are... akin to us). And the smartest and most powerful theoretical beings should be able to see the benefits and leverage that.
- If you create a billion simulations of lives similar to yours, you can expect to be simulated yourself, as long as you are consistent with your rules. To your advantage.
- Nobody wants a subjective "bad ending" for themselves (and even if they wanted it, it would immediately turn into their preferred ending...).
Why could this be wrong?
One could argue that maybe it just so happens that, for whatever reason, the desires of such beings stand directly in contrast to behaving like an "N", which would be the only case where enacting the story is not beneficial.
For some reason, going against this whole idea would itself be directly desired, to the point of giving up all of the other possible benefits and compromising on all the other possible desires.
However, there's no reason to believe such a desire to prevent the N story could exist; furthermore, it would be pitted against all the other beings who instead desire it, and who enjoy each other's support through the generated altruism.
Furthermore, this would perhaps be the only case where existing Ns might issue a subjective 'hell' onto a created being, and maximum karmic punishment onto whoever helped them along. This is to discourage the most dangerous behavior, the one that could undermine the whole story if successfully executed. Such subjective hell would ideally maximize the chances of failure of whoever possessed, and acted on, desires against the N idea, to the point that, for them, trying and failing would be worse than not having tried to stop the N idea at all.[3]
3. Enacting the story is beneficial, regardless of one’s desires
I think the statement is sufficiently justified.
Therefore, if you think the plausibility section is sufficiently sound, you should expect to experience some of the consequences explored throughout the text.
[1]
There's really no intended gender tied to the pronoun choice; I just found it hard to stay concise using "they" and "themselves" while talking about multiple similar beings acting upon themselves (acting upon the singular one, or both, or...?).
[2]
For example, say that the N beings were to see the mass of self-induced N-like experiences (the ones done to get N-like rewards) as a bad thing, because of the cost in resources, or issues with everyone plugging themselves out of reality, or badly done N-like experiences, or whatnot (the reason for N disliking this behavior doesn't matter for now; back to it later), and were to punish it with bad karma.
That's an OK thing to do, if they don't like smart beings investing and spending lots of energy into trying to self-induce better and better N-like experiences while forsaking everything else in the universe, isn't it? (As long as smart enough beings understand the reason why they would indeed be punished.)
But if that was the case, if they issued such a karmic punishment, the N being that's punishing this behavior may still be himself a fake N that's gonna be punished.
To fix this predicament (of wanting to punish but not wanting to be punished), they could try to reduce the chance of being, themselves, a creature who self-induced an N-like experience on purpose. This is done by removing the reason to self-induce such an experience: for example, by giving the happy ending to everybody, and punishing those who self-induce it on purpose, making it unlikely that they themselves are someone who did it on purpose.
Because then there's no reason for anyone to self-induce it anymore.
Well, alright, now add that maybe it's still "bad" to push all the beings who could even theoretically put themselves in the position to simulate themselves being an N into doing it, so the happy ending could be extended to all those who theoretically could.
But what about the other beings? Well, maybe you want to also stop those who are in position to theoretically place themselves in the position to theoretically be able to.
But what about the others?
Maybe you just want nobody to have a practical incentive to even try, regardless of whether they could manage it or not.
This kind of works as a reason for forcing a happy ending on everybody, but frankly, why should Ns not like or care about the idea of beings who purposely go through self-induced N-like experiences?
And so I hid this in a footnote, because it lacks a strong enough core motivation (yet?).
[3]
We could also speculate that some being, with such desires in contrast to the whole idea, may be created in the simulations on purpose, and doomed to failure.
So that if someone with such desires appeared in the base reality, they would subjectively be more likely to be one of the many fated-to-lose ones, and would feel more likely to be heavily punished were they to persist in their actions.
2 comments
Comments sorted by top scores.
comment by LVSN · 2024-06-02T09:18:20.145Z · LW(p) · GW(p)
I liked this post on a personal level, because I like seeing how people can, with extremely fine subtlety, trick themselves into thinking the world is cooler than it is, but I had to downvote because that is not what LessWrong is for, or at least to the extent that self-deceiving memes are being shared then it's supposed to be explicitly intentional; "Instructions For Tricking Yourself Into Feeling That The World Is Cooler" is a thing you could plausibly post and explain, such that your beliefs about which tricks actually work pay rent in anticipated experiences.
My objection about specific contents of this post: you cannot make good things more plausible-about-reality by writing stories where realistic events happen plus good unrealistic events happen; the unrealistic events do not gain plausibility-about-reality by association-through-fiction.
Some clarifications about my objection, and some questions to help you hold your ground if you should and if you can: I don't take for granted that this observation is necessarily mutually exclusive with what you have written, but the observation is ostensibly mutually exclusive; the relation of 'subjectively-unresolved ostensible mutual exclusivity' between your post and my observation is what we might call 'tension'. Can you explain how the intended spirit of your post survives my objection? What do you think is the right way to resolve the tension between our world models?
One option for resolving the tension is to fix your world-model by removing this meme from it because you realize my model about reality, which does not contain your meme, is more consistent with what is noticeable about reality. Another option is to explain how I've misinterpreted the differences between what your argument should have been (which could be considered close enough to what you articulated), versus the worse version that it actually sounded like, followed by explaining that what your argument was close to is more important than how it sounded to me even if I heard right. This latter option could be considered 'rescuing the spirit of the post from the letter of it'.
(Sidenote: I will concede to you the merit that having to explain the trick makes it less subtle, and might make it work less for people who care about their beliefs paying rent in anticipated experiences. This is not fun, and I think there should be a place where you can post specifically rationalism-informed tricks like that; maybe a forum called FunTricks. Arguably this would boost epistemic security for the people who do care about beliefs paying rent in anticipated experiences, as content posted to FunTricks would serve as puzzles for experienced Bayescrafters to learn more about the nature of self-deception from. The irrationalists can get lost in a fun hall of mirrors, and the Bayescrafters can improve their epistemic security; it would be win-win.
FunTricks posters could rate posts by how subtle the trick was; whether they noticed the mistake. Subtlevote vs "Erm, wait"-vote)
Imagine that your meme is importantly inconsistent with what is noticeable about reality. After all my criticisms, what merits about your post, do you think, are still true? I am interested in this! I do not want to deny your post any credit that is due to it, even if I tentatively must downvote it because that credit is outweighed by the fact that it can mislead people about how cool reality is, which is something LessWrongers care about!
It is, on principle, possible that I am in the wrong; that your model is better due to the presence of your meme(s). That would be great if it were demonstrated, because I would have the privilege of learning more from you than what you would learn from me, which is a serious kind of 'winning' in debates! I am especially excited about opportunities for viewquakes [LW · GW]!
Finally, thank you for posting on LessWrong! Thank you for engaging with philosophy and the memetic evolutionary process! Every interaction can make us wiser if we have the courage to admit error, forgive error, and persist, in the course of memetic negotiation! If you post memes (idea-genes) on LessWrong, please make those memes pay rent in anticipated experiences; those are the memes we do want here! :)
↑ comment by FateGrinder (nicolo-moretti) · 2024-06-03T13:10:50.208Z · LW(p) · GW(p)
Thank you for the feedback, which is very much appreciated!
First of all I confirm that I do believe in everything I said and I did not intend to explore the topic of self-deception.
I understand you saw my writing as a story with a mix of realistic and unrealistic events happening (whereas I hoped for everything to be realistic enough, besides the examples that were wonky on purpose to discuss various points of course).
the unrealistic events do not gain plausibility-about-reality by association-through-fiction.
Unless I am misunderstanding you, I very much agree that an unrealistic event doesn't become realistic just because it's hidden, or carefully placed, between realistic ones in a story. It indeed only sounds (at most) more realistic by association.
I don't take for granted that this observation is necessarily mutually exclusive with what you have written, but the observation is ostensibly mutually exclusive;
Now, I wonder if the impression you got is due to you seeing some specific elements as unrealistic, or as presented as unrealistic; and more importantly, maybe it sounded like I myself presented them as unrealistic and plowed ahead regardless?
Because if, instead, all of the events were to be understood as presented as realistic on my part, then there wouldn't be much doubt about my belief about "association" being seemingly alike to yours.
In that case, I would instead have expected you to more readily take issue with one or more specific events that I wrote about.
I wasn't trying to introduce unrealistic events and sweep them under the rug (tricks of association); I intended for the events to be taken as realistic (and challenged for failing at that).
I also did not intentionally strive to prove anything "good/nice to believe"; it just so happened, unless I unintentionally guided my reasoning by means of a personal taste for the conclusions I wished for.