Comments

Comment by Raymond Potvin on Sentience · 2018-05-23T13:28:09.210Z · LW · GW

We solve inter-individual problems with laws, so we might be able to solve inter-tribal problems the same way, provided that the tribes accept being governed by a higher level of government. Do you think your tribe would accept being governed this way? How is it that we can accept that as individuals but not as nations? How is it that some nations still have a veto at the UN?

Comment by Raymond Potvin on Sentience · 2018-05-22T15:43:40.314Z · LW · GW

Genocide is always fine for those who perpetrate it.

That solves the whole problem, if relativism is true. Otherwise, it is an uninteresting psychological observation.

To me, the interesting observation is: "How did we get here if genocide looks that fine?"

And my answer is: "Because most of us, most of the time, expected more profit from making friends than from making enemies, which is nevertheless a selfish behavior."

Making friends is simply being part of the same group, and making enemies is being part of two different groups. Two such groups don't need killings to be enemies, though; expressing a different viewpoint on the same observation is enough.

......................

I don't agree with any killing, but most of us do, otherwise it would stop.

Or co-ordination problems exist.

Of course they exist. Democracy is, incidentally, better at that kind of coordination than dictatorship, but it has not yet succeeded in stopping the killings, and again, I think it is because most of us still think that killings are unavoidable. Without that thinking, people would vote for politicians who think the same, and those politicians would progressively cut the funds for defense instead of increasing them. If all countries did that, there would be no more armies after a while, and no more guns either. There would nevertheless still be countries, because without groups, thus without selfishness, I don't think that we could make any progress.

The idea that selfishness is bad comes from religions, but it is contradictory: praying to God for help is evidently selfish. Recognizing that point might have prevented them from killing unbelievers, so it might also actually prevent groups from killing other groups. When you know that whatever you do is for yourself while still feeling altruistic all the time, you think twice before harming people.

Comment by Raymond Potvin on Sentience · 2018-05-19T14:51:58.603Z · LW · GW

These different political approaches only exist to deal with failings of humans. Where capitalism goes too far, you generate communists, and where communism goes too far, you generate capitalists, and they always go too far because people are bad at making judgements, tending to be repelled from one extreme to the opposite one instead of heading for the middle. If you're actually in the middle, you can end up being more hated than the people at the extremes because you have all the extremists hating you instead of only half of them.

That's a point where I can squeeze in my theory on mass. As you know, my bonded particles can't be absolutely precise, so they have to wander a bit to find the spot where they are perfectly synchronized with the other particle. They have to wander from extreme right to extreme left exactly like populations do when the time comes to choose a government. It softens the motion of particles, and I think it also softens the evolution of societies. Nobody can predict the evolution of societies anyway, so the best way is to proceed by trial and error, and that's exactly what that wandering does. To stretch the analogy to its extremes, the trial-and-error process is also the one scientists use to make discoveries, and the one the evolution of species used to discover us. When it is impossible to know what's coming next and you need to go on, randomness is the only way out, whether you are a universe or a particle. This way, wandering between capitalism and communism wouldn't be a mistake, it would only be a natural mechanism, and like any natural law, we should be able to exploit it, and so should an AGI.
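
To make the trial-and-error idea concrete, here is a minimal sketch of that kind of blind wandering; everything in it (the `wander` function, the step size, the target value) is a hypothetical illustration of mine, not part of the theory itself. A walker that cannot predict where the target is keeps making random moves and only keeps the ones that happen to bring it closer.

```python
import random

def wander(target, start=0.0, step=1.0, tolerance=0.01, max_tries=10_000):
    """Blind trial and error: try random moves, keep only the improvements."""
    position = start
    for _ in range(max_tries):
        if abs(position - target) < tolerance:
            break  # close enough: the walker is "synchronized" with the target
        trial = position + random.uniform(-step, step)  # wander left or right at random
        if abs(trial - target) < abs(position - target):
            position = trial  # an improvement found by chance is kept
    return position

# The walker overshoots left and right but ends up settling near the target.
print(wander(target=3.7))
```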

............

(Congratulations, baby AGI, you did it right this time! You've put my post in the right place. :0)

Comment by Raymond Potvin on Sentience · 2018-05-18T16:10:20.238Z · LW · GW

Why would AGI have a problem with people forming groups? So long as they're moral, it's none of AGI's business to oppose that.

If groups like religious ones, which are dedicated to morality, only succeeded in being amoral, how could any other group avoid that behavior?

AGI will simply ask people to be moral, and favour those who are (in proportion to how moral they are).

To be moral, those who are part of religious groups would have to accept the law of the AGI instead of their god's, but if they did, they wouldn't be part of their groups anymore, which means that there would be no more religious groups if the AGI convinced everybody that he is right. What do you think would happen to the other kinds of groups then? A financier who thinks that money has no smell would have to give it one, and thus stop trying to make money out of money, and if all financiers did that, the stock markets would disappear. A leader who thinks he is better than other leaders would have to hand power to his opponents and dissolve his party, and if all the parties behaved the same way, there would be no more politics. Groups need to be selfish to exist, and an AGI would try to convince them to be altruistic.

There are laws that prevent companies from avoiding competition, because if they avoided it, they could enslave us. It is better that they compete, even if it is a selfish behavior. If an AGI ever succeeded in preventing competition, I think he would be preventing us from forming groups. There would be no more wars, of course, since there would be only one group led by only one AGI, but what about what happened to communist countries? Didn't Russia fail precisely because it lacked competition? Isn't China slowly introducing competition into its communist system? In other words, without competition, and thus without selfishness, wouldn't we become apathetic?

By the way, did you notice that the forum software is making mistakes? It keeps putting my new messages in the middle of the others instead of at the end. I notified the administrators a few times but got no response. I have to hit the Reply button twice for the message to stay at the end, and then erase the duplicate. Also, it doesn't send me an email when a new message is posted in a thread I subscribed to, so I have to refresh the page many times a day in case one has been posted.

Comment by Raymond Potvin on Sentience · 2018-05-17T19:08:08.833Z · LW · GW

That's how religion became so powerful, and it's also why even science is plagued by deities and worshippers as people organize themselves into cults where they back up their shared beliefs instead of trying to break them down to test them properly.

To me, what you say is the very definition of a group, so I guess that your AGI wouldn't permit us to build any, thus opposing one of our instincts, one that comes from a natural law, in order to replace it with its own law, which would only permit him to build groups. "Do what I say and not what I do," he would be forced to say. He might convince others, but I'm afraid he wouldn't convince me. I don't like to feel part of a group, and for the same reason that you gave, but I can't see how we could change that behavior if it comes from an instinct. Testing my belief is exactly what I am doing right now, but I can't avoid believing in what I think in order to test it, so if I can never prove that I'm right, I will go on believing in a possibility forever, which is exactly what religions do. It is easy to understand that religions will never be able to prove anything, but it is less easy when it is a theory. My theory says that it would be wrong to build a group out of it, because it explains how we intrinsically resist change, and how building groups exponentially increases that resistance, but I can't see how we could avoid it if it is intrinsic. It's like trying to avoid mass.

Comment by Raymond Potvin on Sentience · 2018-05-16T19:03:57.784Z · LW · GW

slaughter has repeatedly selected for those who are less moral

From the viewpoint of selfishness, slaughter has only selected for the stronger group. It may look too selfish to us, but for animals, the survival of the stronger also serves to create hierarchy, to build groups, and to eliminate genetic defects. Without hierarchy, no group could hold together through a change. The group doesn't avoid dissociating because the leader knows what to do, he doesn't; it avoids dissociating because it takes a leader for the group not to dissociate. Even if the leader makes a mistake, it is better for the group to follow him than to risk dissociation. Those who followed their leaders survived more often, so they transmitted their genes more often. That explains why soldiers automatically do what their leaders tell them to do, and the decision those leaders take to eliminate the other group shows that they only use their intelligence to exacerbate the instinct that permitted them to become leaders. In other words, they think they are leaders because they know better than others what to do.

We use two different approaches to explain our behavior: I think you try to use psychology, which is related to human laws, whereas I try to use natural laws, those that apply equally to any existing thing. My natural law says that we are all equally selfish, whereas the human law says that some humans are more selfish than others. I know I'm selfish, but I can't admit that I would be more selfish than others, otherwise I would have to feel guilty, and I can't stand that feeling.

Morality comes out of greater intelligence, and when people are sufficiently enlightened, they understand that it applies across group boundaries and bans the slaughter of other groups.

In our democracies, if what you say were true, there would already be no wars. Leaders would have understood that, to be reelected, they had to stop preparing for war. I think they still believe that war is necessary, and they believe so because they think their group is better than the others. That thinking is directly related to the law of the stronger, seasoned with a bit of intelligence, not the kind of intelligence that helps us get along with others, but the kind that helps us force them to do what we want.

Comment by Raymond Potvin on Sentience · 2018-05-15T19:36:36.586Z · LW · GW

Hi Tag,

Genocide is always fine for those who perpetrate it. With selfishness as the only morality, I think it gets complex only when we try to take more than one viewpoint at a time. If we avoid that, morality becomes relative: the same event looks good to some people and bad to others. This way, there is no absolute morality of the kind David seems to believe in, or of the kind religions also seemed to believe in. When we think that a genocide is bad, it is just because we are on the side of those who are killed, otherwise we would agree with it. I don't agree with any killing, but most of us do, otherwise it would stop. Why is it so? I think it is because our instinct automatically incites us to build groups, so that we can't avoid supporting one faction or the other all the time. The right thing to do would be to separate the belligerents, but our instinct is too strong and the international rules are too weak.

Comment by Raymond Potvin on Computational Morality (Part 1) - a Proposed Solution · 2018-05-15T15:49:49.815Z · LW · GW

I wonder how we could move away from the universal, since we are part of it. The problem with wars is that countries are not yet part of a larger group that could regulate them. When two individuals fight, the law of the country permits the police to separate them, and it should be the same for countries. What actually happens is that the powerful countries prefer to support one faction instead of working together to separate the belligerents. They couldn't do that if they were ruled by a higher level of government.

If a member of your group does something immoral, it is your duty not to stand with or defend them - they have ceased to belong to your true group (the set of moral groups and individuals).

Technically, it is the duty of the law to defend the group, not of individuals, but if an individual who is part of a smaller group is attacked, that group might fight the law of the larger group it belongs to. We always take the viewpoint of the group we are part of; it is a subconscious behavior that is impossible to avoid. If nothing is urgent, we can take a larger viewpoint, but whenever we don't have the time, we automatically take our own viewpoint. In between, we take the viewpoints of the groups we are part of. It's a selfish behavior that propagates from one scale to the next. It's because our atoms are selfish that we are. Selfishness is about resisting change: we resist others' ideas, a selfish behavior, simply because the atoms of our neurons resist change. The cause of our own resistance is our atoms' resistance. Without resistance, nothing could hold together.

A group should not be selfish. Every moral group should stand up for every other moral group as much as they stand up for their own - their true group is that entire set of moral groups and individuals.

Without selfishness from the individual, no group can be formed. The only way I could accept being part of a group is by hoping for an individual advantage, but since I don't like hierarchy, I can hardly feel part of any group. I hardly even feel part of Canada, since I believe Quebec should separate from it. I bet I wouldn't like being part of Quebec anymore if we succeeded in separating from Canada. The only group I can't imagine being separated from is me. I'm absolutely selfish, but that doesn't prevent me from caring for others. I give money to charity organizations, for instance, and I campaign for equality of opportunity and for ecology. I feel better doing that than doing nothing, but when I analyze that feeling, I always find that I do it for myself, because I would like to live in a less selfish world. "Don't think any further though," says the little voice in my head, because when I did, I always found that I wouldn't be satisfied even if I ever got to live in such a beautiful world. I'm always looking for something else, which is not a problem for me, but it becomes a problem if everybody does that, which is the case. It's because we are able to speculate about the future that we develop scale problems, not because we are selfish.

Being selfish is necessary to form groups, which animals can do, but they can't speculate, so they don't develop that kind of problem. No rule can stop us from speculating if it is a function of the brain. Even religion recognizes that when it tries to stop us from thinking while praying. We couldn't make war if we couldn't speculate about the future. Money would have a smell. There would be no pollution and no climate change. Speculation is the only way to get ahead of the changes we face; it is the cause of our artificiality, which is a very good way to develop an easier life, but it has been so efficient that it is now threatening that life. You said that your AGI would be able to speculate, and that he could do it better than us, like everything else he would do. If that were so, he would only be adding to the problems we already have, and if it weren't, he couldn't be as intelligent as we are, if speculation is what differentiates us from animals.

Comment by Raymond Potvin on Sentience · 2018-05-14T18:42:02.475Z · LW · GW

If sentience is real, there must be a physical thing that experiences qualia, and that thing would necessarily be a minimal soul. Without that, there is no sentience and the role for morality is gone.

Considering that moral rules only serve to protect the group, no individual sentience is needed, just subconscious behaviors similar to our instinctive ones. Our cells work the same way: each one of them works to protect itself, and in so doing, they work together to protect me, but they don't have to be sentient to do that, just selfish.

Comment by Raymond Potvin on Computational Morality (Part 1) - a Proposed Solution · 2018-05-11T15:46:12.180Z · LW · GW

"You can form groups without being biased against other groups. If a group exists to maintain the culture of a country (music, dance, language, dialect, literature, religion), that doesn't depend on treating other people unfairly."
Here in Quebec, we have groups that promote a french and/or a secular society, and others that promote an english and/or a religious one. None of those groups has the feeling that it is treated fairly by its opponents, but all of them have the feeling to treat the others fairly. In other words, we don't have to be treated unfairly to feel so, and that feeling doesn't help us to treat others very fairly. This phenomenon is less obvious with music or dance or literature groups, but no group can last without the sense of belonging to the group, which automatically leads to protecting it against other groups, which is a selfish behavior. That selfish behavior doesn't prevent those individual groups to form larger groups though, because being part of a larger group is also better for the survival of individual ones. Incidentally, I'm actually afraid to look selfish while questioning your idea, I feel a bit embarrassed, and I attribute that feeling to us already being part of the same group of friends, thus to the group's own selfishness. I can't avoid that feeling even if it is disagreeable, but it prevents me from being disagreeable with you since it automatically gives me the feeling that you are not selfish with me. It's as if the group had implanted that feeling in me to protect itself. If you were attacked for instance, that feeling would incite me to defend you, thus to defend the group. Whenever there is a strong bonding between individuals, they become another entity that has its own properties. It is so for living individuals, but also for particles or galaxies, so I think it is universal.

Comment by Raymond Potvin on Origin of Morality · 2018-05-11T14:56:04.965Z · LW · GW

Sorry, I can't see the link between selfishness and honesty. I think that we are all selfish, but that some of us are more honest than others, so I think that an AGI could very well be both selfish and honest. I consider myself honest, for instance, but I know I can't help being selfish even when I don't feel so. As I said, I only feel selfish when I disagree with someone I consider part of my own group.

We're trying to build systems more intelligent than people, don't forget, so it isn't going to be fooled by monkeys for very long.

You probably think so because you think you can't easily get fooled. It may be true that you can't get fooled on a particular subject once you know how it works, and in this way you could effectively avoid being fooled on many subjects at a time if you have a very good memory, so an AGI could do so on any subject since his memory would be perfect. But how would he be able to know how a new theory works if it contradicts the ones he already knows? He would have to make a choice, and he would choose what he knows, like every one of us. That's what is actually happening to relativists, if you are right about relativity: they are getting fooled without even being able to recognize it; worse, they even think that they can't get fooled, exactly like your AGI, and probably for the same reason, which is only related to memory. If an AGI were actually ruling the world, he wouldn't care about your opinion on relativity even if it were right, and he would be a lot more efficient at that job than the relativists. Since I have plenty of imagination and a lack of memory, your AGI would prevent me from expressing myself, so I think I would prefer our problems to him. On the other hand, those who have a good memory would also get dismissed, because they could not withstand the competition, far from it. Have you heard about chess masters lately? That AGI is your baby, so you want it to live, but have you thought about what would happen to us if we suddenly had no problems left to solve?

Comment by Raymond Potvin on Origin of Morality · 2018-05-10T20:09:30.808Z · LW · GW

The most extreme altruism can be seen as selfish, but inversely, the most extreme selfishness can also be seen as altruistic: it depends on the viewpoint. We may think that Trump is selfish for closing the door to migrants, for instance, but he doesn't think so, because this way he is being altruistic toward the Republicans, which is a bit selfish since he needs them to be reelected, but he doesn't feel selfish himself. Selfishness is not about sentience, since we can't feel selfish; it is about defending what we are made of, or part of. Humanity holds together because we are all selfish, and because selfishness implies that the group will help us if we need it. Humanity itself is selfish when it wants to protect the environment, because it is for itself that it does so. The only way to feel guilty of having been selfish is after having weakened somebody from our own group, because then we know we have also weakened ourselves. With no punishment in sight from our own group, no guilt can be felt, and no guilt can be felt either if the punishment comes from another group. That's why torturers say that they don't feel guilty.


I have a metaphor for your kind of morality: it's like Windows. It's going to work once everything has been taken into account; otherwise it's going to freeze all the time, like the first versions of Windows. The problem is that it might hurt people while freezing, but the risk might still be worthwhile. Like any other invention, the way to minimize the risk would be to proceed in small steps. I'm still curious about the possibility of building a selfish AGI, though. I still think it could work. There would be some risks too, but they might not be greater than with yours. Have you tried to imagine what kind of programming would be needed? Such an AGI should behave like a good dictator: to avoid revolutions, he wouldn't kill people just because they don't think like him; he would look for a solution where everybody likes him. But how exactly would he proceed?


The main reason why politicians opted for democracy is selfishness: they knew their turn would come if the other parties respected the rules, and they knew it was better for the country they were part of, and so better for them too. But an AGI can't leave power to humans if he thinks it won't work, so what if the system had two AGIs, for instance, one with a tendency to try new things and one with a tendency not to change things, so that people could vote for the one they want? It wouldn't be exactly like democracy, since there wouldn't be any competition between the AGIs, but there could be parties for people to join and play the power game in. I don't like power games, but they seem to be necessary to create groups, and without groups, I'm not sure society would work.

Comment by Raymond Potvin on Origin of Morality · 2018-05-10T20:08:23.986Z · LW · GW

Sorry for the duplicates that follow, guys, I think that our AGI is bugging! :0)

Comment by Raymond Potvin on Computational Morality (Part 1) - a Proposed Solution · 2018-05-09T13:53:48.879Z · LW · GW

If you're already treating everyone impartially, you don't need to do this, but many people are biased in favour of themselves, their family and friends, so this is a way of forcing them to remove that bias.

Of course we are biased, otherwise we wouldn't be able to form groups. Would your AGI's morality have the effect of eliminating our need to form groups in order to get organized?

Your morality principle looks awfully complex to me, David. What if your AGI had the same morality we have, which is to care for ourselves first, and then for others if we think they might care for us in the future? It works for us, so with a few adjustments, it might also work for an AGI. Take a judge, for instance: his duty is to apply the law, and he cares for himself by doing so, since he wants to be paid, but he doesn't have to care for those he sends to jail, since they don't obey the law, which means they don't care for others, including the judge. To care for himself, he only has to judge whether or not they obey the law. If it works for humans, it should also work for an AGI, and it might even work better, since he would know the law better. Anything a human can do that is based on memory and rules, like Go and chess for example, an AGI could do better. The only thing he couldn't do better is inventing new things, because I think that depends mainly on chance. He wouldn't be better, but he wouldn't be worse either. While trying new things, we have to care for ourselves, otherwise we might get hurt, so I think your AGI should behave the same way, otherwise he might also get hurt in the process, which might prevent him from doing his duty, which is to help us. The only thing that would be missing from his duty is caring for himself first, which would already be necessary if you wanted him to invent things.

Could a selfish AGI get as selfish as we do when we begin to care only for ourselves, or for our kin, or for our political party, or even for our country? Some of us are ready to kill people when that happens to them, but they have to feel threatened, whether the threat is real or not. I don't know if an AGI could end up imagining threats instead of measuring them, but if he did, selfish or not, he could get dangerous. If the threat is real, though, selfish or not, he would have to protect himself in order to be able to protect us later, which might also be dangerous for those who threaten him. To avoid harming people, he might look for a way to control us without harming us, but as I said, I think he wouldn't be better than us at inventing new things, which means that we could also invent new things to defend ourselves against him, which would be dangerous for everybody. Life is not a finite game, it's a game in progress, so an AGI shouldn't be better than us at that game. It may turn out that artificial intelligence is the next step forward, and that humans will be left behind. Who knows?

That said, I still can't see why a selfish AGI would be more dangerous than an altruistic one, and I still think that your altruistic morality is more complicated than a selfish one, so I reiterate my question: have you ever considered that possibility, and if not, do you see any obvious flaws in it?

Comment by Raymond Potvin on Origin of Morality · 2018-05-03T14:17:26.672Z · LW · GW

Hi everybody!

Hi David! I'm quoting your answer to Dagon:

Having said that though, morality does say that if you have the means to give someone an opportunity to increase their happiness at no cost to you or anyone else, you should give it to them, though this can also be viewed as something that would generate harm if they found out that you didn't offer it to them.

What you say is true only if the person is part of our group, and it is so because we instinctively know that increasing the survival probability of our group increases ours too. Unless we use complete randomness to make a move, we can't make a completely free move. Even Mother Teresa didn't make free moves; she would help others only in exchange for God's love. The only time we really care about others' feelings is when they yell at us because we harm them, or when they thank us because we got them out of trouble, thus when we are close enough to communicate, but even what we do then is selfish: we move away from people who yell at us and closer to those who thank us, thus automatically breaking or building a group in our favor. I'm pretty sure that what we do is always selfish, and I think that you are trying to design a perfectly free AGI, which I find impossible to do if the designer himself is selfish. Do you by any chance think that we are not really selfish?