Fake Selfishness

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-11-08T02:31:09.000Z · LW · GW · Legacy · 71 comments

Once upon a time, I met someone who proclaimed himself to be purely selfish, and told me that I should be purely selfish as well.  I was feeling mischievous(*) that day, so I said, "I've observed that with most religious people, at least the ones I meet, it doesn't matter much what their religion says, because whatever they want to do, they can find a religious reason for it.  Their religion says they should stone unbelievers, but they want to be nice to people, so they find a religious justification for that instead.  It looks to me like when people espouse a philosophy of selfishness, it has no effect on their behavior, because whenever they want to be nice to people, they can rationalize it in selfish terms."

And the one said, "I don't think that's true."

I said, "If you're genuinely selfish, then why do you want me to be selfish too?  Doesn't that make you concerned for my welfare?  Shouldn't you be trying to persuade me to be more altruistic, so you can exploit me?"

The one replied:  "Well, if you become selfish, then you'll realize that it's in your rational self-interest to play a productive role in the economy, instead of, for example, passing laws that infringe on my private property."

And I said, "But I'm a small-L libertarian already, so I'm not going to support those laws.  And since I conceive of myself as an altruist, I've taken a job that I expect to benefit a lot of people, including you, instead of a job that pays more.  Would you really benefit more from me if I became selfish?  Besides, is trying to persuade me to be selfish the most selfish thing you could be doing?  Aren't there other things you could do with your time that would bring much more direct benefits?  But what I really want to know is this:  Did you start out by thinking that you wanted to be selfish, and then decide this was the most selfish thing you could possibly do?  Or did you start out by wanting to convert others to selfishness, then look for ways to rationalize that as self-benefiting?"

And the one said, "You may be right about that last part," so I marked him down as intelligent.

(*)  Other mischievous questions to ask self-proclaimed Selfishes:   "Would you sacrifice your own life to save the entire human species?"  (If they notice that their own life is strictly included within the human species, you can specify that they can choose between dying immediately to save the Earth, or living in comfort for one more year and then dying along with Earth.)  Or, taking into account that scope insensitivity leads many people to be more concerned over one life than the Earth, "If you had to choose one event or the other, would you rather that you stubbed your toe, or that the stranger standing near the wall there gets horribly tortured for fifty years?"  (If they say that they'd be emotionally disturbed by knowing, specify that they won't know about the torture.)  "Would you steal a thousand dollars from Bill Gates if you could be guaranteed that neither he nor anyone else would ever find out about it?"  (Selfish libertarians only.)

71 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Constant2 · 2007-11-08T03:01:51.000Z · LW(p) · GW(p)

Have you read Mark Twain's "What Is Man?" If I recall correctly, there he lays out his argument that man is already always selfish. For example, we do good deeds ultimately for our own comfort, because they make us feel good. (If I recall rightly, he also makes the, to me rather more interesting, point that people are born happy or sad rather than are made happy or sad by specific, supposedly uplifting or depressing, thoughts. That seems to anticipate the modern pharmaceutical approach to mood regulation.)

For my part, I think that "selfish" must describe a proper subset - not too small, and not too large - of human actions, in order to be a meaningful word. If, as Mark Twain claims, everything we do is selfish, then the word is useless and meaningless. While I acknowledge that practically everything done to benefit others without a clear quid pro quo in mind does indeed seem to have the effect of giving the doer spiritual comfort if nothing else, and may ultimately be done on that account (something we might test by observing a brain-damaged patient who has lost the ability to feel that comfort), I would still call those actions "unselfish", simply because these sorts of actions are the paradigms, the prototypes, the models, the patterns, the exemplars, the teaching and defining examples, of "unselfishness". A selfish action necessarily involves a degree of unconcern about others. If there is sufficient concern for others, then the action is no longer selfish even if Mark Twain's psychological analysis of that concern (in terms of felt "spiritual comfort") is correct.

comment by Anonymous13 · 2007-11-08T03:17:05.000Z · LW(p) · GW(p)

Did Hopefully Anonymous figure this out and stop expending effort commenting or posting on his anonymous blog?

comment by Nominull2 · 2007-11-08T04:15:55.000Z · LW(p) · GW(p)

So, it seems that Eliezer's working definition of an intelligent person is "someone who agrees with me".

comment by Adirian · 2007-11-08T04:56:04.000Z · LW(p) · GW(p)

I must point out that "whenever they want to be nice to someone" entails a desire to be nice to someone. Your very phrase defines it as being in their interests to be nice to someone. Rationalization isn't even necessary here. You wanted to do something - you did it. Selfishness isn't that complicated.

My guess would be that this individual had read Atlas Shrugged and hadn't fully understood what selfish meant in the context. Ayn Rand was setting out to redefine the word, not to glorify the "old" meaning.

comment by Tiiba2 · 2007-11-08T04:59:02.000Z · LW(p) · GW(p)

I think that people are born selfish. This is based on the fact that, if I grew up in an environment that didn't compel me to be either selfish or selfless, I'd probably be selfish. Babies are selfish. I value those creatures that I was convinced to value. Slave owners taught their children that it's okay to beat slaves, and the children were happy to comply. Now most people disregard the pain of food animals because they can get away with it.

Of course, some of my actions are genuinely altruistic. I chose to give up meat, although this brings me little tangible benefit. (It does get me out of some accusations of hypocrisy.) One reason why I let myself become like this is that in human society, being nice is a habit that keeps my ass from getting kicked. And it needs to be a habit, because I'm not smart enough to delude everybody that I care, when I actually see them all as obstacles.

If I somehow become so powerful that I no longer depend on anyone, and nothing they do can harm me, I will probably quickly become corrupted by my power.

But I agree that now, I can't be considered purely selfish.

"So, it seems that Eliezer's working definition of an intelligent person is "someone who agrees with me"."

My definition of an intelligent person is slowly becoming "someone who agrees with Eliezer", so that's all right. Plus, the guy showed ability to revise a strongly held belief.

comment by TGGP4 · 2007-11-08T05:01:06.000Z · LW(p) · GW(p)

Read the comments at Hopefully Anonymous' most recent post. He explains why he has been inactive.

I want you to be altruistic, Eliezer. That's partly because I think you're intelligent. I would prefer if some people were more selfish though.

I choose living in comfort for one more year. There are things I might die for, but I don't know what exactly. Perhaps to spite someone. If other people knew that I had the chance to save the world and were going to punish me for failing to do so, I might not risk their wrath. I also choose the stranger getting tortured, but I might risk the toe-stubbing to prevent a policy of torture. I'd steal Gates' money (take that, beneficiary of intellectual property laws!) but really I wouldn't care if you stole a thousand dollars from me and I never found out (unless you meant finding out who specifically did it rather than finding out that it had happened at all).

comment by Gray_Area · 2007-11-08T05:10:34.000Z · LW(p) · GW(p)

"My definition of an intelligent person is slowly becoming 'someone who agrees with Eliezer', so that's all right."

That's not in the spirit of this blog. Status is the enemy, only facts are important.

comment by Tiiba2 · 2007-11-08T06:07:38.000Z · LW(p) · GW(p)

"That's not in the spirit of this blog. Status is the enemy, only facts are important."

See? Another smart man agrees with Eliezer. That's what I'm talking about.

comment by Constant2 · 2007-11-08T06:09:09.000Z · LW(p) · GW(p)

Almost as though Eliezer isn't a person, but a system of thought.

comment by Stephen · 2007-11-08T06:24:31.000Z · LW(p) · GW(p)

Taking a cue from some earlier writing by Eli, I suppose one way to give ethical systems a functional test is to imagine having access to a genie. An altruist might ask the genie to maximize the amount of happiness in the universe or something like that, in which case the genie might create a huge number of wireheads. This seems to me like a bad outcome, and would likely be seen as a bad outcome by the altruist who made the request of the genie. A selfish person might say to the genie "create the scenario I most want/approve of." Then it would be impossible for the genie to carry out some horrible scenario the selfish person doesn't want. For this reason selfishness wins some points in my book. If the selfish person wants the desires of others to be met (as many people do), I, as an innocent bystander, might end up with a scenario that I approve of too. (I think the only way to improve upon this is if the person addressing the genie has the desire to want things which they would want if they had an unlimited amount of time and intelligence to think about it. I believe Eli calls this "external reference semantics.")

Replies from: NaomiLong
comment by NaomiLong · 2011-10-12T20:32:38.440Z · LW(p) · GW(p)

It seems like this is based more on the person's ability to optimize. The altruistic person who realized this flaw would then be able to (assuming s/he had the intelligence and rationality to do so) calculate the best possible wish to benefit the greatest number of people.

Replies from: ragnarrahl
comment by ragnarrahl · 2017-05-02T02:49:38.742Z · LW(p) · GW(p)

Notice how you had to assume the altruist to have the extraordinary degree of intelligence and rationality to calculate the best possible wish and Stephen merely had to assume that the selfishness was of the goodwill-toward-men-if-it-doesn't-cost-me-anything sort? When you require less implausible assumptions to render a given ethical philosophy genie-resilient, the philosophy is more genie-resilient.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-11-08T06:27:25.000Z · LW(p) · GW(p)

"Do not seek to follow in the footsteps of the wise, seek what they sought." -- Nanzan Daishi, quoted by Matsuo Basho.

comment by Tiiba2 · 2007-11-08T07:02:44.000Z · LW(p) · GW(p)

You most certainly are right. He is a fool who disagrees.

comment by Gray_Area · 2007-11-08T07:15:21.000Z · LW(p) · GW(p)

Stephen: the altruist can ask the Genie the same thing as the selfish person. In some sense, though, I think these sorts of wishes are 'cheating,' because you are shifting the computational/formalization burden from the wisher to the wishee. (Sorry for the thread derail.)

comment by Tiiba2 · 2007-11-08T07:17:57.000Z · LW(p) · GW(p)

"An altruist might ask the genie to maximize the amount of happiness in the universe or something like that, in which case the genie might create a huge number of wireheads. This seems to me like a bad outcome, and would likely be seen as a bad outcome by the altruist who made the request of the genie."

Eh? An altruist would voluntarily summon disaster upon the world?

By the way, I have some questions about wireheading. What is it, really? Why is it so repulsive? Is it really so bad? If, when you imagine your brain rewired, you envision something that is too alien to be considered you, or too devoid of creative thought to be considered alive, it's possible that an AI ordered to make you happy would choose some other course of action. It would be illogical to create something that is neither you nor happy.

Replies from: ragnarrahl
comment by ragnarrahl · 2017-05-02T02:57:00.483Z · LW(p) · GW(p)

" Eh? An altruist would voluntarily summon disaster upon the world?" No, an altruist's good-outcomes are complex enough to be difficult to distinguish from disasters by verbal rules. An altruist has to calculate for 6 billion evaluative agents, an egoist just 1.

" By the way, I have some questions about wireheading. What is it, really? Why is it so repulsive?" Wireheading is more or less where a sufficiently powerful agent told to optimize for happiness optimizes for the emotional referents without the intellectual and teleological human content typically associated with that.

You can perform primitive wireheading right now with various recreational drugs. The fact that almost everyone uses at least a few of the minor ones tells us that wireheading isn't in and of itself absolutely repugnant to everyone-- but the fact that only the desperate pursue the more major forms of wireheading available and the results (junkies) are widely looked upon as having entered a failure-mode is good evidence that it's not a path we want sufficiently powerful agents to go down.

" it's possible that an AI ordered to make you happy would choose some other course of action. " When unleashing forces one cannot un-unleash, one wants to deal in probability, not possibility. That's more or less the whole Yudkowskian project in a nutshell.

comment by Robin_Hanson2 · 2007-11-08T09:08:53.000Z · LW(p) · GW(p)

Humans seem to gain social status by persuading others to agree with them. This is one of the reasons we resist being persuaded by good arguments. So a selfish person who wanted social status could want you to be selfish as well in order to gain social status, showing their dominance via the submission of others.

Replies from: EniScien
comment by EniScien · 2022-05-14T10:15:23.985Z · LW(p) · GW(p)

But wouldn't a real egoist, for the sake of self-affirmation, try to convince you of a position that benefits him and disadvantages you, rather than of egoism?

comment by Luciano_Dondero · 2007-11-08T10:49:06.000Z · LW(p) · GW(p)

It seems to me that this discussion is somewhat misleading. Each one of us members of the Homo sapiens species operates to pursue his/her own interests, as dictated by her/his genetic code. But in order to do so, we have got to cooperate with some of our fellow human beings. Each and every one of our actions is in some ways a combination of these two typical aspects of our behavior. There is no such thing as a totally selfish or a totally unselfish behavior/action/activity, much less can we talk of a totally selfish or a totally unselfish person -- only extreme psychopaths (of the kind that become serial killers) may come close to representing an exception to this.

However, we define selfish or unselfish behavior in others with respect to ourselves, either directly or indirectly, and our perception is inevitably biased. This is particularly obvious in personal relationships, where the pull of the distinct genetic programs mandates both cooperation and conflict -- and we may perceive selfishness or unselfishness in our partner, at times in ways that are somewhat contradictory to his/her intent and/or his/her deeper interests/needs/whatever.

Whether we chose to declare ourselves selfish or unselfishness, and try to govern our actions to implement that self-description, or not, again, this is in part related to our genetic pull to fulfill our "destiny" mediated by our experience, culture, material interests, sexual inclinations, and so on. But it seems to me that it would be wrong to actually take for good any such self-description, and it's worse still to actually demand that people be consistent with that. In the end, that's not very far from judging someone's character from his/her zodiac sign...

comment by Luciano_Dondero · 2007-11-08T10:52:38.000Z · LW(p) · GW(p)

In my above comment at 5:49 AM, the sentence "Whether we chose to declare ourselves selfish or unselfishness" should actually read: "Whether we chose to declare ourselves selfish or unselfish"

comment by Pablo_Stafforini_duplicate0.27024432527832687 · 2007-11-08T12:21:40.000Z · LW(p) · GW(p)

I said, "If you're genuinely selfish, then why do you want me to be selfish too? Doesn't that make you concerned for my welfare? Shouldn't you be trying to persuade me to be more altruistic, so you can exploit me?"

The objection you press against your interlocutor was anticipated by Max Stirner, the renowned champion of egoism, who replied as follows:

Do I write out of love to men? No, I write because I want to procure for my thoughts an existence in the world; and, even if I foresaw that these thoughts would deprive you of your rest and your peace, even if I saw the bloodiest wars and the fall of many generations springing up from this seed of thought — I would nevertheless scatter it. Do with it what you will and can, that is your affair and does not trouble me. You will perhaps have only trouble, combat, and death from it, very few will draw joy from it.

If your weal lay at my heart, I should act as the church did in withholding the Bible from the laity, or Christian governments, which make it a sacred duty for themselves to 'protect the common people from bad books'. But not only not for your sake, not even for truth's sake either do I speak out what I think. No —

I sing as the bird sings
That on the bough alights;
The song that from me springs
Is pay that well requites

I sing because — I am a singer. But I use you for it because I — need ears.

comment by Caledonian2 · 2007-11-08T12:28:19.000Z · LW(p) · GW(p)

Selfishness - or to avoid confusion, let's call the concept 'self-interest' - takes on rather a different appearance when it's realized that the 'self' is not something necessarily limited to the boundaries of the physical form that embodies the distinction.

To the degree that we identify with and value the rest of humanity, sacrificing one's own existence to preserve the rest of humanity can be in the self-interest. To the degree that we don't, or that we negatively value the rest of humanity, that action can be against self-interest. If we disliked humanity enough, we'd choose to destroy it even if it cost us our own lives (which presumably we'd value) in the process.

comment by Caledonian2 · 2007-11-08T12:37:21.000Z · LW(p) · GW(p)

So, it seems that Eliezer's working definition of an intelligent person is "someone who agrees with me".

Communities form because they repress the incompatible. Particularly on the Internet, where it's easy to restrict who can participate, people tend to agree not because they persuade one another but because they seek out and associate with like-minded people.

Obviously Eliezer thinks that the people who agree with the arguments that convince him are intelligent. Valuing people who can show your cherished arguments to be wrong is very nearly a post-human trait - it is extraordinarily rare among humans, and even then unevenly manifested.

comment by ehj2 · 2007-11-08T14:12:54.000Z · LW(p) · GW(p)

If we posit an ideal world where every person has perfect and complete knowledge, and the discipline and self-control to act consistently on that knowledge, it's possible we can equate the most self-interested act with the most ethical.

Until we have that ideal world, to posit that people always simply do what they want to do anyway, and rationalize their behavior to their philosophy of life, is to engage in a bit of the same rationalization as when we conclude that "if only they knew as much as I know, they would do what I think they should do."

Too much of what we are currently discovering about actual human behavior has to get swept under the rug to equate selfishness with ethics. [e.g., the Stanford Prison Experiment.]

/ehj2

comment by Brandon_Reinhart · 2007-11-08T15:02:16.000Z · LW(p) · GW(p)

Is providing answers to questions like "Would you do incredible thing X if condition Y were true" really necessary if thing X is something neither person would likely ever be able to do and condition Y is simply never going to happen? It seems easy to construct impossible moral challenges to oppose a particular belief, but why should beliefs be built around impossible moral edge cases? Shouldn't a person be able to develop a rational set of beliefs that do fail under extreme moral cases, but at the same time still hold a perfectly strong and non-contradictory position?

comment by Michael_Sullivan · 2007-11-08T16:29:47.000Z · LW(p) · GW(p)

Obviously Eliezer thinks that the people who agree with the arguments that convince him are intelligent. Valuing people who can show your cherished arguments to be wrong is very nearly a post-human trait - it is extraordinarily rare among humans, and even then unevenly manifested.

On the other hand, if we are truly dedicated to overcoming bias, then we should value such people even more highly than those whom we can convince to question or abandon their cherished (but wrong) arguments/beliefs.

The problem is figuring out who those people are.

But it's very difficult. If someone can correctly argue me out of an incorrect position, then they must understand the question better than I do, which makes it difficult or impossible for me to judge their information. Maybe they just swindled me, and my initial naive interpretation is really correct, while their argument has a serious flaw that someone more schooled than I would recognize?

So I'm forced to judge heuristically by signs of who can be trusted.

I tentatively believe that a strong sign of a person who can help me revise my beliefs is a person who is willing to revise their beliefs in the face of argument.

Eliezer's descriptions of his intellectual history and past mistakes are very convincing positive signals to me. The occasional mockery and disdain for those who disagree is a bit of a negative signal.

But this comment here is not a negative signal at all, for me. Why? Because even if Eliezer were wrong, the other party's willingness to reexamine is a strong signal of intelligence. Confirmation bias is so strong that the willingness to act against it is of great value, even if this sometimes leads to greater error. A limited, faulty error correction mechanism (with some positive average value) is dramatically better than no error correction mechanism in the long run.

So yes, if I can (honestly) convince a person to question something that they previously deeply held, that is a sign of intelligence on their part. Agreeing with me is not the signal. Changing their mind is the signal.

It would be a troubling sign for me if there were no one who could convince me to change any of my deeply held beliefs.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-11-08T17:00:12.000Z · LW(p) · GW(p)

It's not that the one agreed with me and declared himself no longer selfish, but that he showed nonzero reactivity in the face of an unexpected argument, a rare thing. Further conversation (not shown) did seem to show that he was thinking about it. You don't see, say, Caledonian ever updating his views, or showing nonzero dependency between what I say and his ability to comment negatively on every post.

comment by Caledonian2 · 2007-11-08T20:08:47.000Z · LW(p) · GW(p)

Ah, but I have yet to be confronted with an argument that would cause me to update my views.

So now the problem expands: if two people disagree about the worth of an argument, what criteria do we use to choose between them? What if we ARE one of the two people? How do we use our own judgement to evaluate our own judgement?

Simply, we can't. At best, we can look for simple errors that we presume we can objectively determine, and leave the subtle issues to other systems.

I don't see you admitting you were wrong in the previous threads, Eliezer. Should I interpret that as your unwillingness to admit to error, or that you're so much smarter than me that I can't even comprehend how you're actually correct?

comment by George_Weinberg · 2007-11-08T20:39:27.000Z · LW(p) · GW(p)

"Selfish" in the negative sense means not just pursuing one's own interests, but doing so heedless of the harm one's actions may be causing others. I don't think there are many proponents of "selfishness" in this sense.

There are people that are "selfless" in the sense that they not only don't act according to their direct self-interest, they even abandon their own concepts of true and false, right and wrong, trusting some external authority to make these judgments for them. Religious, political, whatever. People who praise selfishness are generally contrasting it with this kind of selflessness.

comment by Brandon_Reinhart · 2007-11-08T21:16:45.000Z · LW(p) · GW(p)

My understanding is that the philosophy of rational self-interest, as put forward by the Objectivists, contains a moral system founded first on the pursuit of maintaining a high degree of "conceptual" volitional consciousness and freedom as a human being. Anything that robs one's life or robs one's essential humanity is opposed to that value. The Objectivists' favor of capitalism stems from a belief that capitalism is a system that does much to preserve this value (the essential freedom and humanity of individuals). Objectivists are classical libertarians, but not Libertarians (and in fact make much of their opposition to that party).

I believe that an Objectivist would welcome the challenges posed in the post above, but might not consider them a strong challenge to his beliefs simply because they aren't very realistic scenarios. Objectivists generally feel that ethics need not be crafted to cover every scenario under the sun, but instead act as a general guide to a principled life that upholds the pursuit of freedom and humanity.

"If you're genuinely selfish, then why do you want me to be selfish too? Doesn't that make you concerned for my welfare? Shouldn't you be trying to persuade me to be more altruistic, so you can exploit me?"

In the long run, exploiting others seems likely to end up a dead-end road. It might be rational and rewarding in the short term, but ultimately it is destructive. Furthermore, it seems to be a violation of principle. If I believe in my own freedom and that I would not want to be misled, I should not attempt to rob others of their freedom or mislead them without significant compelling reason. Otherwise, I'm setting one standard for my own rights and another for others. By my example, then, there would be no objective ethical standard for me to appeal to when objecting to someone attempting to mislead or exploit me. After all, if I set a subjective standard for behavior, why shouldn't they? But this isn't rigorous logic, and it smacks of a rationalization as referenced here:

"But what I really want to know is this: Did you start out by thinking that you wanted to be selfish, and then decide this was the most selfish thing you could possibly do? Or did you start out by wanting to convert others to selfishness, then look for ways to rationalize that as self-benefiting?"

The problem seems to be more general: argument with the intent of converting. That intent alone seems to cast suspicion on the proceedings. A rational person would, it seems to me, be willing to lay his arguments on the table for review and criticism and discussion. If, at some point in the future, others agree they are rational arguments and adopt them as beliefs then everyone should be happy because the objectives of truth and learning have been fulfilled. But "converting" demands immediate capitulation to the point of discussion. No longer is the discussion about the sharing of ideas: reward motivators have entered the room.

Self-edification that one's own view has been adopted by another seems to be a reward motive. Gratification that a challenge has been overcome seems to be a reward motive. Those motives soil the discussion.

And the one said, "You may be right about that last part," so I marked him down as intelligent.

The man is intelligent, not because he agreed with Eli's point, but because he was reviewing his beliefs in light of new information. His motive was not (at least not entirely) conversion, but genuine debate and learning.

"Intelligence is a dynamic system that takes in information about the world, abstracts regularities from that information, stores it in memories, and uses it knowledge about the world to form goals, make plans and implement them."

The speaker is doing just that. He might later choose to reject the new information, but at this time he is indicating that the new information is being evaluated.

comment by GNZ · 2007-11-09T11:30:03.000Z · LW(p) · GW(p)

Like Pablo, I have considered whether arguing for (in my case) altruism/utilitarianism is always altruistic, and thought "well, probably" - but I don't analyse it much because in the end I don't know if it would matter - it seems I do what I do because 'that is what I am', more than 'that is what I think is right'. I guess it works the other way too, eh.

comment by Daniel_Humphries · 2007-11-09T18:43:58.000Z · LW(p) · GW(p)

Michael Sullivan:

That's an exceptionally clear exegesis. Thanks!

Pablo Stafforini:

The words of Max Stirner (with whom I am admittedly unfamiliar) that you quote seem to me like so much bluster and semantic question-begging.

Do I write out of love to men? No, I write because I want to procure for my thoughts an existence in the world; and, even if I foresaw that these thoughts would deprive you of your rest and your peace, even if I saw the bloodiest wars blah blah blah

He sings not out of love for the hearer, but because he loves to sing and the hearer is useful in the act of singing? Do I have that right? That is... if his tree falls in the forest and no one is around, it does not make a sound?

Many philosophers (myself included, I believe), would argue that he is describing the functional definition of love: action and desire passing back and forth between two (or more) beings, each one depending on the other for his or her fulfilment and happiness. But it seems he wants to say that his dependence on others is a sign of isolation and not connection... I know my wording here is indefinite, but that's because Stirner's is. How is this bit of poetry anything more than blustering rationalization after-the-fact?

Does Max Stirner offer a less macho, less silly, more considered response to the objections that Eliezer raised with his "selfish" interlocutor?

Replies from: buybuydandavis
comment by buybuydandavis · 2011-10-20T11:20:16.793Z · LW(p) · GW(p)

Does Max Stirner offer a less macho, less silly, more considered response to the objections that Eliezer raised with his "selfish" interlocutor?

I'm a fan of Stirner, but have always found this particular passage disingenuous. Certainly he sings because he is a singer. It's possible that he is unconcerned with the effect of his song on those without the ears to hear it, but I don't find it credible that he had no hope of benefiting those with the ears to hear. So I think he writes, at least somewhat, out of love for at least some men.

If you're genuinely selfish, then why do you want me to be selfish too?

Because the usual forms of unselfishness are based in conceptual confusions that make you less useful to me, and often downright dangerous. And, it's both sad and rather distasteful to watch you live your life in such a crippled fashion. Such waste offends my sensibilities.

I would much prefer that my neighbors live for themselves, than live for God, Gaia, Allah, Evolution, Justice, the State, The Volk, The Proletariat, etc.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-10-20T13:34:53.848Z · LW(p) · GW(p)

I can see where this might be true, but I can also see where it might be mere sophistry concealing a fundamental concern for your neighbor's well-being. Can you provide some concrete examples of typical ways in which your neighbors' unselfishness lessens their usefulness to you?

Replies from: buybuydandavis
comment by buybuydandavis · 2011-10-21T00:00:18.986Z · LW(p) · GW(p)

I guess I didn't make myself clear. I'm not concealing a fundamental concern for my neighbor's well being. I have it, and I think Stirner does too, despite his disingenuous denial here. Hence my comment that he writes, at least in part, out of love for some men.

Stirner, The Ego and Its Own:

I love men, too, not merely individuals, but every one. But I love them with the consciousness of my egoism; I love them because love makes me happy, I love because loving is natural to me, it pleases me. I know no 'commandment of love'. I have a fellow-feeling with every feeling being, and their torment torments, their refreshment refreshes me too

It's the commandment of love he rejects, not his own love.

Loving other men is no more unselfish than loving your car. You like it shiny and running well, maybe even after you sell it to someone else.

It is perfectly selfish to love what you love, and hate what you hate. To care about what you care about. Why should I limit my concerns to what lies in a 1 inch bubble around myself? It's not what concerns you that makes you an egoist, it's whether you bow to an ideological compulsion to serve an alien concern over your own.

My own take on Stirner's Egoism is that it is best distinguished as the antidote to various forms of Moral Objectivism, not Altruism.

Replies from: taelor, antigonus
comment by taelor · 2011-10-21T07:08:00.585Z · LW(p) · GW(p)

It is perfectly selfish to love what you love, and hate what you hate. To care about what you care about. Why should I limit my concerns to what lies in a 1 inch bubble around myself? It's not what concerns you that makes you an egoist, it's whether you bow to an ideological compulsion to serve an alien concern over your own.

Note that Stirner believed that it is impossible to serve an alien concern over your own. The fact that you are concerned with something makes it your concern. Stirner called people who claim to serve an alien concern above their own "involuntary egoists", and found the entire state of affairs to be laughably absurd.

comment by antigonus · 2011-10-25T17:17:52.907Z · LW(p) · GW(p)

Loving other men is no more unselfish than loving your car.

This makes little sense to me. Other people, unlike cars, have interests; and loving other people tends to have the effect of causing one to adopt those interests as one's own. What exactly is unselfishness supposed to look like, if not that?

comment by TGGP4 · 2007-11-09T21:16:25.000Z · LW(p) · GW(p)

Daniel, you can read Stirner's book here. It's not really "macho"; that would be more Ragnar Redbeard's "Might Makes Right".

Among the things Stirner writes about is freedom of expression. He does not care for what the state or the church say he may write, because he takes such freedom for himself (many unauthorized printings of his book were made). He does not respect the holy but instead regards taboos against blasphemy as attempted restrictions on him that he will violate. For people who say that he ought not to speak of certain things because they are horrible and upsetting, he says the uninterrupted calm of others is not of his concern.

Stirner does not reject all notions of love or even alms-giving. He just views them in an egoistic manner, imagining a Union of Egoists (which may consist in something as simple as two friends going for a walk) that find benefit in each other.

Does Max Stirner offer a less macho, less silly, more considered response to the objections that Eliezer raised with his "selfish" interlocutor?

Stirner can be said to offer a response (though I suppose not in a literal sense since he has been dead for so long), but you do not strike me as inclined to give it a fair reading.

comment by Daniel_Humphries · 2007-11-09T23:16:00.000Z · LW(p) · GW(p)

TGGP:

Thanks for the tip. I'll check it out.

comment by Recovering_irrationalist · 2007-11-10T02:29:43.000Z · LW(p) · GW(p)

Eliezer's descriptions of his intellectual history and past mistakes are very convincing positive signals to me.

I agree, but have a nagging doubt. When I read years-old writings where he makes some of those mistakes he sounds about as knowledgeable, just as smart, just as honest, and just as sure he's right as he is now in his new beliefs.

Although I was convinced by very few of those older mistakes (before I searched and found retractions), that could just as easily mean new arguments got super persuasive rather than super accurate.

His writings have convinced me to change many beliefs recently. How much is that down to elite arguing skills well-practiced from convincing the toughest judge of all, himself? What proportion of those beliefs is likely to be wrong, however convincing the arguments sound to me, and to him?

Believe it or not, this isn't meant to be critical. I can't fault the way he's currently guiding his belief system, and to me he seems further along the path I'm trying to start on than anyone I know. I'm just not sure how to objectively judge from back here how far along the path he's really managed to get.

Thoughts about other sources of knowledge welcome, this doesn't need to be about one person.

comment by Recovering_irrationalist · 2007-11-10T02:38:17.000Z · LW(p) · GW(p)

PS. I know arguments should be judged based on their own worth and not who made them, but there are other factors.

comment by Caledonian2 · 2007-11-10T04:17:27.000Z · LW(p) · GW(p)

Sometimes people change their minds because new evidence and novel arguments have forced them to re-evaluate their positions.

Other times, people have never thought through even the obvious arguments, or are convinced easily by weak data and unimpressive theses, so they shift from one point to another.

If someone changes their positions frequently, AND they're very confident in their positions, that's a bad sign.

comment by gutzperson · 2007-11-10T09:45:27.000Z · LW(p) · GW(p)

Eliezer: “Other mischievous questions to ask self-proclaimed Selfishes: "Would you sacrifice your own life to save the entire human species?" (If they notice that their own life is strictly included within the human species, you can specify that they can choose between dying immediately to save the Earth, or living in comfort for one more year and then dying along with Earth.)”

What if the person saving the entire human species wanted to commit suicide anyway, or if he/she had this dream of heroism and hoped to become immortal and be rewarded in an afterlife? An apparently selfless deed can be a very selfish one.

comment by ChrisA · 2007-11-10T09:52:17.000Z · LW(p) · GW(p)

The question Eliezer raises is the first problem any religious person has to face once he abandons the god thesis, i.e., why should I be good now? The answer, I believe, is that you cannot act contrary to your genetic nature. Our brains are wired (or have modules, in Pinker's terms) for various forms of altruism, probably for group-survival reasons. I therefore can't easily commit acts against my genetic nature, even if intellectually I can see they are in my best interests. (As Eliezer has already recognised, this is why AI or uploaded personalities are so dangerous; they will be able to rewrite the brain code that prevents widespread selfishness. I say dangerous, of course, because the first uploaded person or AI will likely not be me, so they will be a threat to me.)

More simply, the reason I don't steal from people is not that stealing is wrong, but that my genetic programming (perhaps also an element of social conditioning) is such that I don’t want to steal, or have an active non-intellectual aversion to stealing.

Why do I try to convince you of this point of view if I am intellectually convinced that I should be selfish? I agree with Robin: it is because I am genetically programmed to do so, probably related to status seeking. Also, I genuinely would like to hear arguments against this point of view, in case I am wrong.

Eliezer, genetics as a source of our ethical actions means that it is unlikely we can ever develop a consistent ethical theory. If you accept this, does it not present a big problem for your attempt to create an ethical AI? Is it possible that your rejection of this approach to ethics, and your attempt to prove a standalone moral system, is perhaps subconsciously driven by the impact this would have on your work?

comment by Recovering_irrationalist · 2007-11-10T19:03:14.000Z · LW(p) · GW(p)

If someone changes their positions frequently, AND they're very confident in their positions, that's a bad sign.

I don't think he changes his mind too frequently, and being overly confident at the time of now-abandoned positions isn't unusual. My point was that in Eliezer's case a knowledgeable, smart, honest, and self-certain argument doesn't imply strong evidence of truth, because those qualities appear in arguments that turned out false.

To be honest I think I was hoping someone would leap to his defense and crush my argument, giving me permission to be as sure as he is about his beliefs that I've adopted, whereas what I should do is keep a healthy amount of skepticism and resist any urge to read "Posted by Eliezer" as "trust this".

comment by Nick_Tarleton · 2007-11-11T19:39:14.000Z · LW(p) · GW(p)

ChrisA, why are you intellectually convinced you should be selfish? Rationality doesn't demand any particular goals. A genuinely altruistic person, if uploaded, would overwrite the "brain code" (a bad analogy; evolved tendencies aren't deterministic "code") that promotes selfishness.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-11-11T21:38:32.000Z · LW(p) · GW(p)

Recovering,

While I wish I had something reassuring to say on this subject, you should probably be quite disturbed if you find my work from 1997 sounding as persuasive as my work from 2007.

comment by ChrisA · 2007-11-12T09:53:26.000Z · LW(p) · GW(p)

Nick

My response is: evolution! Let's say a genuinely (whatever that means) altruistic entity exists. He is then uploaded. He then observes that not all entities are fully altruistic; in other words, they will want to take resources from others. In any contest over resources this puts the altruistic entity at a disadvantage (he is spending resources helping others that he could use to defend himself). With potentially mega-intelligent entities, any weakness is serious. He realises that very quickly he will be eliminated if he doesn't fix this weakness. He either fixes the weakness (becomes selfish) or he accepts his elimination. Note that uploaded entities are likely to be very paranoid; after all, when one is eliminated, a potentially immortal life is eliminated, so they should have very low discount rates. You might be a threat to me in a million years, so if I get the chance I should eliminate you now.

If your answer is that the altruistic entities will be able to use cooperation to defend themselves against the selfish ones, you must realise there is nothing to stop a genuinely selfish entity from pretending to be altruistic. And the altruistic entities will know this.

I don't think that most people realise that the reason we can work as a society is that we have hardwired cooperation genes in us, and we know that. We are not altruistic through choice. Allow us to make the decision on whether to be altruistic and the game theory becomes very different.

comment by Recovering_irrationalist · 2007-11-12T12:05:30.000Z · LW(p) · GW(p)

While I wish I had something reassuring to say on this subject, you should probably be quite disturbed if you find my work from 1997 sounding as persuasive as my work from 2007

But I said...

Although I was convinced by very few of those older mistakes (before I searched and found retractions), that could just as easily mean new arguments got super persuasive rather than super accurate.

All my comments mean in practice is that even though once I study and investigate your (new) arguments they nearly always seem to me to be right, I won't let myself start lazily suspending critical judgment and investigation of your beliefs before adopting them. I hope you agree that's a good thing.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-11-12T18:56:20.000Z · LW(p) · GW(p)

Fair enough, Recovering. My own point is that:

When I read years-old writings where he makes some of those mistakes he sounds about as knowledgeable, just as smart, just as honest, and just as sure he's right as he is now in his new beliefs.

Then those factors aren't very good discriminators of truth, are they? It's not just "improper" to take them into account, it actually doesn't work.

In whatever facets I sounded about as "knowledgeable", "smart", "honest", or "self-assured" then as now, you might take these facets into account in deciding whether someone's arguments are worth your time to read, but you shouldn't take them into account in deciding whether the person is right. Whatever it is that caused you to reject most of my old self's beliefs regardless, is what's doing the actual work of discriminating truth from falsehood, not those other perceptions.

comment by Recovering_irrationalist · 2007-11-12T22:00:09.000Z · LW(p) · GW(p)

In whatever facets I sounded about as "knowledgeable", "smart", "honest", or "self-assured" then as now, you might take these facets into account in deciding whether someone's arguments are worth your time to read, but you shouldn't take them into account in deciding whether the person is right.

Agreed. Having said that, I do find those facets to correlate with truth, but the correlation flattens out for high values. Besides, the first two would be hard for me to judge well between your 1997 and 2007 selves, for obvious reasons. Maybe with the right efforts my 2012 self could get close enough to tell.

Whatever it is that caused you to reject most of my old self's beliefs regardless, is what's doing the actual work of discriminating truth from falsehood, not those other perceptions.

That only works if your new self's views are true, rather than just closer to the truth, or better argued, or less alarm-bell-raising, or fitting better with how my mind works, or what I already believe, or what I want to believe, etc. etc.. That was my point.

Don't worry, it's my neurosis not yours. :-)

comment by Mark_Nau · 2007-11-15T11:28:07.000Z · LW(p) · GW(p)

When I say that I am selfish, I mean to express that I think the best model for "altruism" is that of a good consumed much like any other. I consume it for my personal enjoyment, not in proportion to the benefit received by the recipient. And, ceteris paribus, with a declining marginal utility as quantity consumed increases.

In my eyes, a "true" altruist would value the 1000th meal provided to a starving third-worlder on par with the 1st one provided, given that the beneficiaries valued the meals similarly. Nobody behaves that way. Altruism is a terrible model for human behavior.

This sort of scope insensitivity isn't a logical error for selfish people. I have every reason to value the 1000th orange I consume this week less than the 1st. It's only a conundrum for people claiming to be altruistic. And I would think it would be a killing blow.

comment by J_Thomas · 2007-11-15T13:01:32.000Z · LW(p) · GW(p)

Mark, altruists have to deal with their costs too.

It's possible for an altruist to value the thousandth altruistic meal as much as the first, but as his resources shrink the value of the alternatives rises. If I provide meals for a hundred thousand starving people and then I have nothing left and I become a starving person myself, that isn't good. At some point I want to keep enough capital to maintain my continuing ability to feed starving people.

I'm not claiming that it's true that no altruist experiences diminishing returns, or even that there is an altruist who doesn't experience diminishing returns. But the behavior doesn't prove that there couldn't be, and so this isn't a killing blow.

comment by Mark_Nau · 2007-11-15T13:25:26.000Z · LW(p) · GW(p)

J Thomas,

A non-diminishing-returns altruist would hit a point where the utility of spending a marginal resource on a "selfish" purpose dips below the best use of that resource for an "altruistic" purpose. Every single marginal resource after that should go toward altruistic purposes as well. Why? Because for anyone with non-astronomical resources, there is effectively an endless supply of altruistic options that all provide effectively the same degree of benefit to the recipient. The non-diminishing-returns altruist would increase altruistic allocation in 1:1 proportion to increases in resources.

I know of nobody like this, and it strikes me intuitively as a horrible starting point for a model of any portion of human behavior.

Goods that fall under the heading of "altruistic" are just like any other goods, with people exhibiting different personal tastes and preferences to consume them for their own benefit.
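
To make the contrast concrete, here is a minimal sketch of the allocation argument, assuming purely illustrative utility curves (the specific functions and numbers below are placeholders, not anyone's actual preferences):

```python
# A sketch, not a real model: compare how a diminishing-returns altruist
# and a constant-returns altruist split a budget between personal
# consumption and giving. The utility curves are arbitrary assumptions.

def marginal_personal(spent):
    # Diminishing marginal utility of personal consumption.
    return 1.0 / (1.0 + spent)

def allocate(budget, marginal_giving, step=1.0):
    """Spend `budget` one unit at a time on whichever use currently
    offers the higher marginal utility."""
    personal, giving = 0.0, 0.0
    while budget >= step:
        if marginal_personal(personal) >= marginal_giving(giving):
            personal += step
        else:
            giving += step
        budget -= step
    return personal, giving

# Diminishing-returns altruist: giving also declines in value, so both
# personal and altruistic spending keep growing as the budget grows.
print(allocate(1000, lambda g: 0.5 / (1.0 + 0.01 * g)))

# Constant-returns altruist: every marginal unit given is worth the same
# (here 0.05). Once personal marginal utility falls below that constant,
# every additional unit goes to giving -- the 1:1 marginal allocation
# described above.
print(allocate(1000, lambda g: 0.05))
```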

comment by James_M. · 2007-11-18T21:20:56.000Z · LW(p) · GW(p)

As a selfish prizefighter, I want to beat my opponent. If I were an altruist instead, I don't think I'd be able to win one fight. Because I am in fact selfish, fighting an opponent who is an altruist would not do much for my self-esteem. Only by fighting better fighters than I am do I learn, not by fighting someone inferior. If a superior fighter does not do his best in a given match with me for some reason, I cannot objectively pretend to be better than him just because I won once. It benefits me to beat him when he's at his best. I like to share my knowledge, so I teach others. It benefits me when someone learns a technique I teach them well, and puts their own take on it. Thus my student becomes my teacher, and I am that much better off for it. There may come a time when my student defeats me, and though I will probably be upset about getting old and slow, a part of me will be proud of him, and of myself.

Anyone I've met that's worth their salt is generally not afraid of their own shadow, and doesn't hoard ideas or knowledge, afraid that someone will outdo them. Regardless, someone always does. If in life you either sink or swim, merely floating is like compromising between life and death, and between the two, only death gains from life, not vice versa.

It's a philosophy of life, so of course there will be people who disagree, or don't really follow even if they do agree. But in terms of what kinds of people gravitate to each other, even if you disagree you're probably more likely to gravitate to people who are good at what they do and are willing to teach you. Thus I have met people who are sufficiently selfish, but not necessarily objective or good at what they do, and a load of other permutations, but I've never met someone who's exceptional at what they do who isn't selfish. You don't get good by not knowing what you want and not achieving it.

comment by Tim_Tyler · 2008-07-27T17:14:52.000Z · LW(p) · GW(p)

Re: It looks to me like when people espouse a philosophy of selfishness, it has no effect on their behavior

This is almost certainly not true of conscious genetic selfishness. Such individuals can be expected to engage in various rare and unusual activities - such as donating to sperm banks.

comment by [deleted] · 2010-08-12T04:51:41.923Z · LW(p) · GW(p)

There's all kinds of incoherency with the idea of "selfishness" being defined as "acting in your own interest." What is your own interest? Can't you define it circularly as being anything you desire to do? Being a "selfish person" doesn't necessarily make sense by that definition.

Maybe it's better to go the opposite direction. Selfishness is indifference to the desires and well-being of others. Whatever else you may be doing (let's drop the question of whether it's "self-interested" or not), it's more important to you than other people. I wouldn't be surprised if 100% selfish people exist in this sense. All you'd have to do is never make a decision where the deciding factor is someone else's well-being.

comment by buybuydandavis · 2011-10-21T00:02:03.463Z · LW(p) · GW(p)

I'm relieved that at least a few people mentioned Stirner. The selfishness EY portrays is not representative of selfishness in the sense of any of the literature of Philosophical Egoism of which I am aware. His scenario doesn't even correspond to what a Randian might believe about selfishness.

Max Stirner is the best and most intellectually consistent egoist I'm aware of. Less accomplished but more polemical writers in the same vein include John L. Walker, Benjamin Tucker, John Badcock, Dora Marsden (for a period of time), and Sid Parker.

Max Stirner's "The Ego and His Own" is available online at various sites, including Gutenberg. Many of the less canonical works can be found at: http://i-studies.com/journal/index.shtml.

Works by Marsden (Freewoman) and Parker (Minus One) are archived there, as well as issues of Non Serviam and i-studies, published by Svein Olav Nyberg. The Nyberg publications contain scattered articles by Prof. Lawrence Stepelevich, the one time president of the Hegel Society of America, who is the best professional philosopher on Stirner that I am aware of (most are just awful), although I've heard good things about John F. Welsh's "Max Stirner's Dialectical Egoism: A New Interpretation".

My own take on Stirner's Egoism is that it is best distinguished as the antidote to various forms of Moral Objectivism, not Altruism. The Metaethics sequence, which I have not completed yet, leaves me thinking I'll feel the urge to share a few thoughts on Stirner once I'm done with the sequence.

comment by taelor · 2011-10-21T06:50:12.502Z · LW(p) · GW(p)

I've always felt that both "selfishness" and "altruism" were results of the Fundamental Attribution Error. Some actions are deemed "selfish" according to society's mean value set, others are deemed "altruistic" or "selfless". Personally, I'm more interested in the chain of events that ultimately lead up to an action being performed and in the chain of events that occurred as a result of it than I am in applying labels of dubious and limited utility to things.

comment by Grognor · 2011-10-25T09:34:10.550Z · LW(p) · GW(p)

"Would you steal a thousand dollars from Bill Gates if you could be guaranteed that neither he nor anyone else would ever find out about it?"

Without making a case for or against my own altruism, I'd definitely answer yes to this question, for two reasons, which are really one reason:

1) From a utilitarian standpoint, a thousand dollars benefits me much more than losing a thousand dollars harms Bill Gates. Much, much, much more.

2) I am destitute and in grave need of money

Is this fair reasoning? I'd also say the monetary status of the individual must (obviously) be taken into account.

I just brought this up because it seems no one mentioned it, but to me it stuck out like it was in size thirty alternating pink-and-gold font, with a nice background highlight of green. I don't think there's some kind of Platonic wrongness in theft, one of those things that you Absolutely Must Never Do. I'd murder Hitler if it could prevent the apocalypse.

Seriously, it seems rather weird to me that nobody in the comments even brought it up. Am I missing something? Am I overthinking? I think I'm overthinking.

comment by blacktrance · 2013-06-26T19:15:13.028Z · LW(p) · GW(p)

"If you're genuinely selfish, then why do you want me to be selfish too? Doesn't that make you concerned for my welfare? Shouldn't you be trying to persuade me to be more altruistic, so you can exploit me?"

You're conflating selfishness with vulgar egoism. Suppose your well-being makes me happy, and I believe that making you selfish will make you happier. Then convincing you to be selfish is the self-interested thing to do. If I tried to convince you to be more altruistic (in the self-sacrificing sense, not in the benevolent sense) so I could exploit you more, that would be bad for you, which outweighs the benefit I'd get from exploiting you. Selfishness is about maximizing your hedons, which does not at all imply not caring about others - in fact, it usually means caring for some others.

comment by zackkenyon · 2013-06-26T19:36:02.355Z · LW(p) · GW(p)

If I didn't know that someone was going to be tortured, I would rather stub my toe, and I do not claim to be selfish. Otherwise I am not really sure how to interpret the question.

comment by SeanMCoincon · 2014-07-31T20:57:56.636Z · LW(p) · GW(p)

Many big-L Libertarians I've met - along with those who consider themselves to be trench-fighters for Ayn Rand-ian Objectivism - seem to want to conflate "selfishness" with "enlightened self-interest" for the positive connotations of the latter... yet their rationale for various big-L proposals (such as "let's turn over national security to corporations, who will certainly never abuse the power to force decisions upon people") tends to be of the extremely rosy, happy-death-spiral, declare-anything-that-doesn't-fit-an-"externality" variety. That seems somewhat removed from any meaning of "enlightened" that approaches sensibility; and that's coming from a mild, little-l, "a free society means you need a reason to make things illegal" libertarian framing.

Ultimately, I can understand the "It's So Simple! (tm)" appeal of claiming that selfishness itself is good as an absolute, but that advice only appears to hold true - at either a societal OR individual level - if the scoreboard is measuring relative altruistic effects. A benefit to oneself that derives from (having helped propagate) a mutually self-interested society only qualifies as a benefit relative to 1) a society of self-sacrificial lemmings (which is a bit of a straw man); or 2) no society at all, where there really ARE no externalities and self-interest can be truly self-referent. ...I feel I may not be explaining this clearly, so I'll simply request suggestions and wrap up this comment.

It seems that, instead of trumpeting "selfishness!" as a counterintuitive moral panacea, all that's really needed for altruism to symbiotically cohabit with "selfishness" is to use the phrase "rational self-regard" instead, since it doesn't require you to engage in Ethical-Egoism-esque displays of unnecessary dickishness toward your fellow man. ...And I feel I may have to try to write an article on that subject if one does not yet exist.

Replies from: shminux
comment by shminux · 2014-07-31T21:22:31.240Z · LW(p) · GW(p)

conflate "selfishness" with "enlightened self-interest" for the positive connotations of the latter...

If you have a look at this blog post by one of the more famous ex-regulars, this is basically the "motte-and-bailey" tactic, where the motte is "selfishness = enlightened self-interest" and the bailey is something like "let the free market rule".

comment by [deleted] · 2014-11-08T21:52:02.313Z · LW(p) · GW(p)

I think this whole egoism vs. altruism debate involves too much black-and-white thinking.

What about evolutionary ethics? Doing what is good for your genes and the survival of the species.

Therefore, one would put self-interest over the interests of other people, but not totally disregard the latter. Also, some people, such as your children, would be more important than others. The ranking would look something like this:

  1. Yourself
  2. Your children
  3. Your family and relatives
  4. The rest of the world

Selfishness is the default, but it can be overridden if the badness of the consequences times the number of affected people is large enough. One also has to consider how high the weight for each category is.
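A minimal sketch of that weighing rule, with weights and harm numbers I have invented purely for illustration (none of them come from the comment); it only shows the "selfish by default, overridden when the weighted harm to others is large enough" structure:

```python
# Toy weights for how much each category's welfare counts to the agent.
# All numbers here are invented for illustration only.
WEIGHTS = {
    "self": 1.0,
    "children": 0.9,
    "relatives": 0.3,
    "strangers": 0.01,
}

def weighted_harm(harms):
    """Sum of (harm to each affected party) x (weight of their category)."""
    return sum(WEIGHTS[category] * harm for category, harm in harms)

def act_selfishly(benefit_to_self, harms_to_others):
    # Selfishness is the default; it is overridden when the weighted
    # harm to others outweighs the benefit to oneself.
    return benefit_to_self > weighted_harm(harms_to_others)

# Avoiding a stubbed toe (tiny benefit to self) at the cost of a stranger
# being tortured (huge harm to a lightly weighted stranger):
print(act_selfishly(benefit_to_self=1,
                    harms_to_others=[("strangers", 1_000_000)]))  # False -> stub the toe
```

Even with the stranger weighted at only 0.01, the sheer size of the harm swamps the toe, which matches the answers the commenter gives next.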

Therefore, the answer to your questions would probably be:

  • "Would you sacrifice your own life to save the entire human species?" - Yes
  • "If you had to choose one event or the other, would you rather that you stubbed your toe, or that the stranger standing near the wall there gets horribly tortured for fifty years?" - Stub my toe
  • "Would you steal a thousand dollars from Bill Gates if you could be guaranteed that neither he nor anyone else would ever find out about it?" - Yes, if my guilty conscience doesn't bring me more harm than the money brings benefits.

Update:

As I thought about it, the well-being of your children is actually more important than your own. This is supported by lots of data. According to Dan Gilbert, having children significantly decreases your overall happiness. Parents often sacrifice their quality of life for their children. I suppose it's not because they care about overall well-being on Earth, but more egoistically about their own genes.

comment by dimension10 · 2015-12-28T06:33:44.887Z · LW(p) · GW(p)

I used to think about self-proclaimed selfishness in this way all the time, but now I think I "get" the doctrine of selfishness. For instance, the answer to the first mischievous question is actually "because I get personal happiness from preaching selfishness, and that vastly overshadows any happiness I could instead get from a single person's altruism". The answer to the own-life-vs.-human-species question is "because a guilt-ridden life wouldn't maximise my personal happiness", and so on.

comment by omalleyt · 2016-09-18T01:31:56.540Z · LW(p) · GW(p)

When we weigh options in our mind, we pick the one that yields the cocktail of chemicals/neurotransmitters that induces the strongest positive response in our reward center. Or rather, the cocktail of chemicals/neurotransmitters that elicits the strongest positive response is able to pass its signals through to the motor neurons.

A desire to be moral, a desire to avoid pain, a desire to protect kin, all release chemicals.

Seen in this light, the phrase "everything one does is selfish" appears to reduce to "all choices are weighed through one's own neural algorithm." Which is so obvious as to be trivial. The only way to get around this would be to detach your motor neurons from your reward center, and hook them up to a committee of, say, ten other people's reward centers, with the action that receives the highest average response being performed. And the detachment is crucial. You can't just willingly abide by the committee's decision, because your choice to obey would still be passing through your own neural algorithm.
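Here is a minimal sketch of that committee thought experiment, with made-up actions and reward numbers (everything named below is hypothetical): one choice rule reads only your own reward center, the other reads only the committee's average, which is the detachment the comment says is crucial.

```python
# Hypothetical reward-center responses to two candidate actions.
# All names and numbers are invented for illustration.
actions = ["keep the money", "donate the money"]

my_reward = {"keep the money": 0.9, "donate the money": 0.4}

committee_rewards = {  # ten other people's reward centers
    "keep the money":   [0.1, 0.2, 0.0, 0.3, 0.1, 0.2, 0.1, 0.0, 0.2, 0.1],
    "donate the money": [0.8, 0.7, 0.9, 0.6, 0.8, 0.7, 0.9, 0.8, 0.6, 0.7],
}

def own_choice():
    # The ordinary case: the action with the strongest response in
    # my own reward center wins.
    return max(actions, key=lambda a: my_reward[a])

def committee_choice():
    # The detached case: the action with the highest average response
    # across the committee wins; my own reward center gets no vote.
    return max(actions, key=lambda a: sum(committee_rewards[a]) / len(committee_rewards[a]))

print(own_choice())        # "keep the money"
print(committee_choice())  # "donate the money"
```

As long as the choice function reads my own reward values at all, "everything one does is selfish" is trivially true; only the second, fully detached rule escapes it, which is the comment's point.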

Is this what people mean when they boldly assert that everything a person does is selfish? I don't think so. I think, when looked at like this, the question dissolves.

comment by Emiya (andrea-mulazzani) · 2020-12-05T14:35:29.675Z · LW(p) · GW(p)

The one replied:  "Well, if you become selfish, then you'll realize that it's in your rational self-interest to play a productive role in the economy, instead of, for example, passing laws that infringe on my private property."

 

I get that this isn't the point of the post but... what? No, just no. If I'm selfish, I'm going to pass laws that don't infringe on MY private property; why should I care about yours? Indeed, I'll just go on using any shred of political influence I have to make sure your taxes end up paying my expenses, and thus increase my private property while decreasing yours, thank you very much.

And how did this amazing economic system, in which it isn't simply more convenient to exploit, cheat, and take value away from others, get put in place if selfish people built it? Was it just an amazing coincidence that this amazingly fair setup was the most convenient one for the selfish rule-makers?

comment by [deleted] · 2021-08-26T13:48:06.144Z · LW(p) · GW(p)

My answer to your first mischievous question tends to be: "if you (also) identify as selfish, this will make you more predictable and thereby more trustworthy".  

I don't give two shits about one guy's contribution to the economy. My selfish incentives are purely local.

Besides, it's always better to cooperate with a rich guy than to exploit a poor one.

But of course there's no need to convince you to be something you already are.

comment by guillefix · 2022-07-29T02:45:58.988Z · LW(p) · GW(p)

"f they say that they'd be emotionally disturbed by knowing, specify that they won't know about the torture."

Couldn't one argue that having preferences about things you assume you don't know about wouldn't affect your actions?

When I'm deciding on an actual action, I can only take into account things I know, and nothing else, right?

So my preference in the case where I would never know about the person being tortured couldn't affect my actions, and in that sense doesn't matter?