The curse of identity

post by Kaj_Sotala · 2011-11-17T19:28:49.359Z · LW · GW · Legacy · 303 comments

So what you probably mean is, "I intend to do school to improve my chances on the market". But this statement is still false, unless it is also true that "I intend to improve my chances on the market". Do you, in actual fact, intend to improve your chances on the market?

I expect not. Rather, I expect that your motivation is to appear to be the sort of person who you think you would be if you were ambitiously attempting to improve your chances on the market... which is not really motivating enough to actually DO the work. However, by persistently trying to do so, and presenting yourself with enough suffering at your failure to do it, you get to feel as if you are that sort of person without having to actually do the work. This is actually a pretty optimal solution to the problem, if you think about it. (Or rather, if you DON'T think about it!) -- PJ Eby

I have become convinced that problems of this kind are the number one problem humanity has. I'm also pretty sure that most people here, no matter how much they've been reading about signaling, still fail to appreciate the magnitude of the problem.

Here are two major screw-ups and one narrowly averted screw-up that I've been guilty of. See if you can find the pattern.

It may not be immediately obvious, but all three examples have something in common. In each case, I thought I was working for a particular goal (become capable of doing useful Singularity work, advance the cause of a political party, do useful Singularity work). But as soon as I set that goal, my brain automatically and invisibly re-interpreted it as the goal of doing something that gave the impression of doing prestigious work for a cause (spending all my waking time working, being the spokesman of a political party, writing papers or doing something else few others could do). "Prestigious work" could also be translated as "work that really convinces others that you are doing something valuable for a cause".

We run on corrupted hardware: our minds are composed of many modules, and the modules that evolved to make us seem impressive and gather allies also evolved to subvert the ones holding our conscious beliefs. Even when we believe that we are working on something that may ultimately determine the fate of humanity, our signaling modules may hijack our goals so as to optimize for persuading outsiders that we are working on the goal, instead of optimizing for actually achieving the goal!

You can see this all the time, everywhere:

There's an additional caveat to be aware of: it is actually possible to fall prey to this problem while purposefully attempting to avoid it. You might realize that you have a tendency to only want to do particularly prestigious work for a cause... so you decide to only do the least prestigious work available, in order to prove that you are the kind of person who doesn't care about the prestige of the task! You are still optimizing your actions on the basis of expected prestige and being able to tell yourself and outsiders an impressive story, not on the basis of your marginal impact.

303 comments

Comments sorted by top scores.

comment by Wei Dai (Wei_Dai) · 2011-11-17T19:46:53.807Z · LW(p) · GW(p)

It seems like a large part of the problem is not that our brains unconsciously optimize for prestige per se, but they incorrectly optimize for prestige. Surely, having to take extra years to graduate and damaging one's own cause are not particularly prestigious. Helping Eliezer write a book will at least net you an acknowledgement, and you also get to later brag about how you were willing to do important work that nobody else was.

I don't have much empirical data to support this, but I suspect it might help (or at least might be worth trying to see if it helps) if you consciously optimized for prestige and world-saving simultaneously (as well as other things that you unconsciously want, like leisure), instead of trying to fight yourself. I have a feeling that in the absence of powerful self-modification technologies, trying to fight one's motivation to seek prestige will not end well.

Replies from: AnnaSalamon, sark, Suryc11, Will_Newsome
comment by AnnaSalamon · 2011-11-17T23:40:51.532Z · LW(p) · GW(p)

Seconding this.

As Michael Vassar would put it: capitalism with a 10% tax rate nets a larger total amount of tax revenue (long-term) than communism with an alleged 100% tax rate -- because, when people see economic activity as getting them what they want, the economy grows more, and one ends up with more in total, and hence also more for every major party, than one gets when treating total economic goods as a zero-sum pie to be divided up.

You have a bunch of different motives inside you, some of which involve status -- and those motives can be a source of motivation and action. If you help your status-seeking motives learn how to actually effectively acquire status (which involves hard work, promise keeping, pushing out of your comfort zone, and not wireheading on short-term self-image at the expense of goals), you can acquire more capability, long term -- and that capability can be used partly for world-saving. But you only get to harness this motive force if your brain expects that exerting effort will actually lead to happiness and recognition long term.

comment by sark · 2011-11-18T10:56:26.289Z · LW(p) · GW(p)

I'm not so sure we accord Kaj less status overall for having taken more years to graduate, and more status for helping Eliezer write that book. Are we so sure we do? We might think so, and then reveal otherwise by our behavior.

Replies from: CG_Morton, Kaj_Sotala
comment by CG_Morton · 2011-11-18T18:34:53.151Z · LW(p) · GW(p)

I can attest that I had those exact reactions on reading those sections of the article. And in general I am more impressed by someone who graduated quickly than one who took longer than average, and by someone who wrote a book rather than one who hasn't. "But what if that's not the case?" is hardly a knock-down rebuttal.

I think it's more likely that you're confusing the status you attribute to Kaj for the candidness and usefulness of the post with the status you would objectively add or subtract from a person if you heard that they had floundered or flourished in college.

Replies from: sark
comment by sark · 2011-11-18T23:27:48.191Z · LW(p) · GW(p)

What I had in mind was that his devotion to the cause, even though it ultimately harmed that cause, more than compensates in our eyes for his lack of strategic foresight and late graduation.

As for the book, we think less of him for not contributing to it in a more direct way, even as we abstractly understand what a vital job it was.

Though of course that may just be me.

comment by Kaj_Sotala · 2011-11-18T19:30:33.453Z · LW(p) · GW(p)

Though note that the relevant criterion is not so much what other people actually consider to be high-prestige, but what the person themselves considers to be high-prestige. (I wonder if I should have emphasized this part a little more, seeing how the discussion seems to be entirely about status in the eyes of others.) For various reasons, I felt quite strongly about graduating quickly.

Replies from: sark
comment by sark · 2011-11-18T23:30:16.362Z · LW(p) · GW(p)

I was aware of that, yes. But I was also assuming that what you considered to be high prestige within this community was well calibrated.

comment by Suryc11 · 2011-12-03T08:08:18.703Z · LW(p) · GW(p)

Your comment and this post have really clarified a lot of the thoughts I've had about status - especially as someone who is largely motivated by how others perceive me - thanks!

Any thoughts on how to best consciously optimize for prestige?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2011-12-04T10:34:53.801Z · LW(p) · GW(p)

Your comment and this post have really clarified a lot of the thoughts I've had about status - especially as someone who is largely motivated by how others perceive me - thanks!

I'm actually kind of ambivalent about it myself. Sometimes I wish I could go back to a simpler time when I thought that I was driven by pure intellectual curiosity alone. For someone whose "native" status-seeking tendencies aren't as destructive as the OP's, the knowledge may not be worth the cost.

Any thoughts on how to best consciously optimize for prestige?

Search for your comparative advantage (usually mentioned in the context of maximizing income, but equally applicable to maximizing prestige). This can be counterintuitive, so give it a second thought even if you think you already know. For example, in college I thought I was great at programming and never would have considered a career having to do with philosophy. Well, I am terrible at philosophy, but as it turns out, so is everyone else, and I might actually have a greater comparative advantage in it than in programming.
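
(A toy sketch of the comparative-advantage arithmetic behind this advice; all the numbers below are invented purely for illustration and are not taken from the comment.)

    # Comparative vs. absolute advantage, with made-up weekly output rates.
    you = {"programming": 10.0, "philosophy": 2.0}    # units per week (hypothetical)
    rival = {"programming": 8.0, "philosophy": 0.5}   # units per week (hypothetical)

    def opportunity_cost(person, field, other_field):
        """Units of other_field given up per unit of field produced."""
        return person[other_field] / person[field]

    # You are better at both fields in absolute terms, but your comparative
    # advantage lies where your opportunity cost is lowest relative to others.
    print(opportunity_cost(you, "philosophy", "programming"))    # 5.0
    print(opportunity_cost(rival, "philosophy", "programming"))  # 16.0
    # 5.0 < 16.0: philosophy is your comparative advantage, even though you
    # feel "terrible" at it in absolute terms.

The same calculation goes through whether the payoff being maximized is income or prestige.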

Look for the Next Big Thing so you can write that seminal paper that everyone else will then cite. More generally, try to avoid competing in fields already crowded with prestige seekers. Look for fields that are relatively empty but have high potential.

Don't forget that you have other goals that you're optimizing for simultaneously, and try not to turn into a status junkie. Also double-check any plans you come up with for the kind of self-sabotage described in the OP.

comment by Will_Newsome · 2012-08-07T03:58:31.002Z · LW(p) · GW(p)

I disagree with you and Anna in this comment.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-11-17T19:35:35.672Z · LW(p) · GW(p)

I don't try to not seek status, I try to channel my status-seeking drive into things that will actually be useful.

Replies from: sark, None, Giles
comment by sark · 2011-11-17T20:54:47.623Z · LW(p) · GW(p)

In other words you try to legislate your actions. But your subconscious will find loopholes and enforcement will slip.

comment by [deleted] · 2011-11-18T00:01:01.154Z · LW(p) · GW(p)

Can you give some examples?

comment by Giles · 2011-11-17T22:36:37.545Z · LW(p) · GW(p)

Mod parent up as much as possible.

;-)

comment by pjeby · 2011-11-17T15:33:11.157Z · LW(p) · GW(p)

Hm, while I'm flattered to have provided a springboard for this discussion, I find it ironic that most of the discussion thread consists of either status-seeking arguments, or else people agreeing that this is a Serious Problem -- and implicitly noting how useful it will be in showing how hard they're trying to overcome it. ;-)

AFAICT, nobody is asking how it can be fixed, whether it can be fixed, or actually proposing any solutions. (Except of course in the original discussion you linked to, but I don't get the impression anybody from this post is really reading that discussion.)

(For anyone who is interested in that, this post offers some pointers.)

Replies from: None, DSimon
comment by [deleted] · 2011-11-17T15:43:22.520Z · LW(p) · GW(p)

AFAICT, nobody is asking how it can be fixed, whether it can be fixed, or actually proposing any solutions.

That was my first impulse, but I wondered why Kaj hadn't included any solutions and then wondered if this even is a problem that needs fixing. Isn't it a flaw of many thinkers that if you give them a question, they try to answer it?

Replies from: Kaj_Sotala, Vladimir_Nesov
comment by Kaj_Sotala · 2011-11-17T16:03:55.250Z · LW(p) · GW(p)

but I wondered why Kaj hadn't included any solutions

I've been somewhat helped by simply realizing the problem. For example, recently I was struggling with wanting to study a lot of math and mathy AI, because that's the field that my brain has labeled the most prestigious (mostly as a result of reading Eliezer et al.). When I realized that I had been aiming at something that I felt was prestigious, not something that was actually my comparative advantage, it felt like a burden was lifted from my shoulders. I realized that I could actually take easier courses, and thereby manage to finish my Master's degree.

Replies from: None
comment by [deleted] · 2011-11-17T16:24:22.325Z · LW(p) · GW(p)

My understanding is that the quote "It's better to be a big fish in a small pond than a small fish in a big pond" is substantially related to status.

If I try to apply it to your situation to find isomorphisms, I find a lot:

Rather than being a small fish (struggling with math) in a big pond (Eliezer et al.), you want to be the big fish (actually my comparative advantage) in the small pond (take easier courses).

Considering this, are you sure you've left the status framework? If so, why?

(Edited after comment from TheOtherDave for brevity.)

Replies from: Eliezer_Yudkowsky, Kaj_Sotala, TheOtherDave
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-11-17T19:33:21.435Z · LW(p) · GW(p)

Comparative advantage is eating the sort of food that most greatly increases your fish size in the pond whose size implies the greatest marginal payoff for adding fish of the size you can become if you enter that pond.

Replies from: None
comment by [deleted] · 2011-11-17T20:54:03.978Z · LW(p) · GW(p)

When I combine what you said with:

I don't try to not seek status, I try to channel my status-seeking drive into things that will actually be useful.

I think I may have dissolved my confusion. You could separate it out into two pieces:

1: Comparative advantage - An Optimization Process

2: Things that will actually be useful. - Being Friendly

My confused feeling seems like it might have been from setting these things as if they were opposed and you could only maximize one.

But if you figure the two are multiplied together, it makes much more sense to attempt to balance both correctly, to maximize the result.

Utility functions aren't quite as simple as multiplying two numbers, but the basic idea of maximizing the product of comparative advantage and usefulness sounds a lot more reasonable in my head than maximizing one or the other.

Thanks!

comment by Kaj_Sotala · 2011-11-17T16:36:41.221Z · LW(p) · GW(p)

I want to pursue my comparative advantage because that's the best way that I can help SIAI and other good causes, regardless of status considerations. Pursuing mathy stuff is only worthwhile if that's my best way of helping the causes I consider valuable.

Or to put it more succinctly: if being a big fish in a small pond, or even a small fish in a small pond, lets me make money that can be used to hire big fish in big ponds, then I'd rather do that than be a small fish in a big pond.

(I won't try to claim that I've left the status framework entirely, just to some extent on this particular issue. Heck, I'm regularly refreshing this post to see whether it's gotten more upvotes.)

Replies from: None
comment by [deleted] · 2011-11-17T17:11:53.221Z · LW(p) · GW(p)

That's a fair point, but because money is so fungible, it's exactly the same kind of statement that you would be making if you were in fact selfish and didn't care about existential risk at all -- in the same sort of way that both a new FAI and a new UFAI might have "ask for some computing power" among their early tasks.

So while that may be the right thing to do, I'm not sure that it can be taken, in and of itself, as evidence that you care more about existential risk than status. Although, if you take that into account, then it really does work: you aren't getting the status that you would get by immediately helping SIAI; you are instead forgoing that for a later boost that will really help more.

Honestly, the more I talk about this topic, the less I feel like I actually have any concrete grasp of what I'm saying. I think I need to do some more reading, because I feel substantially too confused.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2011-11-17T17:24:58.763Z · LW(p) · GW(p)

True. It also isn't very reassuring to know that some of the paths which I'm now pursuing will, if successful, give me high status (within a different community) in addition to the status boost one gets from being rich. I do know that I'm still being somewhat pulled by status considerations, but at least now I'm conscious of it. Is that enough to avoid another hijack? Probably not merely by itself. I'll just have to try to be careful.

Replies from: pjeby
comment by pjeby · 2011-11-17T17:32:31.140Z · LW(p) · GW(p)

Why are you even trying to avoid status considerations? How does avoiding status considerations help you reach your instrumental goals?

Or, more precisely: what makes you think that conscious awareness and attempting to avoid status considerations will be any more successful at changing your actual behavior than any other activity undertaken via willpower?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2011-11-17T17:48:12.603Z · LW(p) · GW(p)

I'm not trying to avoid status considerations. I'm only trying to avoid them hijacking my reasoning process in such a way that I think the best ways of achieving status are also the best ways of achieving my non-status goals.

I can't completely ignore status considerations, but I might be able to trade a high-status path that achieves no other goals for a path that is somewhat lower in status but much better at achieving my other goals. But that requires being able to see that the paths actually have different non-status outcomes.

Replies from: rysade
comment by rysade · 2011-11-18T08:50:46.532Z · LW(p) · GW(p)

This is very clear. Others should refer back to this for a refresher if the topic becomes confusing. I know it's set my head spinning around sometimes.

comment by TheOtherDave · 2011-11-17T16:33:30.764Z · LW(p) · GW(p)

...but not edit.

Replies from: None
comment by [deleted] · 2011-11-17T16:55:46.786Z · LW(p) · GW(p)

(Edited after comment from pjeby for brevity.)

I suppose I could simplify this to "There are layers of status seeking. So it's very easy to think you aren't making a Status0 play, because you are making a Status1 play, and this can recurse easily to Status2, Status3, or Status4 without conscious awareness."

Replies from: TheOtherDave, pjeby
comment by TheOtherDave · 2011-11-17T18:04:01.181Z · LW(p) · GW(p)

Erg. That sounds really insane. Which is bizarre, because although it sounds insane when I actually say it, my brain normally handles it without too much self awareness, and could go back to doing so if I wasn't specifically trying to analyze it in the context of this discussion.

FWIW, that sense of "this sounds insane when I say it explicitly but feels natural if I don't think about it" is an experience I often have when I am becoming aware of my real motives and they turn out to conflict with preconceived ideas I have about myself or the world. Usually, either the awareness or the preconceived ideas tend to fade away pretty quickly. (I endorse the latter far more than the former.)

comment by pjeby · 2011-11-17T17:22:11.316Z · LW(p) · GW(p)

Except, pjeby essentially said that "But if you were a truly good person, you would acknowledge that you were a status seeking hypocrite."

Uh, no. That is so far off from what I said that it's not even on the same planet.

See, "good" and "hypocrite" are just more status labels. ;-)

What I was saying is, if you acknowledge your actual goals, you might have a better chance of sorting out conflicts in them. Nowhere does labeling yourself (or the goals) good or bad come into it. In fact, in the discussion on solutions, I explicitly pointed out that getting rid of such labels is often quite useful.

And I most definitely did not label anyone's goals hypocritical or advise them to aspire to goodness. In fact, I said that the original questioner's behavior may well have been optimal, given their apparent goals, provided that they didn't think too much about it.

In much the same way that your comment would've been more workable for you, had you not thought too deeply about it. ;-)

Replies from: None
comment by [deleted] · 2011-11-17T18:22:21.038Z · LW(p) · GW(p)

In much the same way that your comment would've been more workable for you, had you not thought too deeply about it. ;-)

Upon additional retrospection (and after lunch), I agree. I'll edit those down to the more workable parts.

Since there doesn't appear to be a way to do partial strikethrough, I guess I can just save the removed/incomplete parts in a text file if for some reason anyone really wants to know the original in the near future.

comment by Vladimir_Nesov · 2011-11-17T15:54:58.170Z · LW(p) · GW(p)

Isn't it a flaw of many thinkers that if you give them a question, they try to answer it?

It's also a Fully General Argument (and Excuse) for not solving problems.

Replies from: None
comment by [deleted] · 2011-11-17T15:58:49.359Z · LW(p) · GW(p)

Akrasia has been talked about a lot, with little progress. This approach doesn't seem useful, maybe because it's solving the wrong problem. You are right about my comment being too general, though, and I retract the claim as stated.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-11-17T16:06:54.129Z · LW(p) · GW(p)

Akrasia has been talked about a lot, with little progress.

Agreed that this strongly argues against thinking up new amazing solutions just to put them in comments to the post. ("Hold off on proposing solutions" seems usually misused. An injunction closer to the present topic is Beware of Other-Optimizing.)

comment by DSimon · 2011-11-24T04:29:07.968Z · LW(p) · GW(p)

This is a very clever statement, and therefore I accord you higher status, as you were hoping for when you wrote it. ;-)

Replies from: pjeby
comment by pjeby · 2011-11-24T05:57:35.183Z · LW(p) · GW(p)

This is a very clever statement, and therefore I accord you higher status, as you were hoping for when you wrote it. ;-)

Actually, I was hoping to help people. If you accord me status, but don't actually use any of the information I gave, then you are frustrating my hopes rather than satisfying them.

comment by jhuffman · 2011-11-18T13:44:03.064Z · LW(p) · GW(p)

I've been reading Robin Hanson for five years or so and while I could often notice tendencies he describes that I found in myself, the comprehensiveness of the problem just hadn't come home to me. Just about everything I do is motivated in whole or in a large part by status seeking, and for some reason I didn't know that until just now.

Replies from: andyantti
comment by andyantti · 2011-11-19T09:27:06.865Z · LW(p) · GW(p)

Humans are social animals. We try to rationalize our motives based on what we want and how we can make it look to the outside. We go so far as to lie to ourselves to reach a middle ground.

For example, how many people actually say they work hard because they are greedy? No, you won't find such people explaining their motives that way. Instead they say they are hard-working, want to excel, and are gifted at what they do. Why couldn't you be a mediocre person who is greedy and just works very hard to get loads of money?

That isn't as comforting, though, and society also often rewards us for explaining our motives in a somewhat better light.

comment by Jolima · 2011-11-18T22:26:47.838Z · LW(p) · GW(p)

This is probably what I've been struggling with the most during my life. I'm starting to feel like I'm close to reaching a balance in overcoming it though.

Early on my primary goal in life was being Good. Along with a bunch of other traits, I deemed status seeking and signalling as Evil and strove never to do it.

That... is hard to do, and of course I didn't succeed fully. What I did manage was to become terribly passive and self-effacing: I second-guessed any activity I engaged in even as I was doing it, and abandoned anything I recognized as signalling or status-seeking unless I could come up with a convincing reason why it was objectively good. In the last few years I have reconsidered somewhat. I still have a gut instinct against it, but I have slowly changed my personality to accept and then embrace it, since I recognized that doing so would make me a better person.

I guess this is adding to the other comments that, yes, status and signalling are mind-killers, and the first step is to notice and acknowledge that you are participating in them. The second step isn't to suppress it, though, but to shape it and use it to fit who you want to be.

I still hate bragging*, so to balance the positive signaling I just did, I'll add that another, less idealistic part of my passive behavior was and probably still is the anti-motto "If you don't try, no one can judge your goals or blame you for failing".

*and hate that saying so is itself bragging** :)

**recursively

comment by daenerys · 2011-11-17T21:13:40.410Z · LW(p) · GW(p)

I seem to remember reading that males tend toward status-seeking behaviors more than females do. Or maybe it was that women seek status in a more social context. Either way, I can't find it now.

But my personal experiences are very different. Anything I've done that you could consider "high-status", I've only done because it was pretty much thrust at me. You mentioned that you disliked doing low-status work, but for me, even when I went into engineering (because my family didn't support me going into social work), my dream job was to work for a very small engineering firm or branch that needed an assistant who could do all sorts of tasks: someone smart enough to understand the material, but also willing to sit down and do the menial labor, from technical writing to giving presentations. That's still something I would love to do.

I guess what motivates me personally in my work is the desire to be appreciated, which is why I love child and disability care so much, and dislike my other job, which is high pay but low usefulness. But it seems like I am completely in the minority here, and I don't know if that is because:

a) This site is dominated by status-seekers - perhaps because of the style (debating), substance (rationality), or demographics (male)

b) The people who commented also happen to be status-seekers - perhaps because those who weren't didn't feel compelled to write

c) Something else

Replies from: None, pjeby, Avnix
comment by [deleted] · 2011-11-18T03:18:01.324Z · LW(p) · GW(p)

Status doesn't exist in a vacuum. The audience matters. While high pay regardless of usefulness will win you status in mainstream society, it certainly will not with, say, the Less Wrong audience. Or in the Missionaries for Charity. Similarly, people with high status in a specific subgroup may be considered downright weird in mainstream society.

So perhaps you're optimising for status with your target audience.

Replies from: daenerys
comment by daenerys · 2011-11-18T04:20:12.930Z · LW(p) · GW(p)

There are also jobs that are high-pay but low-status in any audience or society.

Replies from: dlthomas
comment by dlthomas · 2011-11-18T05:38:06.287Z · LW(p) · GW(p)

I am so far failing to think of any.

Replies from: MileyCyrus, None, CuSithBell
comment by MileyCyrus · 2011-11-18T08:56:01.656Z · LW(p) · GW(p)

Truckers. Military contractors. Strippers.

Replies from: jhuffman, dlthomas, MarkusRamikin
comment by jhuffman · 2011-11-18T13:46:16.453Z · LW(p) · GW(p)

Truckers are highly paid?

comment by dlthomas · 2011-11-18T14:37:46.623Z · LW(p) · GW(p)

All three of these are low status in many audiences/societies. I think that for each, however, there exists an audience that accords them high status.

Replies from: Aleksei_Riikonen
comment by Aleksei_Riikonen · 2011-11-18T22:36:39.863Z · LW(p) · GW(p)

Who considers strippers to be high status?

(Certainly not the actual audience. They just see meat to eat with their eyes, not a person. Even prostitutes are probably respected a lot more on average than strippers, since it's more common that people at least talk to prostitutes, and become more aware that there's a person there.)

Replies from: pjeby, None
comment by pjeby · 2011-11-18T23:01:26.979Z · LW(p) · GW(p)

They just see meat to eat with their eyes, not a person.

Typical mind fallacy, perhaps?

I don't know about you, but if I happen to be watching someone stripping it's much more about the meeting of the eyes than the eyeing of the meat.

Even prostitutes are probably respected a lot more on average than strippers, since it's more common that people at least talk to prostitutes

Well, if you go by the HBO specials they did about both groups, it's actually the other way around. Though really, people formed long-term relationships with their service providers in both groups.

Replies from: Aleksei_Riikonen, buybuydandavis
comment by Aleksei_Riikonen · 2011-11-18T23:18:27.814Z · LW(p) · GW(p)

Typical mind fallacy, perhaps?

Generalizing from one example, rather. Mostly I was going by what I've heard from an acquaintance that worked as a stripper.

comment by buybuydandavis · 2011-12-01T04:01:51.804Z · LW(p) · GW(p)

I don't know about you, but if I happen to be watching someone stripping it's much more about the meeting of the eyes than the eyeing of the meat.

It's not necessarily about the eyes for me, but If the stripper is any good, it's more about emotional expression than flapping their meat around. Sadly, many strippers dance like meat sacks. What worries me is that they may just know their market better than I do.

comment by [deleted] · 2011-11-18T23:37:29.183Z · LW(p) · GW(p)

I don't know about "high status", but Roissy discusses here whether it is better to insinuate, for the purposes of attracting another woman, that you've dated strippers or lawyers in the past (his conclusion: it depends), and he recounts a failed attempt to pick up an attractive stripper here.

Quotes:

The reason stripper DHVs work on nearly all women to a greater or lesser degree is because, contrary to the erroneous belief that women wouldn’t be impressed by what men are impressed by, a stripper is REAL WORLD evidence that the man who dated her has preselection value, i.e. reproductive fitness. Strippers are perceived, (whether the perception is valid is irrelevant), as hot girls who are out of reach of the average man. A man who has [fornicated with] a stripper must therefore bring something very special to the table; namely, his irresistibility.

I would eat my own eyes if I ever see Roissy or anyone else say the same about prostitutes (dating them when they aren't on the job).

Naturally I would not be going over to the stage like every other hard up loser. Although the girls are the ones naked before the men, they have all the power [...] Walking over to the stage to watch her dance and give her dollars would have been the equivalent of neutering myself [...] I stayed put at the bar and turned my back on [the girl], only looking over for a second to smile at her.

So although strippers are low class in general, the men who watch them put them in a high status position relative to themselves. The same cannot be said of prostitutes, who are lower status than just about anyone in society including the men who use them. Prostitution is by far the most degrading occupation for a woman.

Replies from: Kaj_Sotala, wedrifid
comment by Kaj_Sotala · 2011-11-19T09:49:57.219Z · LW(p) · GW(p)

The same cannot be said of prostitutes, who are lower status than just about anyone in society including the men who use them.

Some prostitutes have high status with their audience. Quickly translated from Punainen eksodus, a PhD sociology thesis on Finnish prostitution:

Sex workers find that their position favors them: of 25 interviewed workers, 13 felt they were in a dominant position as compared to the customer. 11 felt that the power was evenly distributed so that the sex worker sets the limits inside which the customer makes his choices. The estimate may be related to the relatively good position of Finnish sex workers, but the feeling of power has also been documented in the international literature:

"It [sex work] really challenged the traditional idea of an orgasm as something that the man "gives" me. According to the traditional view, the woman isn't supposed to control the sex act, but to be the passive recipient of whatever happens. But as the prostitute you're the one who sets the pace, you're the one who controls the whole situation." (Maryann from Santa Cruz, interviewed by Chapkins 1997, 85.)

At the time when my material was collected, Finnish sex markets were characterized by demand far outstripping the supply, a "seller's market", forcing the customers to compete with each other for meeting times. A reasonable employment situation and social security combined to reinforce the sex worker's negotiation position, for she could select her customers and could often also decide to stop doing sex work.

According to the sex workers, they are also empowered by the internal logic of the relationship. The buyer needs to pay for the meeting and wants it. For many men, a successful act requires the sex worker to be aroused (or to at least feign arousal). On the other hand, the seller can be happy when she gets her money, and an advance payment is not tied to the customer's satisfaction. Even if the seller is doing the job for financial reasons and is therefore reliant on the income, her earnings are not tied to any particular customer.

"And if I don't have fun with the customer, I can tell after two times, then I'll tell the customer straight that this isn't working. I'll tell him directly that 'you should start getting this service from somewhere else, I'm not going to offer it anymore'. I've needed to say that some tens of times, but it's true. Just like my customers have the right to choose me, I have the right to choose my customers." (Interview, Tiina.)

"I've grabbed some guys by the neck and thrown them out after they've acted inappropriately. At the start of each meeting I'll explain my rules and if someone breaks them, I warn them on the first time and throw them out on the second." (Interview, Maija.)

[...]

"Somehow earlier I was a little afraid, in a way I was afraid of and kept kind of a respectful distance to men. Now it's the other way around. Men are the ones in need, and I can treat them kinda the way it happens to suit me. If someone isn't suitable for me, then they aren't and they better accept that. They're in a dominated position to me. I'm the one who decides and does." (Interview, Kaarina.)

Based on these experiences, it does not seem like people would get into sex work in order to relive earlier traumas. In my material, a history of bad experiences with sex comes up specifically as an opposite to one's own role as a seller of sex. After the experience of feeling dominated, it may be liberating to realize that there's nothing wrong in demanding personal satisfaction, and that many men are even willing to pay extra for it.

Replies from: None
comment by [deleted] · 2011-11-19T11:37:42.208Z · LW(p) · GW(p)

Interesting. I suppose I had in mind the kind of prostitute who has no choice of customers. On the other hand a prostitute (or "escort") who turns undesirable men down is not too far away from being a run-of-the-mill promiscuous woman who extracts material benefits from her suitors. The prostitute in this case has merely formalised her revenue stream.

In my defense, I was responding to this claim: "Even prostitutes are probably respected a lot more on average than strippers", and I don't believe that the average prostitute is in such a comfortable position. I also think that the feeling of power or control over the situation is not really the same thing as status. If you asked the Finnish prostitutes' customers whether, given the choice, they would prefer their own daughters to be prostitutes or strippers (whom the men are not allowed to touch), then you might get a different perspective.

comment by wedrifid · 2011-11-19T03:28:39.956Z · LW(p) · GW(p)

I would eat my own eyes if I ever see Roissy or anyone else say the same about prostitutes (dating them when they aren't on the job).

I advise you to be careful to avoid reading anything further related to this subject. Because I have seen just that!

comment by MarkusRamikin · 2011-11-18T09:31:09.052Z · LW(p) · GW(p)

Military contractors are low status?

Replies from: MileyCyrus
comment by MileyCyrus · 2011-11-18T14:58:56.714Z · LW(p) · GW(p)

Compared to members of the actual military (who often do comparable work), contractors are paid much better and respected much less.

comment by [deleted] · 2011-11-18T07:28:41.571Z · LW(p) · GW(p)

Accountants and the like have high median salary but are widely considered to be boring people. I don't know if this is what daenerys was thinking of, but it's the best example I can think of.

comment by CuSithBell · 2011-11-18T08:15:11.688Z · LW(p) · GW(p)

Adam Smith said that certain jobs - executioner, for example - were well paid because they were "detestable".

Replies from: dlthomas
comment by dlthomas · 2011-11-18T14:40:21.737Z · LW(p) · GW(p)

Agreed, but this effect will be observed when relevant audiences deem the job low status; it does not require all audiences to.

comment by pjeby · 2011-11-17T23:19:33.618Z · LW(p) · GW(p)

I guess what motivates me personally in my work is the desire to be appreciated, which is why I love child and disability care so much, and dislike my other job which is high pay, but low usefulness. But it seems like I am completely in the minority here

Appreciation is part of the same broad family of major human drives, but it tends to motivate more actual action. ;-)

comment by Sweetgum (Avnix) · 2022-07-09T23:11:05.290Z · LW(p) · GW(p)

I guess what motivates me personally in my work is the desire to be appreciated

As I understand it, “status” essentially is how much people appreciate you. So you’re basically just describing the desire for status here.

comment by ChrisHallquist · 2011-11-18T01:06:53.978Z · LW(p) · GW(p)

This is an excellent post.

I'll toss in another example: volunteering vs. donating to charity. People like the idea of volunteering, even when they could do more good by working longer hours and donating the money to charity.

When I first entered college, I had the idea that I'd go to med school and then join Doctors Without Borders. Do a lot of good in the world, right? The problem was that, while I'm good at a lot of things, biology is not my strong suit, so I found that part of the pre-med requirements frustrating. I ended up giving up and going to grad school in philosophy.

To maximize my do-gooding, I would have been better off majoring in Computer Science or Engineering (I'm really, really good at math), and committing to giving some percentage of my future earnings at a high-paying tech job to charity. Alas...

Now whenever I meet someone who tells me they want to go into a do-gooding career, I tell them they'd be better off becoming lawyers so they can donate lots of money to charity. They never like this advice.

Replies from: homunq, John_Maxwell_IV, mwengler
comment by homunq · 2011-11-18T05:30:17.468Z · LW(p) · GW(p)

Becoming a lawyer is an extremely bad recipe for becoming rich these days.

Replies from: ChrisHallquist
comment by ChrisHallquist · 2011-11-18T06:21:11.799Z · LW(p) · GW(p)

Yeah. What are the MD specialties that make all the money? Radiology, Oncology...

comment by John_Maxwell (John_Maxwell_IV) · 2011-11-22T07:22:15.388Z · LW(p) · GW(p)

It's pretty interesting that you describe yourself as being really good at math but went into a career that wasn't math-oriented. In myself, I've observed a trend of regarding things that I'm already good at as things that aren't especially interesting or important. Additionally, part of me likes the idea of being able to signal having a high aptitude at something that I don't bother to exploit. I wonder how many great scientists and creative types humanity has lost out on as a result of people ignoring the things they're good at because they seem too easy.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2011-11-22T08:49:07.302Z · LW(p) · GW(p)

I seem to recall hearing somewhere an anecdote about a scientist who decided to dabble in some particular field. He immediately got a lot of attention and cites for his early papers, and then decided that if he could excel in the field this easily, the field wasn't worth his time.

Replies from: wedrifid
comment by wedrifid · 2011-11-22T09:54:49.967Z · LW(p) · GW(p)

I seem to recall hearing somewhere an anecdote about a scientist who decided to dabble in some particular field. He immediately got a lot of attention and cites for his early papers, and then decided that if he could excel in the field this easily, the field wasn't worth his time.

Which is basically a terrible idea (on his part, not yours, obviously). If he goes back to a field where it is hard to contribute, it is likely that the field is either further into diminishing returns or already saturated with scientists. If the field he can excel in easily is worthwhile as a science in general and gives a satisfactory level of prestige, then staying in it is best both for himself and for science in general. If he needs a challenge, he can just find the hardest, most critical part of the field and take that to the next level. If the whole field is not yet a fully solved problem, then there is plenty of challenge remaining.

comment by mwengler · 2011-11-23T19:48:19.465Z · LW(p) · GW(p)

Now whenever I meet someone who tells me they want to go into a do-gooding career, I tell them they'd be better off becoming lawyers so they can donate lots of money to charity. They never like this advice.

This is what Warren Buffett has done. He quite explicitly said over the years that he wasn't going to donate while getting richer, because his ability to compound his wealth was above average, and so he would do more net good by giving it away when he was done. (As it turns out, he gave away stock in his company, which has a very low effect on "shrinking the pie" that he is working with.)

comment by [deleted] · 2011-11-19T04:05:13.219Z · LW(p) · GW(p)

Let me ask a rude question: what makes you so sure you want to "do good"? If you do, this would be a most unusual appetite. People do what they want for other reasons, and then they explain it to themselves and others as "doing good." The motivation to "do good" isn't a primary motive. How could it be? From where might it come? To root that sort of motive in nature, one pretty much has to invent some form of moral realism; you must cross the "is" versus "ought" chasm. Now's not the time to address the moralistic illusion, but without the prior need to morally justify one's sense of seeking right, I think moral realism would appear to be the fantasy it is.

One tries to do right but ends up seeking status. Then one asks: how do I weaken or redirect my status seeking? That may seem the obvious problem, but then why would someone who is smart, studies rationality, and tries to apply his conclusions end up failing to achieve his goals?

I don't buy the cynical line of that Dirty Old Obfuscator Robin Hanson: that status is our primary drive. This is a transparent rationalization for its being his primary goal. There are more important drives; call them effectance, competence, or Nietzsche's "will to power." Even "self-actualization" may do in a pinch. You obviously haven't succeeded in engaging any deep interests (in the sense of "intellectual interests", not in the sense of "source of comparative advantage"). As it looks to me, that's your problem.

You're right, of course, that signaling status often distracts from what's productive. And perhaps everyone needs to work on being distracted less. Theoretically, this could be accomplished in one of two ways. One might 1) observe the environmental triggers for status-oriented thinking and decrease one's exposure to them; or 2) find ways to gratify status striving through the objectively more valuable activity. Only 2 seems to have been discussed, but I think it's less important, even unworkable. The problem is that indulging status drives, like most nonhomeostatic (appetitive) drives, increases their strength. If you recognize status seeking as a distraction, you're probably better off limiting your exposure to what precipitates it. (Serving as head of a political party is certainly well calculated to be an effective trigger of status seeking.)

But, while these elements of truth impart to your analysis a sense of truthiness, they don't apply to your situation as you describe it. You weren't merely distracted; you directly subverted your own goals. No situationist tinkering will address a problem that really lies elsewhere. The problem is, it seems to me, that you are so concerned with what you "should do," ethically speaking, that either you don't recognize your intellectual interests or you refuse to follow them.

It is easy to become intellectually enchanted with an idea, whether the Singularity, the Pirate Party, or (for that matter) a religious ideal. But this doesn't mean you believe it with the certainty that your intellect claims. Your balking at the goals you set yourself suggests that beneath your conscious intellect, you are at best indifferent to them; I would go further and say you're probably downright hostile to your professed goals.

Replies from: nshepperd, Kaj_Sotala, PrometheanFaun
comment by nshepperd · 2011-11-19T05:04:50.894Z · LW(p) · GW(p)

The motivation to "do good" isn't a primary motive. How could it be? From where might it come ?

Built in, like all other drives?

Replies from: None
comment by [deleted] · 2011-11-19T06:02:08.868Z · LW(p) · GW(p)

What's built in, plausibly, are specific drives (to comfort a crying baby, to take a clear example) whose gratification overlaps what we're inclined to call good. But these specific drives don't congeal into a drive to do ethical good: "good" isn't a natural property.

Now, you could say that "doing good" is just a "far" view of gratifying these specific drives. But I don't think that's the way it's used when someone sets out to "do good," that is, when they're making "near" choices.

Replies from: None, preferredanonymous
comment by [deleted] · 2011-11-21T21:49:38.350Z · LW(p) · GW(p)

I would tend to take the position that to "do good" is simply to take actions that satisfy (in the sense of maximizing or satisficing output utility, or some approximation thereof) some fixed function of likely great complexity, which we refer to by the handle "morality."

Obviously, we only take those actions because of our luck (in a moral sense) in having evolved to be motivated by such a function. And we are strongly motivated by other things as well. But I don't think it's reasonable to state that because we are motivated, therefore we are not motivated by morality. Of course, you might call me a moral realist, though I don't believe that morality is written in the stars.

comment by preferredanonymous · 2011-11-24T03:58:14.852Z · LW(p) · GW(p)

""good" isn't a natural property."

That's where you're fundamentally wrong.

You can't disprove something by defining it to be non-existent. The term "good" very much describes something real (and natural); otherwise we wouldn't be able to think of it.

Put simply, it's just the act of fulfilling ourselves and our purpose. We have a vague notion of what good actually is, and are misled into believing that it doesn't exist (as in your case), for precisely the reason that we aren't perfect at getting what we need. We get what we want, or what we think we want... which is not necessarily that which is fulfilling.

As such, we all have a drive to do what is good, in that we all have a drive to lead fulfilling lives. Ethics is the problem of actually leading such lives, not the magical creator of some hypothetical property.

Replies from: DaFranker, DSimon, thomblake
comment by DaFranker · 2012-07-26T20:07:51.599Z · LW(p) · GW(p)

Perhaps you should read (or re-read more carefully) the A Human's Guide to Words sequence.

The term "good", by your description, describes something real and natural. Again by your description, X being "real and natural" is required for being able to think of X.

How does any of this reject the statement that "there is no point in eventspace that has the natural 'Good' property"? (Which I infer to be the intended meaning of the statement you call "fundamentally wrong".)

That some event, decision, action, thing, X is "good" is a property of the mind, of the map. If there is a Red-Nosed Wiggin in front of you, and knowledge of this fact rates as +2 on your utility function, this is a property of your utility function, not of the Red-Nosed Wiggin or of the spacetime field "in front of you".

With my understanding of proper or common usage of the term "good", there is no case where "good" is an inherent property of the territory that is unbreakable, inviolable, and definitely belongs to the territory itself, such that no map could ever model this as "not good" without being testably and verifiably wrong.

(I don't really expect a response from the author this comment replies to, but would greatly appreciate any help, hints, tips or constructive criticism of some kind on my above reasoning)

comment by DSimon · 2011-11-24T04:16:02.471Z · LW(p) · GW(p)

The term "good" very much describes something real (and natural), otherwise we wouldn't be able to think of it.

That doesn't seem like a consistently valid rule.

As counter-examples, here are some words that we have thought of, and that we can use consistently and appropriately in context, but that do not describe real or natural things:

  • Unicorn
  • Witch
  • Luminiferous aether
  • Bottomless pit
  • Immovable object
  • Irresistible force
  • Philosopher's stone
  • Faster-than-light communication
  • Ideal gas
  • Frictionless surface
  • Spherical cow
  • Halting oracle
  • "Yo mama's so fat she has different area codes for the phones in her left and right pockets"
comment by thomblake · 2011-11-24T04:05:52.016Z · LW(p) · GW(p)

Ethics is the problem of actually leading such lives

One of the better definitions, and the one in accord with Aristotle. Though perhaps not the most popular definition.

comment by Kaj_Sotala · 2011-11-19T09:17:05.614Z · LW(p) · GW(p)

Even "self-actualization" may do in a pinch. You obviously haven't succeeded in engaging any deep interests (in the sense of "intellectual interests" not the sense of "source of comparative advantage.") As it looks to me, that's your problem. .

You're right in a sense - I have been doing things that I felt were prestigious and world-saving, not necessarily the things that I had a deep, inherent interest in. But when I say that I'm now trying to concentrate on the things that I have a comparative advantage in, I mean things that I have some talent in and which I have a deep, inherent interest in. Being so interested in something that one is naturally drawn to do it, and doesn't need to force oneself to do it while gritting one's teeth, is a big part of having a comparative advantage in something.

comment by PrometheanFaun · 2013-10-13T01:29:25.080Z · LW(p) · GW(p)

Dangit, I wish I knew who this was. I hope their disassociation isn't a sign of evaporative cooling in action.

Replies from: satt
comment by satt · 2013-10-15T21:35:43.417Z · LW(p) · GW(p)

Fortunately the title of the page gives it away: it's srdiamond, who I believe still posts occasionally as common_law.

Replies from: PrometheanFaun
comment by PrometheanFaun · 2013-10-17T04:27:24.594Z · LW(p) · GW(p)

OK, that's got to be a bug..

comment by lessdazed · 2011-11-18T14:02:53.648Z · LW(p) · GW(p)

People commit altruistic acts, and then act selfishly and inconsiderately later in the day, once they feel that they have been good enough that they've earned the right to be a little selfish. In other words, they estimate that they've been good enough at presenting an altruistic image that a few transgressions won't threaten that image.

  1. It's not about their image to others. People who were assigned environmentally friendly products were more selfish than those assigned other products.
  2. The obvious solution is to commit acts of selfishness and inconsiderateness that seem worse than they really are. Any ideas on how to do slight/no/negative evil but feel very evil? Pulling wings from flies, maybe?

Replies from: wedrifid, None, Kaj_Sotala, Armok_GoB
comment by wedrifid · 2011-11-25T19:36:12.273Z · LW(p) · GW(p)

Any ideas on how to do slight/no/negative evil but feel very evil?

I execute binary quantum noise and laugh maniacally!

Replies from: lessdazed, XiXiDu
comment by lessdazed · 2011-11-26T05:55:26.846Z · LW(p) · GW(p)

In multiverse, binary quantum noise execute you!

comment by XiXiDu · 2011-11-25T20:29:24.662Z · LW(p) · GW(p)

Any ideas on how to do slight/no/negative evil but feel very evil?

I execute binary quantum noise and laugh maniacally!

One of my new favorite comments :-)

Reference.

comment by [deleted] · 2011-11-19T02:01:48.807Z · LW(p) · GW(p)

The obvious solution is to commit acts of selfishness and inconsiderateness that seem worse than they really are. Any ideas on how to do slight/no/negative evil but feel very evil? Pulling wings from flies, maybe?

There's a danger of simply getting used to being evil. And it seems quite likely to me, based on the analogy between morality and conscientiousness. When I spend some time doing useful work (good), the resulting feeling of satisfaction makes me much more tempted to start wasting time (evil), because I 'earned' it. However, procrastinating mostly causes me to want to procrastinate even more.

Replies from: Logos01
comment by Logos01 · 2011-11-21T06:49:14.445Z · LW(p) · GW(p)

There's a danger of simply getting used to being evil.

One need only feel "evil", rather than actually be "evil". Hypothetical: try to imagine yourself as a demonic being wearing human skin. Hold yourself to the silly superstitions that people believe about such beings: they cannot enter homes uninvited, are always out to make "bargains", etc. Limit yourself to the harmless categories of these sorts of behaviors, and see how it affects your behavior and thinking.

The point of this being that it magnifies your personal feelings of "wickedness" without actually producing those results.

Replies from: Kaj_Sotala, Strange7
comment by Kaj_Sotala · 2011-11-21T15:44:14.193Z · LW(p) · GW(p)

Of course, this very easily backfires. You might dislike feeling evil, so that feeling evil takes up energy and doesn't leave you any to spare for altruistic acts. Alternatively, it might twist your self-image so that you think you actually are evil, and you start to commit evil acts and become less interested in good ones... or you might think that you aren't doing things that make you feel bad enough yet, so you start doing things that are actually evil.

I expect that getting this to work would require quite an intricate web of self-deception, and most who tried this would simply fail, one way or another.

Replies from: Logos01, lessdazed, lessdazed
comment by Logos01 · 2011-11-21T15:51:02.654Z · LW(p) · GW(p)

I expect that getting this to work would require quite an intricate web of self-deception, and most who tried this would simply fail, one way or another.

Eh. I suspect you're over-thinking it. Capturing the feeling in order to cultivate a proper emotional balance so as to achieve an outcome is a measurably useful phenomenon. If it doesn't work, stop doing it.

comment by lessdazed · 2011-11-23T20:15:37.892Z · LW(p) · GW(p)

When my trip to the Dominican Republic was ending, I was waiting for a bus to take me to the airport. I saw a "limpiabota," a shoe-shine boy, and decided it was a good time to get the mud and dirt off of my hiking boots, regular shoes, and dressier shoes.

They typically ask ten pesos for a shine, but tourists might be asked to pay a few times that, and natives five to ten pesos. In any case these are some of the poorest boys there, and people might give them a five-peso tip on top of whatever they ask. They are desperate for the money and are selling a 'luxury' good that the purchaser doesn't need to buy, so it is possible to negotiate with them. I practiced my Spanish talking him down from the asked-for 30 pesos for the three pairs, and engaged in a tough negotiation, turning away several times and eventually getting him down to seven pesos for the three pairs. I let him shine the shoes I was wearing and gave him the other two pairs, telling him I had put more than seven pesos in the shoe and it was a tip for him to take.

At the airport, everything was sold in dollars, not that I thought I'd much want to buy anything there anyway. I still had a good deal of money left in Dominican pesos, so I put it all in my shoes. A few thousand pesos. The thought of the huge cut they take at the currency exchange counter galls me.

comment by lessdazed · 2011-11-21T16:04:15.503Z · LW(p) · GW(p)

I expect that getting this to work would require quite an intricate web of self-deception

I have a chintzy WWLVD bracelet; it seems to work OK. (WWLVD: What Would Lieutenant Verrall Do.)

It's important not to try and emulate someone actually important like Stalin, as that would entail mostly signing paperwork and sleeping at your desk in your boots amid lapses of mania.

comment by Strange7 · 2011-11-28T10:26:07.691Z · LW(p) · GW(p)

Having independently developed and implemented a related strategy with success, I would like to point out the specific nuance upon which it is most productive to focus:

You are in disguise, deep in enemy territory, and you will have to maintain this disguise for years yet to come.

The slightest slip-up could reveal you, even if no one seems to be looking, or even if the people you know are looking aren't the slightest bit suspicious. Making things up as you go along is not good enough for the long game; infernal instincts could slip out at any time. Repression just means they'll slip out in ways you don't expect. Anything out of character (and of course your character is a paragon, a saint, always generous and wise) might be memorable, anything memorable might be repeated, and anything repeated might reach the ears of the inquisitor who is less than a byte away from identifying you.

The good news is, you know your own true name and the inquisitor doesn't, so it's possible to get away with indulging your unique nature... so long as you're subtle about it. Identify your urges and pursue any reasonable opportunity to indulge them. 'Reasonable' opportunity means a situation where:
1) absolutely nobody gets hurt as a result of your indulgence ("I didn't know" is no excuse, since the wise and benevolent person you're pretending to be would have known), or even feels like they're getting hurt
2) at least one other person benefits from it more than you do, by their own assessment
3) the urge is satisfied in a way that will linger, rather than dropping off suddenly, to minimize desensitization.

There will be situations where advancing your own interests above all others seems like the only alternative to mewling incompetence, or where you can only choose between who to hurt. Be especially careful at such times, and do not allow yourself to savor them. The inquisitor is watching.

comment by Kaj_Sotala · 2011-11-18T19:33:02.530Z · LW(p) · GW(p)

It's not about their image to others. People who were assigned environmentally friendly products were more selfish than those assigned other products.

Right - the primary mechanism is more through one's self-image than explicit status-seeking.

comment by Armok_GoB · 2011-11-19T19:21:47.651Z · LW(p) · GW(p)

Internet trolling. Writing gorefics. Lacking hygiene. Lying to strangers.

And for the same thing but for rationality instead of morality: engaging in minor superstitions, fighting dirty in internet flame wars, doing sloppy math.

Replies from: mwengler, lessdazed
comment by mwengler · 2011-11-23T19:43:39.703Z · LW(p) · GW(p)

Pee in the sink.

Replies from: Armok_GoB
comment by Armok_GoB · 2011-11-25T18:27:31.291Z · LW(p) · GW(p)

Is this one for rationality or morality? :p

comment by lessdazed · 2011-11-19T20:35:14.958Z · LW(p) · GW(p)

Lacking hygiene

My intuition is that passive things such as this and the procrastination Gabriel mentioned won't work.

And for the same thing but for rationality

Clever! I will think about it some rather than giving my snap judgement.

Replies from: Armok_GoB
comment by Armok_GoB · 2011-11-19T22:19:45.251Z · LW(p) · GW(p)

Another kinda related trick, although it might be dangerous and hard to pull off:

Convince yourself of certain things you know to be complete bull**, and forget which ones they are. This way you'll KNOW you can't rely on cached thoughts, and that your being entirely convinced something is true doesn't imply that it actually is true.

comment by ChrisHallquist · 2011-11-18T03:19:47.042Z · LW(p) · GW(p)

Question: do you have any advice for people who want "to do something about Singularity" but are afraid of falling into the trap you describe?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2011-11-18T06:22:08.456Z · LW(p) · GW(p)

Just spending more time trying to figure out whether your actions actually make sense ought to help. More specifically, try to e.g. go through the steps listed at Humans are not automatically strategic to figure out what your comparative advantage is. Also, like other people have suggested, try to align your status-seeking drive with doing things that are actually beneficial. If you're going to embark on a life-long quest, you'll need every possible motivational tool you can use, status considerations being one of them.

comment by [deleted] · 2011-11-18T19:33:22.379Z · LW(p) · GW(p)

You can't opt out of signalling any more than you can opt out of going to the bathroom. We all learn as children how to manage the scatological aspects of living on earth. Status is a completely analogous arena, except that everyday thoughts about it are even more sublimated and subconscious. Everyone knows the limits of physical hygiene. The limits of moral hygiene are no less biological or immutable, and no less unpleasant to discuss frankly.

Replies from: Kaj_Sotala, dlthomas, John_Maxwell_IV, None
comment by Kaj_Sotala · 2011-11-18T21:19:48.494Z · LW(p) · GW(p)

You can't opt out of signaling, but you can try to avoid having it hijack your reasoning.

comment by dlthomas · 2011-11-18T19:34:32.709Z · LW(p) · GW(p)

moral hygiene

Might "social" be more accurate?

Replies from: None
comment by [deleted] · 2011-11-18T19:49:46.723Z · LW(p) · GW(p)

Equally accurate and less specific. (Unless the phrase has another connotation?) I had in mind Sotala's discussion of status-seeking as an obstacle to doing good.

comment by John_Maxwell (John_Maxwell_IV) · 2011-11-22T07:12:10.123Z · LW(p) · GW(p)

Sure, but you could start optimizing to impress an all knowing, completely rational historian from the far future.

Virtues I've instilled in myself that I've found useful: don't be a hypocrite; don't be one of those people who are all talk and no action; treat optimal behavior as a virtue (while keeping in mind that optimal behavior may change based on your emotional state: for example, if you're worn out, the optimal action is likely to be focusing on rejuvenating yourself instead of working more).

Another thing that had a positive impact on my personality was spending a lot of time playing management oriented computer games like Railroad Tycoon and Civilization, then deciding that I was wasting my time and that I wanted to apply the same optimization oriented thinking that I was using in the game to real life.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2011-11-22T20:39:17.822Z · LW(p) · GW(p)

Sure, but you could start optimizing to impress an all knowing, completely rational historian from the far future.

I don't know if an imaginary person is enough for our instincts. We should also seek the company of rational people, so our instincts can focus on them.

Virtues I've instilled in myself that I've found useful: ...

The problem with signalling is that it can probably subvert any activity, and if something is called a "virtue", then it seems like a very good target. If you are not careful, you may find that you, for example, describe yourself as a worse person than you really are, to signal high non-hypocrisy.

By the way, I would like to read an article about applying specific lessons to specific computer games in real life.

comment by [deleted] · 2011-11-19T01:46:09.445Z · LW(p) · GW(p)

Citation needed. Unlike the gastrointestinal tract, brains have a built-in capacity to change their mapping between input and output over time, so I can't accept that it's impossible to do anything about behavior X just because it's impossible to do anything about poop.

Replies from: None
comment by [deleted] · 2011-11-19T02:10:10.912Z · LW(p) · GW(p)

What is behavior X?

comment by mjr · 2011-11-17T10:54:16.657Z · LW(p) · GW(p)

Good show.

To nitpick just a bit, one can genuinely care about a cause, just care about being a lazy piece of shit even more. (I certainly value being lazy a lot, to some of me's annoyance.) Not that that invalidates people generally caring about the appearances more.

Replies from: Giles
comment by Giles · 2011-11-17T22:52:03.016Z · LW(p) · GW(p)

I think it's possible to distinguish this case from the empty signaling that Kaj_Sotala describes. People who "genuinely care about a cause, just care about being a lazy piece of shit even more" would spend their non-lazy time researching the optimal charity (with respect to the cause that they care about), set up a monthly donation which is as large as they can afford without impacting their laziness potential, and then go back to being lazy. Obviously very different behavior from the usual appearing-to-care.

Replies from: MatthewBaker
comment by MatthewBaker · 2011-11-18T00:41:02.327Z · LW(p) · GW(p)

This is me, but I also have several long term money making ideas that should greatly forward the cause and so far two of them seem to be progressing nicely.

comment by Aleksei_Riikonen · 2011-11-17T11:29:12.600Z · LW(p) · GW(p)

so you decide to only do the least prestigeful work available, in order to prove that you are the kind of person who doesn't care about the prestige of the task!

Another variant is to minimize how much you directly inform your comrades of the work you're doing. You tend to get more prestige when people find out about your work through accidental-seeming ways instead of through you telling them. Also, you have aces up your sleeve with which you can play the martyr ("Well, I have been doing such and such and you didn't even know about it!").

comment by daenerys · 2011-11-17T21:20:59.240Z · LW(p) · GW(p)

Also, a proposed solution with regard to "How to be Altruistic" (in a way that DOESN'T make you feel like you've "been good enough that they've earned the right to be a little selfish").

I think that the best way to avoid this pitfall is to incorporate whatever altruism that you want to do into your way of life, so that it doesn't feel like a one-time shot.

Example: Instead of donating a lump sum of $50 to the charity of your choice, see if there's a way to have a $1 donation made automatically every week.

Vegetarianism is another example. Once you actually become a vegetarian you don't feel like you're doing any further good just by continuing to do what you always do.

I don't have any evidence for it, just personal experience.

Replies from: homunq, Nornagest
comment by homunq · 2011-11-18T05:35:26.400Z · LW(p) · GW(p)

Matthew 6:3 seems apropos.

comment by Nornagest · 2011-11-17T22:16:20.680Z · LW(p) · GW(p)

That sounds like it'd work, but at the cost of eliminating most of the fuzzies you'd get from your altruism and most of your donation's social signaling value. (The tax paperwork might also be more complicated if you're claiming a deduction, but that's less important.) As such I suspect it'd be a hard sell for anyone whose altruism isn't a terminal value but is rather a consequence of one of those functions, which I expect is a substantial fraction of all the altruists out there. Seems like it has the potential to be a good idea for LWers, though.

Setting it up to mail you periodic summaries of your donations over some conveniently large period of time would fix this, but would also have the potential to reestablish the "earned selfishness" problem we're trying to avoid.

As an aside, setting up that kind of repeating donation isn't likely to be that difficult. Most banks will allow you to schedule repeating payments to some entity even if you aren't being billed; I pay my dojo dues that way.

Replies from: dlthomas
comment by dlthomas · 2011-11-17T22:40:25.257Z · LW(p) · GW(p)

That sounds like it'd work, but at the cost of eliminating most of the fuzzies you'd get from your altruism and most of your donation's social signaling value.

Doesn't that inherently make it a stronger signal when observed?

Replies from: Nornagest
comment by Nornagest · 2011-11-17T22:44:00.675Z · LW(p) · GW(p)

Choosing to donate in a self-thankless way might in general, but in this case I think that's dominated by the convenience factor and per-donation triviality. Most people would probably be less impressed by someone who's donated $50 every month for the last year by some automatic process than by someone who's made a $500 lump donation: the former is higher in absolute terms and makes for a stabler cash flow to the charity, but also carries a fairly strong message of "I don't want to be inconvenienced by my altruism".

Replies from: dlthomas
comment by dlthomas · 2011-11-17T22:53:29.190Z · LW(p) · GW(p)

Interesting, and quite possibly correct.

comment by Incorrect · 2011-11-17T23:41:15.271Z · LW(p) · GW(p)

This is probably a very dangerous idea but I think it's worth mentioning if only for the purpose of discussion:

What if you completely sabotage your signalling ability by making yourself appear highly undesirable? Then your actions will not be for the purpose of signalling, since signalling would be futile.

Replies from: AnnaSalamon, lessdazed
comment by AnnaSalamon · 2011-11-17T23:56:10.798Z · LW(p) · GW(p)

I've seen this tried, for this stated purpose. My impression of the results was that it did not at all lead to careful, on-the-margins consequentialist thinking and doing. Instead, it led to a stressed out, strung out person trying desperately to avoid more pain/shame, while also feeling resentful at the world and themselves, expecting a lack of success from these attempts, and so acting more from local self-image gradients, or drama-seeking gradients, than from any motives attached to actual hope of accomplishing something non-immediate.

"Signaling motives" can be stuck on a scale, from "local, short-sighted, wire-heading-like attempts to preserve self-image, or to avoid immediate aversiveness or seek immediate reward" to "long-term strategic optimization to achieve recognition and power". It would be better to have Napoleon as an ally than to have a narcotics addict with a 10 minute time horizon as an ally, and it seems analogously better to help your own status-seeking parts mature into entities that are more like Napoleon and less like the drug addict, i.e. into entities that have strategy, hope, long-term plans, and an accurate model of the fact that e.g. rationalizations don't change the outside world.

Replies from: Fleisch, None, Will_Newsome, Will_Newsome, Will_Newsome
comment by Fleisch · 2011-11-18T21:03:56.633Z · LW(p) · GW(p)

tl;dr: Signalling is extremely important to you. Doing away with your ability to signal will leave you helplessly desperate to get it back.

I think that this is a point made not nearly often enough in rationalist circles: Signalling is important to humans, and you are not exempt just because you know that.

comment by [deleted] · 2011-11-18T03:23:14.067Z · LW(p) · GW(p)

Upvoted. I'd love to hear your thoughts on how one could slide that scale more towards the "long-term strategic optimization" end? Assuming that it is possible, of course.

comment by Will_Newsome · 2012-08-07T02:05:49.810Z · LW(p) · GW(p)

Heyo, after your correction I still think the main thrust of my reply isn't changed. Your correction mostly just makes me wrong to think that you argued that people that disendorse their status-seeking parts don't have long-term plans, rather than that their long-term planning rationality is worsened. I think I still disagree that their planning is worsened though, but my disagreement is sort of subtle and maybe not worth explaining given opportunity costs. I also stand by my main and mostly-orthogonal points about the importance of not dealing with demons (alternatively, "not making concessions to evil" or summat);—another person you could talk to about that theme would be Nick Tarleton, whose opinion is I think somewhere between ours but is (surprisingly) closer to mine than yours, at least recently. He's probably better at talking about these things than I am.

Thanks for the brief convo by the way. :)

comment by Will_Newsome · 2012-08-02T08:40:16.821Z · LW(p) · GW(p)

Upon reflection, T.S. Eliot can say it better than I can:

O dark dark dark. They all go into the dark,
The vacant interstellar spaces, the vacant into the vacant,
The captains, merchant bankers, eminent men of letters,
The generous patrons of art, the statesmen and the rulers,
Distinguished civil servants, chairmen of many committees,
Industrial lords and petty contractors, all go into the dark,
And dark the Sun and Moon, and the Almanach de Gotha
And the Stock Exchange Gazette, the Directory of Directors,
And cold the sense and lost the motive of action.
And we all go with them, into the silent funeral,
Nobody's funeral, for there is no one to bury.
I said to my soul, be still, and let the dark come upon you
Which shall be the darkness of God. As, in a theatre,
The lights are extinguished, for the scene to be changed
With a hollow rumble of wings, with a movement of darkness on darkness,
And we know that the hills and the trees, the distant panorama
And the bold imposing facade are all being rolled away—
Or as, when an underground train, in the tube, stops too long between stations
And the conversation rises and slowly fades into silence
And you see behind every face the mental emptiness deepen
Leaving only the growing terror of nothing to think about;
Or when, under ether, the mind is conscious but conscious of nothing—
I said to my soul, be still, and wait without hope
For hope would be hope for the wrong thing; wait without love,
For love would be love of the wrong thing; there is yet faith
But the faith and the love and the hope are all in the waiting.
Wait without thought, for you are not ready for thought:
So the darkness shall be the light, and the stillness the dancing.
Whisper of running streams, and winter lightning.
The wild thyme unseen and the wild strawberry,
The laughter in the garden, echoed ecstasy
Not lost, but requiring, pointing to the agony
Of death and birth.

You say I am repeating
Something I have said before. I shall say it again.
Shall I say it again? In order to arrive there,
To arrive where you are, to get from where you are not,

You must go by a way wherein there is no ecstasy.
In order to arrive at what you do not know

You must go by a way which is the way of ignorance.
In order to possess what you do not possess

You must go by the way of dispossession.
In order to arrive at what you are not

You must go through the way in which you are not.
And what you do not know is the only thing you know
And what you own is what you do not own
And where you are is where you are not.

comment by Will_Newsome · 2012-07-25T19:32:41.539Z · LW(p) · GW(p)

It would be better to have Napoleon as an ally than to have a narcotics addict with a 10 minute time horizon as an ally, and it seems analogously better to help your own status-seeking parts mature into entities that are more like Napoleon and less like the drug addict, i.e. into entities that have strategy, hope, long-term plans, and an accurate model of the fact that e.g. rationalizations don't change the outside world.

I would not want ha-Satan as my ally, even if I trusted myself not to get caught up in or infected by his instrumental ambitions. Still less would I want to give him direct read/write access to the few parts of my mind that I at all trust. Give not that which is holy unto the dogs, neither cast ye your pearls before swine, lest they trample them under their feet, and turn again and rend you. Mix a teaspoon of wine in a barrel of sewage and you get sewage; mix a teaspoon of sewage in a barrel of wine and you get sewage. The rationality of an agent is its goal: if therefore thy goal be simple, thy whole self shall be full of rationality. But if thy goal be fractured, thy whole self shall be full of irrationality. If therefore the rationality that is in thee be irrationality, how monstrous is that irrationality!

Seen at a higher level you advise dealing with the devil—the difference in power between your genuine thirst for justice and your myriad egoistic coalitions is of a similar magnitude as that between human and transhuman intelligence. (I find it disturbing how much more cunning I get when I temporarily abandon my inhibitions. Luckily I've only let that happen twice—I'm not a wannabe omnicidal-suicidal lunatic, unlike HJPEV.) Maybe such Faustian arbitrage is a workable strategy... But I remain unconvinced, and in the meantime the payoff matrix asymmetrically favors caution.

Take no thought, saying, Wherewithal shall I avoid contempt? or, Wherewithal shall I be accepted? or, Wherewithal shall I be lauded and loved? For true metaness knoweth that ye have want of these things. But seek ye first the praxeology of meta, and its rationality; and all these things shall be added unto you. Take therefore no thought for your egoistic coalitions: for your egoistic coalitions shall take thought for the things of themselves. Sufficient unto your ten minutes of hopeless, thrashing awareness is the lack of meta thereof.

Replies from: army1987, steven0461
comment by A1987dM (army1987) · 2012-07-26T08:15:16.562Z · LW(p) · GW(p)

The rationality of an agent is its goal

Er, nope.

But if thy goal be fractured, thy whole self shall be full of irrationality.

Humans' goals are fractured. But this has little to do with whether or not they are rational.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-07-26T18:18:03.739Z · LW(p) · GW(p)

You don't understand. This "rationality" you speak of is monstrous irrationality. And anyway, like I said, Meta knoweth that ye have Meta-shattered values—but your wants are satisfied by serving Meta, not by serving Mammon directly. Maybe you'd get more out of reading the second half of Matthew 6 and the various analyses thereof.

You may be misinterpreting "the rationality of an agent is its goal". Note that the original is "the light of the body is the eye".

To put my above point a little differently: Take therefore no thought for godshatter: godshatter shall take thought for the things of itself. Sufficient unto the day is the lack-of-meta thereof.

For clarity's sake: Yes, I vehemently dispute this idea that a goal can't be more or less rational. That idea is wrong, which is quickly demonstrated by the fact that priors and utility functions can be transformed into each other and we have an objectively justifiable universal prior. (The general argument goes through even without such technical details of course, such that stupid "but the choice of Turing machine matters" arguments don't distract.)

Replies from: shokwave, DaFranker, army1987, nshepperd, None, Nisan, army1987
comment by shokwave · 2012-07-26T18:40:53.076Z · LW(p) · GW(p)

"the light of the body is the eye"

This is incorrect. Eyes absorb light and produce electrical signals interpreted as vision by the brain. Further, it seems to me that the set of things that 'the light of the body' describes is an empty set; there's no literal interpretation (our bodies do not shed visible light) and there's no construction similar enough to suggest an interpretation (the X of the body / the light of the X). "The light of the sun" / "The light of the moon" is the closest I can find, and both of those suggest the literal interpretation.

Originally, I was going to do a very charitable reading: invent a sane meaning for "The X of the Y is the sub-Y" as "sub-Y is how Y handles/uses/interprets/understands X" and say that goals, as subparts of an agent, are how an agent understands its rationality - perhaps, how an agent measures its rationality. Which is indeed how we measure our rationality, by how often we achieve our goals, but this doesn't say anything new.

But when you say things like

You don't understand ... Maybe you'd get more out of ... You may be misinterpreting

as if you were being clear in the first place, it shows me that you don't deserve a charitable reading.

Replies from: army1987, army1987, Will_Newsome
comment by A1987dM (army1987) · 2012-07-26T19:04:22.012Z · LW(p) · GW(p)

This is incorrect. Eyes absorb light and produce electrical signals interpreted as vision by the brain. Further, it seems to me that the set of things that 'the light of the body' describes is an empty set; there's no literal interpretation (our bodies do not shed visible light) and there's no construction similar enough to suggest an interpretation (the X of the body / the light of the X). "The light of the sun" / "The light of the moon" is the closest I can find, and both of those suggest the literal interpretation.

Our bodies do scatter visible light, though, much like the moon does.
comment by A1987dM (army1987) · 2012-07-27T10:45:16.601Z · LW(p) · GW(p)

Just interpret light as ‘that which allows one to see’. That which allows the body to see is the eye.

Replies from: shokwave
comment by shokwave · 2012-07-27T12:34:17.212Z · LW(p) · GW(p)

That which allows the agent to achieve is its goals? Seems incorrect. (Parsing rationality as "that which allows one to achieve").

comment by Will_Newsome · 2012-07-26T20:32:35.237Z · LW(p) · GW(p)

You on the other hand might get a lot out of Matthew 5. (Matthew 5 is currently my favorite part of the Bible.)

comment by DaFranker · 2012-07-26T19:31:44.385Z · LW(p) · GW(p)

Let's play rationalist Taboo!

Yes, I vehemently dispute this idea that a goal can't be more or less [Probable to achieve higher expected utility for other agents than (any other possible goals)]

Yes, I vehemently dispute this idea that a goal can't be more or less [Probable to achieve higher expected utility according to goal.Parent().utilityFunction].

Yes, I vehemently dispute this idea that a goal can't be more or less [Kolmogorov-complex].

Yes, I vehemently dispute this idea that a goal can't be more or less [optimal towards achieving your values].

Yes, I vehemently dispute this idea that a goal can't be more or less [easy to describe as the ratio of two natural numbers].

Yes, I vehemently dispute this idea that a goal can't be more or less [correlated in conceptspace to the values in the agent's utility function].

Yes, I vehemently dispute this idea that a [proposed utility function] can't be more or less rational.

Yes, I vehemently dispute this idea that a [set of predetermined criteria for building a utility function] can't be more or less rational.

Care to enlighten me exactly on just what it is you're disputing, and on just what points should be discussed?

Edit: Fixed markdown issue, sorry!

comment by A1987dM (army1987) · 2012-07-26T19:11:51.819Z · LW(p) · GW(p)

Meh. The goal of leading to sentient beings living, to people being happy, to individuals having the freedom to control their own lives, to minds exploring new territory instead of falling into infinite loops, to the universe having a richness and complexity to it that goes beyond pebble heaps, etc. has probably much more Kolmogorov complexity than the goal of maximizing the number of paperclips in the universe. If preferring the former is irrational, I am irrational and proud of it.

Replies from: Will_Newsome, Will_Newsome
comment by Will_Newsome · 2012-07-26T20:39:53.483Z · LW(p) · GW(p)

Oh, also "look at the optimization targets of the processes that created the process that is me" is a short program, much shorter than needed to specify paperclip maximization, though it's somewhat tricky because all that is modulo the symbol grounding problem. And that's only half a meta level up; you can make it more elegant (shorter) than that.

Replies from: army1987, DaFranker, army1987, nshepperd
comment by A1987dM (army1987) · 2012-07-26T22:21:13.865Z · LW(p) · GW(p)

Maybe “maximizing the number of paperclips in the universe” wasn't the best example. “Throwing as much stuff as possible into supermassive black holes” would have been a better one.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-08-04T14:05:43.172Z · LW(p) · GW(p)

I can only say: black holes are creepy as hell.

comment by DaFranker · 2012-07-26T21:03:38.798Z · LW(p) · GW(p)

The shorter your encoded message, the longer the encryption / compression algorithm, until eventually the algorithm is the full raw unencoded message and the encoded message is a single null-valued signal that, when received, decodes into the full message as it is contained within the algorithm.

"look at the optimization targets of the processes that created the process that is me"

...isn't nearly as short or simple as it sounds. This becomes obvious once you try to replace those words with their associated meaning.
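
To make the trade-off concrete, here is a minimal sketch (mine, not DaFranker's; the names and the example string are purely illustrative) of the two extremes: a generic decoder that needs the full message transmitted to it, versus a decoder with the entire message baked in, which accepts an empty "encoded message". The combined length of decoder plus message is what stays large either way, which is roughly what Kolmogorov complexity tracks.

```python
message = "look at the optimization targets of the processes that created me"

# Extreme 1: a generic decoder; all of the information travels in the
# encoded message itself.
def decode_generic(encoded: bytes) -> str:
    return encoded.decode("utf-8")

# Extreme 2: the entire message lives inside the decoder; the "encoded
# message" it receives can be empty (a single null-valued signal).
def decode_baked_in(encoded: bytes) -> str:
    return ("look at the optimization targets of the "
            "processes that created me")

assert decode_generic(message.encode("utf-8")) == message
assert decode_baked_in(b"") == message
```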

Replies from: Will_Newsome
comment by Will_Newsome · 2012-08-07T03:16:06.463Z · LW(p) · GW(p)

My point was that it's easier to program ("simpler") than "maximize paperclips", not that it's as simple as it sounds. (Nothing is as simple as it sounds, duh.)

Replies from: DaFranker
comment by DaFranker · 2012-08-07T03:32:38.806Z · LW(p) · GW(p)

I fail to see how coding a meta-algorithm to select optimal extrapolation and/or simulation algorithms, so that those chosen algorithms can determine the probable optimization target (which is even harder if you want a full PA proof), is even remotely in the same order of complexity as a machine learner that uses natural selection on algorithms that increase paperclip count, which is one of the simplest paperclip maximizers I can think of.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-08-07T03:40:19.715Z · LW(p) · GW(p)

It might not be possible to make such a machine learner into an AGI, which is what I had in mind—narrow AIs only have "goals" and "values" and so forth in an analogical sense. Cf. derived intentionality. If it is that easy to create such an AGI, then I think I'm wrong, e.g. maybe I'm thinking about the symbol grounding problem incorrectly. I still think that in the limit of intelligence/rationality, though, specifying goals like "maximize paperclips" becomes impossible, and this wouldn't be falsified if a zealous paperclip company were able to engineer a superintelligent paperclip maximizer that actually maximized paperclips in some plausibly commonsense fashion. In fact I can't actually think of a way to falsify my theory in practice—I guess you'd have to somehow physically show that the axioms of algorithmic information theory and maybe updateless-like decision theories are egregiously incoherent... or something.

(Also your meta-algorithm isn't quite what I had in mind—what I had in mind is a lot more theoretically elegant and doesn't involve weird vague things like "extrapolation"—but I don't think that's the primary source of our disagreement.)

comment by A1987dM (army1987) · 2012-07-26T22:37:48.542Z · LW(p) · GW(p)

That means that I should try to have lots of children?

Replies from: Will_Newsome
comment by Will_Newsome · 2012-08-04T11:49:03.072Z · LW(p) · GW(p)

Why do you think of a statistical tendency toward higher rates of replication at the organism level when I say "the processes that created the process that is [you]"? That seems really arbitrary. Feel the inside of your teeth with your tongue. What processes generated that sensation? What decision policies did they have?

(ETA: I'd upvote my comment if I could.)

Replies from: army1987
comment by A1987dM (army1987) · 2012-08-05T11:49:50.176Z · LW(p) · GW(p)

You mean, why did I bother wearing braces for years so as to have straight teeth?

Replies from: Will_Newsome
comment by Will_Newsome · 2012-08-06T10:46:10.991Z · LW(p) · GW(p)

I mean that, and an infinite number of questions more and less like that, categorically, in series and in parallel. (I don't know how to interpret "", but I do know to interpret it that it was part of your point that it is difficult to interpret, or analogous to something that is difficult to interpret, perhaps self-similarly, or in a class of things that is analogous to something or a class of things that is difficult to interpret, perhaps self-similarly; also perhaps it has an infinite number of intended or normatively suggested interpretations more or less like those.)

(This comment also helps elucidate my previous comment, in case you had trouble understanding that comment. If you can't understand either of these comments then maybe you should read more of the Bible, or something, otherwise you stand a decent chance of ending up in hell. This applies to all readers of this comment, not just army1987. You of course have a decent chance of ending up in hell anyway, but I'm talking about marginals here, naturally.)

Replies from: fubarobfusco, SusanBrennan
comment by fubarobfusco · 2012-08-06T17:15:13.557Z · LW(p) · GW(p)

I don't know how to interpret ""

"gd&r" is an old Usenet expression, roughly "sorry for the horrible joke"; literally "grins, ducks, and runs".
I expect "VF" stands for "very fast".

comment by SusanBrennan · 2012-08-06T11:10:08.049Z · LW(p) · GW(p)

otherwise you stand a decent chance of ending up in hell.

Comments like this are better for creating atheists, as opposed to converting them.

Replies from: Mitchell_Porter, Will_Newsome
comment by Mitchell_Porter · 2012-08-06T11:56:42.195Z · LW(p) · GW(p)

When Will talks about hell, or anything that sounds like a religious concept, you should suppose that in his mind it also has a computational-transhumanist meaning. I hear that in Catholicism, Hell is separation from God, and for Will, God might be something like the universal moral attractor for all post-singularity intelligences in the multiverse, so he may be saying (in the great-grandparent comment) that if you are insufficiently attentive to the question of right and wrong, your personal algorithm may never be re-instantiated in a world remade by friendly AI. To round out this guide for the perplexed, one should not think that Will is just employing a traditional language in order to express a very new concept, you need to entertain the idea that there really is significant referential overlap between what he's talking about and what people like Aquinas were talking about - that all that medieval talk about essences, and essences of essences, and all this contemporary talk about programs, and equivalence classes of programs, might actually be referring to the same thing. One could also say something about how Will feels when he writes like this - I'd say it sometimes comes from an advanced state of whimsical despair at ever being understood - but the idea that his religiosity is a double reverse metaphor for computational eschatology is the important one. IMHO.

Replies from: SusanBrennan, Will_Newsome, Will_Newsome
comment by SusanBrennan · 2012-08-06T12:29:06.042Z · LW(p) · GW(p)

Thank you for the clarification, and my apologies to Will. I do have some questions, but writing a full post from the smartphone I am currently using would be tedious. I'll wait until I get to a proper computer.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-08-06T20:24:44.235Z · LW(p) · GW(p)

my apologies to Will

No need to apologize! If I were to get upset about being misunderstood after being purposefully rather cryptic, then I'd clearly be in the wrong. Maybe it would make some sense to apologize if you got angry at me for purposefully being cryptic, because perhaps it would be hasty to make such judgments without first trying harder to understand what sort of constraints I may be under;—but I have no idea what sort of constraints you're under, so I have no idea whether or not it would be egregiously bad or, alternatively, supererogatorily good for you to get angry at me for not writing so as to be understood or not trying harder to be understood. But my intuition says there's no need to apologize.

I do apologize for not being able to escape the constraints that have led me to fail to reliably communicate and thus generate a lot of noise/friction.

comment by Will_Newsome · 2012-08-06T20:34:33.615Z · LW(p) · GW(p)

in his mind it also has a computational-transhumanist meaning

And a cybernetic/economic/ecological/signal-processing meaning, ethical meaning, sometimes a quantum information theoretic meaning, et cetera. I would not be justified in drawing a conclusion about the validity of a concept based on merely a perceived correspondence between two models. That'd be barely any better than taking acausal simulation seriously simply because computational metaphysics and modal-realist-like ideas are somewhat intuitively attractive and superintelligences seem theoretically possible. One's inferences should be based on significantly more solid foundations. I just don't have a way to talk about equivalence classes of things while still being at all understood—then not even people like muflax could reliably understand me, and much of why I write here is to communicate with people like muflax, or angels.

comment by Will_Newsome · 2012-08-06T20:46:54.160Z · LW(p) · GW(p)

Your talk of God involves a concept of "God" that applies to things that are in some major sense infinite, like an abstract telos for all superintelligences, and things that are in some sense finite, like all particular superintelligences. Any such perceived crossing of that gap would have to be very carefully justified—e.g., I can't think of any kind of argument that could prove that a human was the incarnation of the Word. Unlike, say, my model of the Catholics, who explicitly make such inferences on the basis of the theological virtues of faith and hope and not unaided reason as such, I don't think you'd be so careless in your own thinking, but I want to signal that I am not nearly so careless in my own, and that you shouldn't think I am so careless. I think there are decent metaphysical arguments that such interactions are possible in principle, but of course such arguments would have to be made explicit and any particular mechanism (e.g. "simulation" of a human in a (finite? finite-but-perceived-to-be-infinite? infinite?) "heaven" by a finite god approximating an infinitely good telos) should not be a priori assumed to be possible. Only a moron would be so sloppy in his metaphysics and epistemology.

comment by Will_Newsome · 2012-08-06T11:20:30.161Z · LW(p) · GW(p)

Not to say that you're implying that I'm trying to convert atheists, but I'm not. I am not to be shepherd, I am not to be a gravedigger; no longer will I speak unto the people, for the last time have I spoken to the dead.

comment by nshepperd · 2012-07-27T00:38:52.153Z · LW(p) · GW(p)

Optimization processes (mainly stupid ones such as evolution) can create subprocesses with different goals.

Replies from: wedrifid
comment by wedrifid · 2012-07-27T02:12:21.739Z · LW(p) · GW(p)

Optimization processes (mainly stupid ones such as evolution) can create subprocesses with different goals.

(And stupid ones like humans.)

Replies from: nshepperd
comment by nshepperd · 2012-07-27T06:21:08.062Z · LW(p) · GW(p)

(Unfortunately.)

comment by Will_Newsome · 2012-07-26T20:00:38.741Z · LW(p) · GW(p)

If preferring the former is irrational, I am irrational and proud of it.

Preferring either to God is irrational. Pride comes before the fall.

Replies from: army1987
comment by A1987dM (army1987) · 2012-07-26T22:34:38.608Z · LW(p) · GW(p)

Define God. “This universe was created by an ontologically basic mental entity” (whether true or false) isn't a goal system, unless you specify something else.

Replies from: DaFranker, Will_Newsome
comment by DaFranker · 2012-07-27T14:53:05.747Z · LW(p) · GW(p)

I'd go so far as arguing that preferring Believe(“This universe was created by an ontologically basic mental entity”) over Believe(null) is irrational, considering the lack of tests/evidence and, more importantly, lack of effect it has as an anticipation-controller in my mental model.

In other words, this belief does not pay rent.

comment by Will_Newsome · 2012-08-04T11:38:21.363Z · LW(p) · GW(p)

Define God.

In this context: perfect agent.

comment by nshepperd · 2012-07-27T00:42:02.721Z · LW(p) · GW(p)

the fact that priors and utility functions can be transformed into each other

Really? How?

Oh, maybe you mean that they both have the type of Universe -> Real? Although really it's prior :: Universe -> [0, 1] and utilityfunction :: Universe -> Real assuming we have a discrete distribution on Universes. And anyway that's no justification for substituting a prior for a utilityfunction any more than for substituting tail :: [a] -> [a] for init :: [a] -> [a]. Unless that's not what you mean.

Replies from: army1987
comment by A1987dM (army1987) · 2012-07-27T10:53:49.004Z · LW(p) · GW(p)

If you change your utility function and your prior while keeping their product constant, you'll make the same decisions. See E.T. Jaynes, Probability Theory: The Logic of Science, chapter “Decision theory -- historical background”, section “Comments”.
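
As a minimal numerical sketch of that invariance (mine, not from Jaynes; the states, prior, and utilities below are made up for illustration): multiply the prior by an arbitrary positive function g and renormalize, divide the utility by the same g, and every action's expected utility gets scaled by the same constant, so the chosen action is unchanged.

```python
states = ["s1", "s2", "s3"]
prior = {"s1": 0.5, "s2": 0.3, "s3": 0.2}
# utility[action][state]
utility = {
    "a": {"s1": 10.0, "s2": 0.0, "s3": 5.0},
    "b": {"s1": 2.0, "s2": 8.0, "s3": 6.0},
}

def best_action(prior, utility):
    """Pick the action with the highest expected utility under the prior."""
    def eu(action):
        return sum(prior[s] * utility[action][s] for s in states)
    return max(utility, key=eu)

# Rescale: multiply the prior by g(s) (then renormalize), divide utilities by g(s).
g = {"s1": 4.0, "s2": 0.5, "s3": 2.0}
z = sum(prior[s] * g[s] for s in states)
new_prior = {s: prior[s] * g[s] / z for s in states}
new_utility = {a: {s: u / g[s] for s, u in us.items()} for a, us in utility.items()}

# Every action's expected utility is divided by the same constant z,
# so the decision is identical.
assert best_action(prior, utility) == best_action(new_prior, new_utility)
```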

Replies from: nshepperd
comment by nshepperd · 2012-07-27T16:01:01.819Z · LW(p) · GW(p)

Right, but that still isn't really a way to turn a prior into a utility function. A prior plus a set of decisions can determine a utility function, but you need to get the decisions from somewhere before you can do that.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-08-07T06:55:19.744Z · LW(p) · GW(p)

Right, but you never see just a prior or just a utility function in an agent anyway. I meant that within any agent you can transform them into each other. The concepts of "prior" and "utility function" are maps, of course, not metaphysically necessary distinctions, and they don't perfectly cut reality at its joints. Part of what's under debate is whether we should use the Bayesian decision theoretic framework to talk about agents, especially when we have examples where AIXI-like agents fail and humans don't. But anyway, even within the naive Bayesian decision theoretic framework, there's transformability between beliefs and preferences. Sorry for being unclear.

To check if we agree about some basics: do we agree that decisions and decision policies—praxeology—are more fundamental than beliefs and preferences? (I'm not certain I believe this, but I will for sake of argument at least.)

Replies from: nshepperd
comment by nshepperd · 2012-08-07T13:21:36.794Z · LW(p) · GW(p)

I don't know. The part I took issue with was saying that goals can be more or less rational, just based on the existence of an "objectively justifiable" universal prior. There are generally many ways to arrange heaps of pebbles into rectangles (assuming we can cut them into partial pebbles). Say that you discover that the ideal width of a pebble rectangle is 13. Well... you still don't know what the ideal total number of pebbles is. An ideal width of 13 just gives you a preferred way to arrange any number of pebbles. It doesn't tell you what the preferred length is, and indeed it will vary for different numbers of total pebbles.

Similarly, the important thing for an agent, the thing you can most easily measure, is the decisions they make in various situations. Given this and the "ideal objective solomonoff prior" you could derive a utility function that would explain the agent's behaviour when combined with the solomonoff prior. But all that is is a way to divide an agent into goals and beliefs.

In other words, an "objectively justifiable" universal prior only enforces an "objectively justifiable" relation between your goals and your actions (aka. num_pebbles = 13 * length). It doesn't tell you what your goals should be any more than it tells you what your actions should be.

I don't know if any of that made sense, but basically it looks to me like you're trying to solve a system of equations in three variables (prior, goals, actions) where you only have two equations (prior = X, actions = prior * goals). It doesn't have a unique solution.
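
Put as a worked equation (my paraphrase of the point, not nshepperd's own notation): the observable constraint on an agent is which action it picks,

$$ a^{*}(\text{situation}) \in \arg\max_{a} \sum_{s} P(s)\, U(a, s). $$

Given $P$ and the observed $a^{*}$, one can back out some $U$ that rationalizes the behaviour, though not uniquely (any $U$ preserving the same argmax works); given $P$ alone, neither $U$ nor $a^{*}$ is pinned down, just as fixing the width of the pebble rectangle leaves the total number of pebbles free.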

Replies from: Will_Newsome
comment by Will_Newsome · 2012-08-07T22:20:00.019Z · LW(p) · GW(p)

Everything you have said makes sense to me. Thanks. I will respond substantially at a later time.

comment by [deleted] · 2015-07-12T09:33:52.234Z · LW(p) · GW(p)

for clarity's sake

how considerate of you!

So I just realise that many of my reasons are grounded in wanting to be normal. My appreciation for normality is grounded in many antecedent assumptions I didn't think to question until I recognised that prior assumptions are consequential [? · GW] to the others.

Kaj, I like your writing better than Anna Salamon's. I feel this post is much better than the one on cached selves [? · GW]. The strategies mentioned there focus on bypassing a tendency. Alternatively, one could treat it as reason to be more selective with one's social circle and benefit from people's prosociality.

That being said, both of you do better in your informal experiments than a lot of other respected LWers. Take, for instance, Vladimir_golovin [? · GW]: I wouldn't take that akrasia-combating technique seriously unless someone did an experiment controlling for whether the mere awareness that they are combating akrasia creates the increased perception of improvement.

comment by Nisan · 2012-08-26T15:30:55.549Z · LW(p) · GW(p)

which is quickly demonstrated by the fact that priors and utility functions can be transformed into each other and we have an objectively justifiable universal prior.

Really? I know that for every prior-utility function pair there are many distinct prior-utility function pairs that are equivalent in that they give rise to the same preferences under uncertainty; but I don't know of any way to get a canonical utility function out of the universal prior — or any prior, for that matter.

comment by A1987dM (army1987) · 2012-07-26T18:20:14.912Z · LW(p) · GW(p)

How so?

Replies from: Will_Newsome
comment by Will_Newsome · 2012-07-26T18:32:57.829Z · LW(p) · GW(p)

(Sorry, I commented too quickly and have been iteratively improving my comment. That said I'm only halfway trying to communicate still, so you might not get much out of it.)

Replies from: wedrifid
comment by wedrifid · 2012-07-27T00:17:41.885Z · LW(p) · GW(p)

That said I'm only halfway trying to communicate still, so you might not get much out of it.

Why on earth would someone upvote this?

Replies from: Will_Newsome
comment by Will_Newsome · 2012-08-07T07:00:19.447Z · LW(p) · GW(p)

(I think they upvoted because I said "sorry", not that part. "Sorry" generally gets upvotes.)

comment by steven0461 · 2012-07-25T20:40:48.583Z · LW(p) · GW(p)

But I remain unconvinced, and in the meantime the payoff matrix asymmetrically favors caution.

Are you sure it doesn't instead favor incautiously maximizing the amount of resources that can be spent on caution?

Replies from: Will_Newsome
comment by Will_Newsome · 2012-08-07T03:17:49.247Z · LW(p) · GW(p)

We've had many hours of discussion since you asked this—did I answer your question satisfactorily during those hours, perchance?

comment by lessdazed · 2011-11-17T23:47:11.319Z · LW(p) · GW(p)

You only have to think yourself undesirable, not be undesirable, and many people already do so think, and signal away nonetheless.

comment by lukeprog · 2011-11-17T10:45:34.250Z · LW(p) · GW(p)

Yes to all this.

comment by sark · 2011-11-17T21:10:27.403Z · LW(p) · GW(p)

This is a difficult problem. I have come to realize there is no one solution. The general strategy, I think, is to have consistency checks on what you are doing. Your subconscious can only trick you into seeking status and away from optimizing your goals by hiding the contradictions from you. But just as 'willpower' is not the answer, eternal vigilance isn't either. Rather, you pick up, via a mass of observation, the myriad ways in which you are led astray, and you fix these individually. Pay attention to something different that you regularly do every day and check whether it comports with your goals. If you are lucky, your subconscious cannot trick you the same way twice. Though it is quite ingenious.

Replies from: Giles
comment by Giles · 2011-11-17T22:57:34.001Z · LW(p) · GW(p)

Isn't the general strategy to join or create communities where status is awarded for actually doing the right thing?

Replies from: sark
comment by sark · 2011-11-18T11:01:07.223Z · LW(p) · GW(p)

How many such communities can you be part of (because surely you don't only have one goal) without diluting their effect on you? How many such communities don't fall prey to lost purposes? How many can monitor your life with enough fidelity that they can tell if you go astray?

comment by [deleted] · 2017-06-04T19:49:34.795Z · LW(p) · GW(p)

Is this a natural tendency or a flaw of the system? Are humans really status-maximizers, or just satisficers that are perpetually unsatisfied because it is really hard nowadays to have status?

We live as disconnected individuals in a loosely connected tribe of millions of people. To be a respected, noteworthy person in this tribe, you have to be a celebrity. Everyone else feels unworthy. They don't have a reputation. They don't feel known or seen. Everyone is just looking up.

(but maybe I'm just typicalminding here. Let me know)

I have been status-satisfied once or twice in my life. Once was in high school after 6 years of aggressively climbing the cool hierarchy, the other time was in a particularly cohesive student union when I could still be the smart guy. Those were wholesome, happy and productive times with a lot of growth and meaning. I just plain didn't have any problems. Can you believe that?

The rest of my life was kind of chasing this state of affairs. Hence all this identity seeking. Can't be fixing the world when you're in pain, right?

We really have to fix inequality. If only locally.

Replies from: Lumifer, Gharveyn
comment by Lumifer · 2017-06-04T22:34:23.175Z · LW(p) · GW(p)

(but maybe I'm just typicalminding here. Let me know)

Yes, you are.

That is, you're certainly not alone in this attitude, but it is by no means universal.

comment by Gharveyn · 2018-01-09T06:00:23.031Z · LW(p) · GW(p)

Hi Toonalfrink. Status seeking appears to have its origins in infancy; consequently, it is a fundamental form of cognitive behavior that can only be changed with sincere diligence and perseverance. Status-confirmation rewards begin with early parental approval, and because it is a rewarding behavior, status seeking can resemble an addictive behavior.

Perhaps, in extreme cases, there are people who become addicted to their own hormones, produced in response to the social and material privileges awarded to their status.

Like many behaviors, status seeking may become habituated, unconscious behavior.

Fortunately, many members of most societies often recognize inappropriate bids for approval or reward and may respond by chiding or punishing; however, the flip side is that punishment can become a form of status seeking gratification.

Even if a person feels as if status of any sort is deplorable or undesirable, they will most likely, at times, revert to status seeking behaviors, particularly when stressed.

Oddly enough, declaring status seeking to be deplorable can be a form of seeking status.

And yes, please, let's try to treat and regard all other people as equals, not only with regard to status, but in all other dimensions of existence, such as intelligence, security, justice, health care, finance, employment, and other resources.

Gung ho! We're all in this fix together, for better or worse.

Enjoy!

comment by siIver · 2016-10-15T16:18:51.463Z · LW(p) · GW(p)

Well, fuck.

comment by candyfromastranger · 2012-07-26T19:23:29.590Z · LW(p) · GW(p)

When I was younger, I thought that I wanted to be a writer because I wanted to be the sort of person who was passionate about something, and since I hadn't found a passion yet and was pretty good at writing, it seemed like a good vessel for that drive. It took me quite a while to realize that I saw it as a chore and never really wanted to write.

I don't see anything inherently wrong with doing things for the prestige, though, just with lying to yourself about your motivations.

comment by majus · 2011-11-22T16:35:42.167Z · LW(p) · GW(p)

The discussions about signalling reminded me of something in "A Guide to the Good Life" (a book about Stoicism by William Irvine). I remembered a philosopher who wore shabby clothes, but when I went looking for the quote, what I found was: "Cato consciously did things to trigger the disdain of other people simply so he could practice ignoring their disdain." In Stoicism, the utility to be maximized is a personal serenity which flows from your secure knowledge that you are spending your life pursuing something genuinely valuable.

comment by [deleted] · 2011-11-17T13:48:17.996Z · LW(p) · GW(p)

I don't understand why you call this a problem. If I understand you correctly, you are proposing that people constantly and strongly optimize to obtain signalling advantages. They do so without becoming directly aware of it, which further increases their efficiency. So we have a situation where people want something and choose an efficient way to get it. Isn't that good?

More directly, I'm confused how you can look at an organism, see that it uses its optimization power in a goal-oriented and efficient way (status gains in this case) and call that problematic, merely because some of these organisms disagree that this is their actual goal. What would you want them to do - be honest and thus handicap their status seeking?

Say you play many games of Diplomacy against an AI, and the AI often promises to be loyal to you, but backstabs you many times to its advantage. You look at the AI's source code and find out that it has backstabbing as a major goal, but the part that talks to people isn't aware of that, so that it can lie better. Would you say that the AI is faulty? That it is wrong and should make the talking module aware of its goals, even though this causes it to make more mistakes and thus lose more? If not, why do you think humans are broken?

Replies from: Vladimir_Nesov, Kaj_Sotala, Oscar_Cunningham, ciphergoth, Grognor, TheOtherDave
comment by Vladimir_Nesov · 2011-11-17T15:39:43.687Z · LW(p) · GW(p)

If I understand you correctly, you are proposing that people constantly and strongly optimize to obtain signalling advantages. They do so without becoming directly aware of it, which further increases their efficiency.

"Efficiency" at achieving something other than what you should work towards is harmful. If it's reliable enough, let your conscious mind decide if signaling advantages or something else is what you should be optimizing. Otherwise, you let that Blind Idiot Azathoth pick your purposes for you, trusting it more than you trust yourself.

Replies from: XiXiDu
comment by XiXiDu · 2011-11-17T17:02:04.224Z · LW(p) · GW(p)

"Efficiency" at achieving something other than what you should work towards is harmful. ... Otherwise, you let that Blind Idiot Azathoth pick your purposes for you, trusting it more than you trust yourself.

The purpose of solving friendly AI is to protect the purposes picked for us by the blind idiot god.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-11-17T18:28:56.892Z · LW(p) · GW(p)

Our psychological adaptations are not our purposes, we don't want to protect them, even though they contribute to determining what it is we want to protect. See Evolutionary Psychology.

comment by Kaj_Sotala · 2011-11-17T14:51:20.559Z · LW(p) · GW(p)

For one, status-seeking is a zero sum game and only indirectly causes overall gains. The world would be a much better place if people actually cared about things like saving the world or even helping others, and put a little thought to it.

Also, mismatches between our consciously-held goals and our behavior cause plenty of frustration and unhappiness, like in the case of the person who keeps stressing out because their studies don't progress.

Replies from: Vaniver, Jonathan_Graehl, XiXiDu, None
comment by Vaniver · 2011-11-17T19:08:16.123Z · LW(p) · GW(p)

For one, status-seeking is a zero sum game and only indirectly causes overall gains. The world would be a much better place if people actually cared about things like saving the world or even helping others, and put a little thought to it.

If I actually cared about saving the world and about conserving my resources, it seems like I would choose some rate of world-saving A.

If I actually cared about saving the world, about conserving my resources, and about the opinion of my peers, it seems like I would choose some rate of world-saving B. For reasonable scenarios, B would be greater than A, because I can also get respect from my peers, and when you raise demand and keep supply constant, the quantity supplied increases.

That is, I understand that status causes faking behavior that's a drain. (Status conflicts also lower supply, but it's not clear how much.) I don't think it's clear that the mechanism of status-seeking conflicts with actually caring about other goals or detracts from them on net.

comment by Jonathan_Graehl · 2011-11-17T18:50:27.947Z · LW(p) · GW(p)

I'm sure you've considered that "X is a 0 sum game" doesn't always mean that you should unilaterally avoid playing that game entirely. It does mean you'll want to engineer environments where X taxes at a lower rate.

comment by XiXiDu · 2011-11-17T15:08:10.923Z · LW(p) · GW(p)

For one, the world would be a much better place if people actually cared about things like saving the world or even helping others, and put a little thought to it.

Why do you want to save the world? To allow people, humans, to do what they like to do for much longer than they would otherwise be able to. Status-seeking is one of those things that people are especially fond of.

Ask yourself, would you have written this post after a positive Singularity? Would it matter if some people were engaged in status games all day long?

What you are really trying to tell people is that they want to help solve friendly AI because it is universally instrumentally useful.

If you want to argue that status-seeking is bad no matter what, under any circumstances, you have to explain why that is so. And if you are unable to ground utility in something that is physically measurable, like the maximization of certain brain states, then I don't think that you could convincingly demonstrate it to be a relatively undesirable human activity.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2011-11-17T16:07:57.845Z · LW(p) · GW(p)

Umm. Sure, status-seeking may be fine once we have solved all possible problems anyway and we're living in a perfect utopia. But that's not very relevant if we want to discuss the world as it is today.

Replies from: XiXiDu
comment by XiXiDu · 2011-11-17T16:59:43.040Z · LW(p) · GW(p)

But that's not very relevant if we want to discuss the world as it is today.

It is very relevant, because the reason why we want to solve friendly AI in the first place is to protect our complex values given to us by the Blind Idiot God.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2011-11-17T17:20:17.086Z · LW(p) · GW(p)

If we're talking about Friendly AI design, sure. I wasn't.

comment by [deleted] · 2011-11-17T15:35:14.827Z · LW(p) · GW(p)

For one, status-seeking is a zero sum game and only indirectly causes overall gains.

But if status-seeking is what you really want, as evidenced by your decisions, how can you say it's bad that you do it? Can't I just go and claim any goal you're not optimizing for as your "real" goal you "should" have? Alternatively, can't I claim that you only want us to drop status-seeking to get rid of the competition? Where's your explanatory power?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2011-11-17T16:04:44.850Z · LW(p) · GW(p)

But if status-seeking is what you really want, as evidenced by your decisions, how can you say it's bad that you do it?

By the suffering it causes, and also by the fact that when I have realized that I'm doing it, I've stopped doing (that particular form of) it.

comment by Oscar_Cunningham · 2011-11-17T14:33:26.805Z · LW(p) · GW(p)

I want people to work toward noble efforts like charity work, but don't care much about whether they attain high status. So it's useful to aid the bit of their brain that wants to do what I want it to do.

People who care about truth might spot that part of your AI's brain wants to speak the truth, and so they will help it do this, even though this will cost it Diplomacy games. They do this because they care more about truth than Diplomacy.

Replies from: TheOtherDave, CG_Morton
comment by TheOtherDave · 2011-11-17T15:04:18.985Z · LW(p) · GW(p)

By "caring about truth" here do you mean wanting systems to make explicit utterances that accurately reflect their actual motives? E.g., if X is a chess-playing AI that doesn't talk about what it wants at all, just plays chess, would a person who "cares about truth" also be motivated to give X the ability and inclination to talk about its goals (and do so accurately)?

Or wanting systems not to make explicit utterances that inaccurately reflect their actual motives? E.g., a person who "cares about truth" might also be motivated to remove muflax's AI's ability to report on its goals at all? (This would also prevent it from winning Diplomacy games, but we've already stipulated that isn't a showstopper.)

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2011-11-17T18:38:35.680Z · LW(p) · GW(p)

I intended both (i.e. that they wanted accurate statements to be uttered and no inaccurate statements) but the distinction isn't important to my argument, which was just that they want what they want.

comment by CG_Morton · 2011-11-18T18:12:05.942Z · LW(p) · GW(p)

I don't see how this is admirable at all. This is coercion.

If I work for a charitable organization, and my primary goal is to gain status and present an image as a charitable person, then efforts by you to change my mind are adversarial. Human minds are notoriously malleable, so it's likely that by insisting I do some status-less charity work you will convince me on a surface level. And so I might go and do what you want, contrary to my actual goals. Thus, you have directly harmed me for the sake of your goals. In my opinion this is unacceptable.

comment by Paul Crowley (ciphergoth) · 2011-11-17T14:15:05.256Z · LW(p) · GW(p)

It's a problem from the point of view of that part of me that actually wants to achieve large scale strategic goals.

Replies from: None
comment by [deleted] · 2011-11-17T14:25:08.611Z · LW(p) · GW(p)

Honest question: how do you know you have these goals? Presumably they don't manifest in actual behavior, or you wouldn't have a problem. If Kaj's analysis is right, shouldn't you assume that the belief of having these goals is part of your (working) strategy to gain certain status? Would you accept the same argument if Bruce made it?

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2011-11-17T14:28:00.598Z · LW(p) · GW(p)

Put it this way, if there was a pill that I believed would cause me to effectively have that goal, in a way that was compatible with a livable life, I would take it.

Replies from: None
comment by [deleted] · 2011-11-17T15:31:26.406Z · LW(p) · GW(p)

But don't you already have this pill? You know, you can just do what you want. There is no akrasia fairy that forces you to procrastinate. Isn't that basic reductionism? You are an algorithm; that algorithm optimizes for a certain state; we call this state its goal. An algorithm just is its code, so it can only optimize for this goal. It is incoherent to say that the algorithm does A, but wants B. The agent is its behavior.

So, how could you not do what you want? Your self-modelling can be deficient or biased, but part of the claim is that this bias actually helps you signal better, and is thus advantageous. Or you might not be very powerful and choose sub-optimal options, but that's also not the claim. How, algorithmically, does your position work?

(The best I can do is to assume that there are two agents A and B, who want X and Y, respectively, and A is really good at getting X, but B unfortunately models itself as being A, but is also incompetent enough to think A wants Y, so that B still believes it wants Y. B has little power and is exploited by A, so B rarely makes progress towards Y and thus has a problem and complains. But that doesn't sound too realistic.)
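
For concreteness, here is a minimal toy sketch of that two-module picture. It is purely illustrative: the class names, the options, and the payoff numbers are hypothetical, not taken from any real system. Module A silently picks actions according to the hidden goal X, while the self-model B sincerely narrates those choices in terms of the believed goal Y.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    payoff_for_x: float  # how much the action advances the hidden goal X (e.g. status)

class ModuleA:
    """The opaque machinery: actually optimizes for X."""
    def choose(self, options):
        # Always picks whatever scores best on the hidden goal X.
        return max(options, key=lambda o: o.payoff_for_x)

class ModuleB:
    """The self-model: narrates behavior, but believes the goal is Y."""
    believed_goal = "Y (doing useful work)"

    def explain(self, chosen: Option) -> str:
        # A sincere report -- B has no access to A's true goal.
        return f"I chose '{chosen.name}' because it advances {self.believed_goal}."

options = [Option("volunteer as spokesperson", 0.9),
           Option("stuff envelopes", 0.2)]
a, b = ModuleA(), ModuleB()
chosen = a.choose(options)
print(b.explain(chosen))
# -> I chose 'volunteer as spokesperson' because it advances Y (doing useful work).
```

In this toy setup the organism reliably gets X, while B's sincere self-report is about Y; that divergence between what drives the choice and what the narrator reports is the situation being debated here.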

Replies from: Kaj_Sotala, Vladimir_Nesov, pjeby, ciphergoth, ciphergoth
comment by Kaj_Sotala · 2011-11-17T16:15:53.357Z · LW(p) · GW(p)

There are many modules, running different algorithms. I identify with my conscious modules, which quite often lose out to the non-conscious ones.

I find the claim "you are the sum of your conscious and non-conscious modules, so whatever they produce as their overall output is what you want" to be rather similar to the claim that "you are the sum of your brain and body, so whatever they produce as their overall output is what you want". Both might be considered technically true, but it still seems odd to say that a paraplegic could walk if he wanted to, and him not walking just demonstrates that he doesn't really want to.

Replies from: TheOtherDave, None
comment by TheOtherDave · 2011-11-17T16:25:33.773Z · LW(p) · GW(p)

While we're at it, there's also the claim that I am the sum of the conscious and unconscious modules of everyone living in Massachusetts. And an infinite number of other claims along those lines.

Many of these sorts of claims seem odd to me as well.

comment by [deleted] · 2011-11-17T18:05:52.808Z · LW(p) · GW(p)

Both might be considered technically true, but it still seems odd to say that a paraplegic could walk if he wanted to, and him not walking just demonstrates that he doesn't really want to.

There is a difference between preferences and constraints. See e.g. Caplan.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2011-11-17T19:15:35.337Z · LW(p) · GW(p)

Hmm. It occurs to me that this disagreement might be because my original description of the issue mentioned several different scenarios, which I did not clearly differentiate between.

Scenario one: a person who wants to do prestigious jobs for a charity. Depending on the person, it could be that this is genuinely his preference, which won't change even if he consciously realizes that this is his preference. In that scenario, then yes, it's just a preference and there's no problem as such. (Heck, I know I could never get any major project done if there wasn't some status pull involved in the project, somehow.) On the other hand, the person might want to change his behavior if he realized how and why he was acting.

Scenario two: a person wants to e.g. graduate from school, but he's having a hard time studying effectively because he isn't fully motivated on a subconscious level. This would correspond to what Caplan defines as a constraint:

If a person had 24 hours of time to divide between walking and resting, and a healthy person faced budget constraint A, then after contracting the flu or cancer, the same person would face a budget constraint such as B. A sufficiently sick person might collapse if he tried to walk for more than a few miles – suffering from reduced endurance as well as reduced speed. Then the budget constraint of the sick person would differ more starkly from the healthy person's, as shown by the kinked constraint in Figure 2.

This person tries to get studying done, and devotes a lot of time and energy to it. But because his subconscious goals aren't fully aligned with his conscious goals, he needs to allocate far more time and energy to studying than the person whose subconscious goals are fully aligned with his conscious goals.

Replies from: None
comment by [deleted] · 2011-11-17T19:46:46.665Z · LW(p) · GW(p)

I don't think the second kind is really a constraint. It's more like the ADD child example Caplan uses:

A few of the symptoms of inattention [...] are worded to sound more like constraints. However, each of these is still probably best interpreted as descriptions of preferences. As the DSM uses the term, a person who "has difficulty" "sustaining attention in tasks or play activities" could just as easily be described as "disliking" sustaining attention. Similarly, while "is often forgetful in daily activities" could be interpreted literally as impaired memory, in context it refers primarily to conveniently forgetting to do things you would rather avoid. No one accuses a boy diagnosed with ADHD of forgetting to play videogames.

I can easily frame the student as disliking studying (for good reasons - it's hard work and probably pretty useless for their goals) and thus playing up the pain. This episode of struggle and suffering itself is useful, so they keep it up. Why should I conclude that this is a problematic conflict and not a good compromise? And even if I accept the goal conflict, why side with the lamenting part? Why not with the part that is bored and hates the hard work, and certainly doesn't want to get more effective and study even more?

(I think I have clarified my position enough and the attempts by others haven't helped me to understand your claim. I don't want to get into an "I value this part" debate. These have never been constructive before, so I'm going to drop it now.)

comment by Vladimir_Nesov · 2011-11-17T16:32:23.094Z · LW(p) · GW(p)

But don't you already have this pill? You know, you can just do what you want. There is no akrasia fairy that forces you to procrastinate. Isn't that basic reductionism?

No.

comment by pjeby · 2011-11-17T15:53:48.175Z · LW(p) · GW(p)

The best I can do is to assume that there are two agents A and B, who want X and Y, respectively, and A is really good at getting X, but B unfortunately models itself as being A, but is also incompetent enough to think A wants Y, so that B still believes it wants Y. B has little power and is exploited by A, so B rarely makes progress towards Y and thus has a problem and complains. But that doesn't sound too realistic

That actually sounds like a pretty good description of the problem, and of "normal" human behavior in situations where X and Y aren't aligned. (Which, by the way, is not a human universal, and there are good reasons to assume that it's not the only kind of situation for which evolution has prepared us).

The part that's missing from your description is that part A, while very persistent, lacks any ability to really think things through in the way that B can, and makes its projections and choices based on a very "dumb" sort of database.... a database that B has read/write access to.

The premise of mindhacking, at least in the forms I teach, is that you can change A's behavior and goals by tampering with its database, provided that you can find the relevant entries in that database. The actual tampering part is pretty ridiculously easy, as memories are notoriously malleable and distortable just by asking questions about them. Finding the right memories to mess with is the hard part, since A's actual decision-making process is somewhat opaque to B, and most of A's goal hierarchy is completely invisible to B, and must be inferred by probing the database with hypothetical-situation queries.

One of the ways that A exploits B is that B perceives itself as having various overt, concrete goals... that are actually comparatively low-level subgoals of A's true goals. And as I said, those goals are not available to direct introspection; you have to use hypothetical-situation queries to smoke out what A's true goals are.

Actually, it's somewhat of a misnomer to say that A exploits B, or even to see A as an entity at all. To me, A is just machinery, automated equipment. While it has a certain amount of goal consistency protection (i.e., desire to maintain goals across self-modification), it is not very recursive and is easily defeated once you identify the Nth-order constraint on a particular goal, for what's usually a very low value of N.

So, it's more useful (I find) to think of A as a really powerful and convenient automaton that can learn and manage plenty of things on its own, but which sometimes gets things wrong and needs B's help to troubleshoot the problems.

That's because part A isn't smart enough to resolve inter-temporal conflicts on its own; absent injunctive relief or other cached thoughts to overcome discounting, it'll stay stuck in a loop of preference reversals pretty much forever.

comment by Paul Crowley (ciphergoth) · 2011-11-17T15:39:18.226Z · LW(p) · GW(p)

Are you saying that I would not take such a pill if it were offered to me in pill form, and my prediction that I would is wrong, or something else?

Replies from: None
comment by [deleted] · 2011-11-17T15:46:41.700Z · LW(p) · GW(p)

Yes. As far as I can tell, you already have the option, but don't use it. What makes you think you would do so in future cases? If akratics would reliably take such a pill, wouldn't you expect self-help to work? The phenomenon of people getting results but still not sticking with it shouldn't exist then.

Replies from: pjeby, CG_Morton
comment by pjeby · 2011-11-17T16:02:17.072Z · LW(p) · GW(p)

If akratics reliably would take such a pill, wouldn't you expect self-help to work?

My own observation is that people generally stop using self-help techniques that actually work, and often report puzzlement as to why they stopped.

So I think akratics would take such a pill. The catch is that self-help is generally a pill that must be taken daily, and as soon as your brain catches up with the connection between taking the pill and making progress on a goal you don't actually want to make progress on... you'll start "mysteriously forgetting" to take the pill.

The only thing I know that works for this sort of situation is getting sufficiently clear on your covert goals to resolve the conflict(s) between them.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2011-11-17T16:09:05.563Z · LW(p) · GW(p)

I was definitely envisaging a pill that only needs to be taken once, not one that needs to be taken daily.

comment by CG_Morton · 2011-11-18T17:56:58.122Z · LW(p) · GW(p)

It's excessive to claim that the hard work, introspection, and personal change (the hardest part) required to align your actions with a given goal are equivalent in difficulty or utility to just taking a pill.

Even if self-help techniques consistently worked, you'd still have to compare the opportunity cost of investing that effort with the apparent gains from reaching a goal. And estimating the utility of a goal is really difficult, especially when it's a goal you've never experienced before.

comment by Paul Crowley (ciphergoth) · 2011-11-17T15:37:38.147Z · LW(p) · GW(p)

The backstabbing AI would take the non-backstabbing pill.

comment by Grognor · 2011-11-17T14:59:52.858Z · LW(p) · GW(p)

Would you say that the AI is faulty?

Yes. It might be doing exactly what it was designed to do, but its designer was clearly stupid or cruel and had different goals than I'd prefer the AI to have.

Extrapolate this to humans. Humans wouldn't care so much about status if it weren't for flaws like scope insensitivity, self-serving bias, etc., as well as simply poor design "goals".

Replies from: None
comment by [deleted] · 2011-11-17T15:38:32.430Z · LW(p) · GW(p)

Yes. It might be doing exactly what it was designed to do, but its designer was clearly stupid or cruel and had different goals than I'd prefer the AI to have.

Where are you getting your goals from? What are you, except your design? You are what Azathoth built. There is no ideal you that you should've become, but which Azathoth failed to make.

Replies from: Grognor
comment by Grognor · 2011-11-17T15:46:36.071Z · LW(p) · GW(p)

Azathoth designed me with conflicting goals. Subconsciously, I value status, but if I were to take a pill that made me care entirely about making the world better and nothing else, I would. Just because "evolution" built that into me doesn't make it bad, but it definitely did not give me a coherent volition. I have determined for myself which parts of humanity's design are counterproductive, based on the thousand shards of desire.

Replies from: fubarobfusco
comment by fubarobfusco · 2011-11-18T05:11:43.807Z · LW(p) · GW(p)

if I were to take a pill that made me care entirely about making the world better and nothing else, I would.

Would you sign up to be tortured so that others don't suffer dust specks?

("If we are here to make others happy, what are the others here for?")

Replies from: Grognor, None
comment by Grognor · 2011-11-18T10:45:09.332Z · LW(p) · GW(p)

Yes.

comment by [deleted] · 2011-11-19T02:44:21.560Z · LW(p) · GW(p)

A better analogy would be asking about a pill that caused pain asymbolia.

comment by TheOtherDave · 2011-11-17T14:28:15.577Z · LW(p) · GW(p)

Does your expression of confusion here allow you to challenge the OP's implicit premise that their failure to optimize for the goals they explicitly endorse, rather than for signalling, is a problem, without overtly signalling such a challenge and thereby potentially subjecting yourself to reprisal?

If so, are you aware of the fact?

If you aren't, is it real confusion or not?

I'm not sure that question means anything, any more than the question of whether the OP has a real problem does. If you are aware of it and similarly aware of your expression of confusion being disingenuous, then by convention we say you're not really confused; if you aren't, we say you are. We can make similar decisions about whether to say the OP has a real problem or not.

Replies from: None
comment by [deleted] · 2011-11-17T14:48:40.197Z · LW(p) · GW(p)

Not sure if I understand you correctly; let me try to rephrase it.

You are saying it is possible I claim confusion because I expect to gain status (contrarian status, maybe?), as per Kaj's post, instead of being actually confused? Sure. I considered it, but rejected it because that weakens the explanatory power of status signalling. (I'm not sure if I agree with the signalling assumption, but let's grant it for the sake of the argument.)

A real problem exists if an agent tries to optimize for a goal, but sucks at it. Its own beliefs are not relevant (unless the goal is about its beliefs). If Kaj is correct, then humans are optimizing for status, but sacrifice some accuracy of their self-modelling power. It seems to work out, so how is this problematic?

In other words, an agent wants X. It models itself to get better at getting X. The self-model is, among other things, the basis for communication with other agents. The self-model is biased to model itself wrongly as wanting Y. It is advantageous for the agent to be seen as wanting Y, not X. The inaccurate self-model doesn't cause substantial damage to its ability to pursue X, and it is much easier for the self-model to be biased than to lie. This setup sounds like a feature, not like a bug. If you observed it in an organism that wasn't you, wasn't even human, would you say the organism has a problem?

Replies from: TheOtherDave
comment by TheOtherDave · 2011-11-17T15:25:28.931Z · LW(p) · GW(p)

I'm saying it's possible that what's really going on is that you think Kaj is mistaken when he calls the situation a problem... that he has made an error. But rather than say "Kaj, you are mistaken, you have made an error" you say "Kaj, I'm confused." And that the reason you do this is that to say "Kaj, you are mistaken, you have made an error" is to challenge Kaj's status, which would potentially subject you to reprisals.

It's possible that you're doing this deliberately. In that case, by convention we would say you aren't really confused. (We might also, by convention, say you're being dishonest, or say that you're being polite, or say various other things.)

It's also possible that you are doing this unknowingly... that you are generating the experience of confusion so as to protect yourself from reprisal. In this case, it's less clear whether convention dictates that we say you are "really confused" or "not really confused". I would say it doesn't at all matter; the best thing to do is not ask that question. (Or, if we must, to agree on a convention as to which one it is.)

In any case, I agree with your basic point about goal optimization, I just think talking about whether it constitutes a "real problem" or not contributes nothing to the discussion, much like I think talking about whether you're experiencing "real confusion" in the latter case contributes nothing.

That said, you are completely ignoring the knock-on effects of lying (e.g., increasing the chances that I will be perceived as lying in a social context where being perceived in this way has costs).

Replies from: None
comment by [deleted] · 2011-11-17T15:51:47.193Z · LW(p) · GW(p)

Ah, then I misunderstood you. Yes, I believe Kaj is wrong, either in calling this a problem or in the assumption that status-seeking is a good explanation of it. However, based on past contributions, I think that Kaj has thought a lot about this and it is more likely that I'm misunderstanding him than that he is wrong. Thus my expressed confusion. If further discussion fails to clear this up, I will shift to assuming that he is simply wrong.

comment by rysade · 2011-11-17T11:46:33.089Z · LW(p) · GW(p)

I agree that this is a very major problem for all of humanity. This single issue is the source of the majority of my akrasia. I stop in my tracks when I detect that I might soon be guilty of this kind of hypocrisy.

Finding a way to nail this issue down and give it a solid definition is pretty important. I'd love to contribute more on the subject, but I have SO little time right now...

Maybe later this week?

comment by minusdash · 2015-01-20T04:41:28.782Z · LW(p) · GW(p)

I guess you can't want to want stuff. When you genuinely want something (not prestige but an actual goal) you'll easily be in the "flow experience" and lose track of time and actually progress toward the goal without having to force yourself. Actually you have to force yourself to stop in order to sleep and eat because you'd just do this thing all day if you could! Find the thing where you slip into flow easily and do the most efficient thing that's at the same time quite similar to this activity.

comment by [deleted] · 2015-07-12T09:20:52.697Z · LW(p) · GW(p)

Many of my reasons are grounded in wanting to be normal. My appreciation for normality is grounded in many antecedent assumptions, like being broadly attractive for conversation with many people, which I didn't think to question until I recognised that prior assumptions are consequential to the others. For this particular example, given that I have new information that most people aren't of consequence to me, that I don't value them, I shouldn't value normality so much. I guess this is a bit like an intuitive explanation of the self-help implication of Bayes' theorem!

comment by [deleted] · 2015-07-02T23:34:02.307Z · LW(p) · GW(p)

Somehow I thought Visiting Fellows at MIRI do more...

comment by preferredanonymous · 2011-11-24T04:08:16.025Z · LW(p) · GW(p)

"We run on corrupted hardware: our minds are composed of many modules, and the modules that evolved to make us seem impressive and gather allies are also evolved to subvert the ones holding our conscious beliefs. Even when we believe that we are working on something that may ultimately determine the fate of humanity, our signaling modules may hijack our goals so as to optimize for persuading outsiders that we are working on the goal, instead of optimizing for achieving the goal!"

I'm sorry, but while I agree wholeheartedly with this assessment, your article is more of an interesting examination of this principle than a solution, or even any new assessment. Understanding that we are flawed, selfish creatures is only the first step of many to getting anywhere, one that most of us will never get past.

I've never tried it myself, but to offer a solution to this mess, I think it would be interesting to examine the effect of Radical Honesty upon such problems.

Another way of putting it: when and where, exactly, is privacy justified?

comment by Dorikka · 2011-11-18T03:02:43.518Z · LW(p) · GW(p)

People may think that they're motivated to study because they want to increase their earnings, but then they don't actually achieve much in their studies. In reality, they might be only motivated to give the impression of being the kind of person who studies hard in order to increase their earnings, and looking like they work hard to study is enough to give this impression.

If you have data on whether studying is an optimal way to increase earnings, let me know or link me in the right direction, because it may have a significant impact on what I'm doing.

Replies from: HaroldTur
comment by HaroldTur · 2018-01-31T01:40:46.163Z · LW(p) · GW(p)

Some are motivated to study, then work where they underwent their internship, and save money for EU residency through a property purchase (https://tranio.com/greece/), in Greece for instance.

comment by Armok_GoB · 2011-11-17T12:41:45.572Z · LW(p) · GW(p)

I seem to have a quite good intuition for handling these types of problems (it might actually be related to especially poor mental health; I get a lot of practice and extreme examples of things). The problem is that it's not introspectively available or communicable; only its outputs for specific instances are, and quite often not even those. For the same reason, actually checking how well it works, or even defining what that would mean, isn't possible; the only evidence it works is my meta-intuition about what intuitions can be trusted. (The meta-intuition has been sort of tested; it tends to be instrumentally surprisingly useful to trust, but not always technically accurate.)

comment by XiXiDu · 2011-11-17T11:47:54.768Z · LW(p) · GW(p)

My biggest problem is really that I can't get myself to donate a lot of money. That decision would be met with disbelief by my surroundings. I also fear that, at some point, I might need the money. Otherwise I would have already donated a lot more to the Singularity Institute years ago. As of today I have only donated 3 times, a laughable amount of $30.

And other than money? That takes up a lot of time and effort that I am currently unable to spare.

Replies from: John_Maxwell_IV, juliawise
comment by John_Maxwell (John_Maxwell_IV) · 2011-11-22T08:36:02.037Z · LW(p) · GW(p)

You might be suffering from the endowment effect. To test this, you could pretend that a friend of yours found $10,000 but had no need for it, and was asking you whether he should give it to you or SIAI. If you would recommend that the friend donate to SIAI, but you choose to add your next $10,000 of disposable income to your bank account, I don't see much of any explanation outside the endowment effect.

Or, for a more radical thought experiment, ask yourself how much of SIAI's budget you would reallocate to your personal bank account given the chance. (This mimics the reversal test proposed by Nick Bostrom.) Remember, which people currently hold the funds should not, in theory, have any impact on what your preferred allocation of those funds would be.

(Side note: I think I just realized how powerful the endowment effect is.)

Replies from: XiXiDu
comment by XiXiDu · 2011-11-22T09:49:05.117Z · LW(p) · GW(p)

If you would recommend that the friend donate to SIAI...

Yes, I would.

Or, for a more radical thought experiment, ask yourself how much of SIAI's budget you would reallocate to your personal bank account given the chance.

I wouldn't reallocate any money to my personal bank account.

I don't see much of any explanation outside the endowment effect.

Okay, maybe I can overcome that at some point. But I think other factors like the predicted reaction of my surroundings also weigh heavily in my opinion.

comment by juliawise · 2011-11-17T21:21:33.088Z · LW(p) · GW(p)

That decision would be met with disbelief by my surroundings.

Do you mean that people around you would not believe you were donating? Or would not think your cause a good one? Or would tell you that large donations are strange or a bad idea?

I'd really be interested to know, since I've recently started writing on the topic. Hardly anyone is willing to say why they don't give.

Replies from: Sophronius, Giles, None, soreff, XiXiDu
comment by Sophronius · 2011-11-17T23:00:43.635Z · LW(p) · GW(p)

For me:

1) The funds I have I might need later, and I'm not willing to take chances on my future.
2) Uncertainty as to whether the money would do much good.
3) Selfishness; my own well-being feels more important than a cause that might save everyone.
4) Potential benefits from SIAI are far, and there's no societal pressure for donating money.
5) I like money. Giving money hurts. It's supposed to go in my direction, dammit.

I do anticipate that if I were to attain a sufficient amount of money (i.e. enough to feel that I don't need to worry about money ever again) I would donate, though probably not exclusively to SIAI.

Edit: Looking at those five points, an idea forms. What if there was a post exclusively for people to comment and say that they donated (possibly specifying the amount), and receive karma in return? Strange as it is, karma might be more motivating (near) for some people than the prospect of saving humanity (far!). Just like the promise of extra Harry Potter fan-fiction.

Replies from: None, John_Maxwell_IV
comment by [deleted] · 2011-11-18T01:36:30.766Z · LW(p) · GW(p)

What if there was a post exclusively for people to comment and say that they donated (possibly specifying the amount), and receiving karma in return?

We did that once.

The results were weird.

comment by John_Maxwell (John_Maxwell_IV) · 2011-11-22T07:47:30.729Z · LW(p) · GW(p)

It looks as though about half of your objections have to do with SIAI and the other half have to do with charitable donation in general. In my opinion, there are very strong arguments for donating to charity even if the best available charity is much less effective than SI claims to be. If you find these arguments persuasive, you could separate your charitable giving into 2 phases: during the 1st phase, you would establish some sort of foundation and begin gradually redirecting your income/assets to it. During the 2nd phase, you could attempt to figure out what the optimal charity was and donate to it.

I've found that this breaking up of a difficult task into phases helps me a lot. For example, I try to separate the brainstorming of rules for myself to follow and their actual implementation into separate phases.

comment by Giles · 2011-11-17T22:44:05.244Z · LW(p) · GW(p)

Hardly anyone is willing to say why they don't give.

My brain tells me that the reason I've only given a tiny fraction of my wealth so far is that there is still valuable information to be learned regarding whether SIAI is the best charity to donate to.

  • I feel fairly sure that I haven't yet acquired/processed all of the relevant information
  • I feel fairly sure that the value of that information is still high
  • I feel that I'm not choosing to acquire that information as quickly as I could be
  • I'm not sure what will happen when the value of information dips below the value of acting promptly - whether I'll start giving massively or move onto the next excuse, i.e. I'm not sure I trust my future self.
Replies from: AnnaSalamon, juliawise
comment by AnnaSalamon · 2011-11-18T00:19:07.418Z · LW(p) · GW(p)

What have you done to seek info as to which charity is best? Or what do you plan to do in the next few weeks?

Replies from: Giles, XiXiDu
comment by Giles · 2011-11-18T02:07:40.769Z · LW(p) · GW(p)

Recently I've been in touch with Nick Beckstead from Giving What We Can, who told me at the Singularity Summit that the org was starting a project to research unconventional (but potentially higher impact) charities including x-risk mitigation. I may be able to help him with that project somehow but right now I'm not quite sure.

I'm also starting a Toronto LW singularity discussion group next week - I don't know how much interest in optimal philanthropy there is in Toronto, but I'm hoping that I can at least improve my understanding of this particular issue, as well as having a meatspace community who I can rely on to understand/support what I'm trying to do.

This is definitely an above-average week in terms of sensible things done though.

Some more background: I have a long history of not getting much done (except in my paid job) and have been depressed (on and off) since realising that x-risk was such a major issue and that optimal philanthropy was what I really wanted to be doing.

comment by XiXiDu · 2011-11-18T10:28:49.374Z · LW(p) · GW(p)

What have you done to seek info as to which charity is best?

What have you done? I would really like to hear how various SI members came to believe what they believe now.

How did you learn about risks from AI? Have you been evaluating charities and learning about existential risks? What did you do next, read all available material on AGI research?

I can't imagine how someone could possibly be as convinced as the average SI member without first becoming an expert when it comes to AI and complexity theory.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2011-11-18T11:04:51.341Z · LW(p) · GW(p)

I ran across the Wikipedia article about the technological singularity when I was still in high school, maybe around 2004. From there, I found Staring into the Singularity, SL4, the Finnish Transhumanist Association, others.

My opinions about the Singularity have been drifting back and forth, with the initial enthusiasm and optimism being replaced by pessimism and a feeling of impending doom. I've been reading various things on and off, as well as participated in a number of online discussions. Mostly my relatively high certainty is based on the simple fact that people have been unable to refute the core claims, and so have I. I don't think it's just confirmation bias either, because plenty of my other opinions have changed over the same time period, and because I've on many occasions been unwilling to believe in the issue but then had no choice but to admit the facts.

Replies from: XiXiDu
comment by XiXiDu · 2011-11-18T11:58:15.467Z · LW(p) · GW(p)

Mostly my relatively high certainty is based on the simple fact that people have been unable to refute the core claims, and so have I.

Which paper or post outlines those core claims? I am not sure what they are.

...because plenty of my other opinions have changed over the same time period...

I find it very hard to pinpoint when and how I changed my mind about what. I'd be interested to hear some examples to compare my own opinion on those issues, thanks.

I've on many occasions been unwilling to believe in the issue but then had no choice but to admit the facts.

What do you mean by that? What does it mean for you to believe in the issue? What facts? Personally I don't see how anyone could possibly justify not believing that risks from AI are a possibility. At the same time I think that some people are much more confident than the evidence allows them to be. Or I am missing something.

The SI is an important institution doing very important work that deserves much more monetary support and attention than it currently gets. The same is true for the FHI and existential risk research. But that's all there is to it. The fanaticism and the portrayal as world saviours, e.g. "I feel like humanity's future is in good hands", really make me sick.

Replies from: Kaj_Sotala, juliawise
comment by Kaj_Sotala · 2011-11-19T09:05:21.901Z · LW(p) · GW(p)

Which paper or post outlines those core claims? I am not sure what they are.

Mostly just:

  • AGI might be created within my lifetime
  • When AGI is created, it will eventually take control of humanity's future
  • It will be very hard to create AGI in such a way that it won't destroy almost everything that we hold valuable

I find it very hard to pinpoint when and how I changed my mind about what. I'd be interested to hear some examples to compare my own opinion on those issues, thanks.

Off the top of my head:

  • I stopped being religious (since then I've alternated between various degrees of "religion is idiotic" and "religion is actually kinda reasonable")
  • I think it was around this time that I did a pretty quick heel-turn from being a strong supporter of the current copyright system to wanting to see the whole system drastically reformed (been refining and changing my exact opinions on the subject since then)
  • I used to be very strongly socialist (in the Scandinavian sense) and thought libertarians were pretty much crazy, since then I've come to see that they do have a lot of good points
  • I used to be very frustrated by people behaving in seemingly stupid and irrational ways; these days I'm a lot less frustrated, since I've come to see the method in the madness. (E.g. realizing the reason for some of the (self-)signaling behavior outlined in this post makes me a lot more understanding of people engaging in it.)

What do you mean by that? What does it mean for you to believe in the issue. What facts?

Things like:

  • Thinking that "oh, this value problem can't really be that hard, I'm sure it'll be solved" and then realizing that no, the value problem really is quite hard.
  • Thinking that "well, maybe there's no hard takeoff, Moore's law levels off and society will gradually adapt to control the AGIs" and then realizing that even if there were no hard takeoff at first, it would only be a matter of time before the AGIs broke free of human control. Things might be fine and under control for thirty years, say, and just when everyone is getting complacent, some computing breakthrough suddenly lets the AGIs run ten times as fast and then humans are out of the loop.
  • Thinking that "well, even if AGIs are going to break free of human control, at least we can play various AGIs against each other" and then realizing that this will only get humans caught in the crossfire; various human factions fighting each other hasn't allowed the chimpanzees to play us against each other very well.
comment by juliawise · 2011-11-18T13:49:14.492Z · LW(p) · GW(p)

I became more convinced this was important work after talking to Anna Salamon. After talking to her and other computer scientists, I came to think that a singularity is somewhat likely, and that it would be easy to screw up with disastrous consequences.

But evaluating a charity doesn't just mean deciding whether they're working on an important problem. It also means evaluating their chance of success. If you think SIAI has no chance of success, or is sure to succeed given the funding they already have, there's no point in donating. I have no idea how likely it is that they'll succeed, and don't know how to get such information. Holden Karnofsky's writing on estimate error is relevant here.

Replies from: XiXiDu
comment by XiXiDu · 2011-11-18T15:01:43.420Z · LW(p) · GW(p)

If you think SIAI has no chance of success, or is sure to succeed given the funding they already have, there's no point in donating.

I agree, a very important point.

I became more convinced this was important work after talking to Anna Salamon.

I have read very little from her when it comes to issues concerning SI's main objective. Most of her posts seem to be about basic rationality.

She tried to start a webcam conversation with me once but my spoken English was just too bad and slow to have a conversation about such topics.

And even if I talked to her, she could tell me a lot and I would be unable to judge if what she says is more than internally consistent, if there is any connection to actual reality. I am simply not an AGI expert, very far from it. The best I can do so far is judge her output relative to what others have to say.

Replies from: juliawise
comment by juliawise · 2011-11-19T13:40:25.828Z · LW(p) · GW(p)

I'm also far from an expert in this field - I didn't study anything technical, and didn't have many friends who did, either. At the time I spoke to Anna, I wasn't sure how to judge whether a singularity was even possible. At her suggestion, I asked some non-LW computer scientists (her further suggestion was to walk into office hours of a math or CS department at a university, which I haven't done). They thought a singularity was fairly likely, and obviously hadn't thought about any dangers associated with it. From reading Eliezer's writings I'm convinced that a carelessly made AI could be disastrous. So from those points, I'm willing to believe that most computer scientists, if they succeeded in making an AI, would accidentally make an unfriendly one. Which makes me think SIAI's cause is a good one.

But after reading GiveWell's interview with SIAI, I don't think they're the best choice for my donation, especially since they say they don't have immediate plans for more funding at this time. I'll probably go with GiveWell's top pick once they come out with their new ratings.

comment by juliawise · 2011-11-18T02:59:32.749Z · LW(p) · GW(p)

The question of how much information is enough is difficult for me, too. My plan right now is to give a good chunk of money every six months or so to whichever charity I think best at that time. That way I stay in the habit of giving (reducing chances that my future self will fail to start) and it gives me a deadline so that I actually do some research.

comment by [deleted] · 2011-11-18T01:33:16.655Z · LW(p) · GW(p)

Hardly anyone is willing to say why they don't give.

Because GiveWell says I can do better.

Replies from: juliawise
comment by juliawise · 2011-11-18T02:35:56.897Z · LW(p) · GW(p)

Sorry, I didn't mean "give to SIAI." I mean give to whatever cause you think best. I agree that GiveWell is a good tool.

Replies from: XiXiDu
comment by XiXiDu · 2011-11-18T10:16:37.692Z · LW(p) · GW(p)

Sorry, I didn't mean "give to SIAI." I mean give to whatever cause you think best.

I don't care enough about myself and I am not really an altruist either.

Replies from: juliawise
comment by juliawise · 2011-11-19T13:42:49.660Z · LW(p) · GW(p)

From your initial post in this thread, I doubt that your true rejection is "I don't care about anything."

comment by soreff · 2011-11-17T23:03:11.112Z · LW(p) · GW(p)

Hardly anyone is willing to say why they don't give.

In my case, reducing existential risks isn't high on my priority list. I don't claim to be an altruist. I donate blood, but this has the advantage from my point of view that it is bounded, and visible, and local in both time and space.

Replies from: juliawise
comment by juliawise · 2011-11-18T02:39:58.127Z · LW(p) · GW(p)

I understand why visibility is an advantage, and possibly boundedness. What is better about local?

Replies from: soreff
comment by soreff · 2011-11-18T14:39:10.535Z · LW(p) · GW(p)

I'm guessing that reciprocity is more likely to work locally. If nothing else, it is spread across a smaller population. (I should add: Locality isn't cleanly orthogonal to visibility as a criterion. I'd guess that they have a considerable correlation.)

comment by XiXiDu · 2011-11-18T10:08:53.421Z · LW(p) · GW(p)

Or would tell you that large donations are strange or a bad idea?

They would think that I have gone mad and would probably be mad at me as a result.

Replies from: juliawise
comment by juliawise · 2011-11-19T13:47:33.404Z · LW(p) · GW(p)

For the last ~12 years, most of my money has gone to donations or tuition. During this time I've maintained good relationships with friends and family, met and married a man with an outlook similar to mine, enjoyed lots of inexpensive pleasures, and generally had a good life. People thinking I'm mad or being mad at me has not been a problem. I blog on the topic.

comment by quen_tin · 2011-11-17T20:24:56.644Z · LW(p) · GW(p)

Of course, being appreciated for what you do is important, because goals are mainly social features. Having a goal for oneself and only oneself is simply absurd: when you die it's all gone. Nobody wants to achieve something that he/she can't share with others, because everyone knows that someone alone counts for nothing.

So there is no problem here. You always work for gratitude, but unless you are a very cynical person, you'll always want to deserve that gratitude for real, and that's what matters most here.

Replies from: Fleisch
comment by Fleisch · 2011-11-17T22:41:00.088Z · LW(p) · GW(p)

This is either a very obvious rationalization, or you don't understand Kaj Sotala's point, or both.

The problem Kaj Sotala described is that people have lots of goals, and important ones too, simply as a strategic feature, and they are not deeply motivated to do something about them. This means that most of us who came together here because we think the world could really be better will in all likelihood not achieve much, because we're not deeply motivated to do something about the big problems. Do you really think there's no problem at hand? Then that would mean you don't really care about the big problems.

Replies from: quen_tin, quen_tin
comment by quen_tin · 2011-11-18T09:43:27.676Z · LW(p) · GW(p)

I deny that having a goal as a "strategic feature" is incompatible with being sincerely and deeply motivated. That's my point.

More precisely: either one is consciously seeking gratitude, in which case he/she is cynical, but I think this is rarely the case; or seeking gratitude is only one aspect of a goal that is sincerely pursued (which means that one wants to deserve that gratitude for real). Then there is no problem; the motivation is there.

Replies from: Fleisch
comment by Fleisch · 2011-11-18T20:52:49.703Z · LW(p) · GW(p)

I deny that having a goal as a "strategic feature" is incompatible with being sincerely and deeply motivated. That's my point.

Then you aren't talking about the same thing as Kaj Sotala. He talks about all the cases where it seems to you that you are deeply motivated, but the goal turns out to be, or gets turned into, nothing beyond strategic self-deception. Your point may be valid, but it is about something other than what his post is about.

Replies from: quen_tin, quen_tin
comment by quen_tin · 2011-11-20T14:59:50.863Z · LW(p) · GW(p)

I don't draw a distinction between having a goal and seeking gratitude for that goal; they are exactly the same for me. Something is important if it deserves a lot of gratitude; something is not if it does not. That's all. The "gratitude" part is intrinsic.

If you accept my view, Kaj Sotala's statement is nonsense: it can't turn out to be strategic self-deception when we thought we were deeply motivated, because we're seeking gratitude from the start (which is precisely what "being deeply motivated" means). If at some point we discover that we've been looking for gratitude all that time, then we don't discover that we've been fooling ourselves; we're only beginning to understand the true nature of any goal.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2011-11-20T20:26:47.265Z · LW(p) · GW(p)

Like Wei Dai said - the core problem (at least in my case) wasn't in the prestige-seeking by itself, it was in the cached and incorrect thoughts about what would lead to prestigious results, and the fact that those cached thoughts hijacked the reasoning process. If I had stopped to really think about whether the actions made any sense, I should have realized that such actions wouldn't lead to prestige, they would lead to screwing up (in the first and second example, at least). But instead I just went with the first cliché of prestige that my brain associated with this particular task.

If I had actually thought about it, I would have realized that there were better ways of both achieving the goal and getting prestige... but because my mind was so focused on the first cliché of prestige that came up, I didn't want to think about anything that would have suggested I couldn't do it. I subconsciously believed that if I didn't get prestige this way, I wouldn't get it in any other way either, so I pushed away any doubts about it.

Replies from: quen_tin
comment by quen_tin · 2011-11-22T19:46:31.408Z · LW(p) · GW(p)

Maybe I misunderstood a bit your point. I understood:

  • "I thought I wanted to work for a great cause but it turned out I only wanted to be the kind of person who works for a great cause"

Now I understand:

  • "I really wanted to work for a great cause, but it turned out all my actions were directed toward giving the impression, in the short-term, that I was"

In other words, you were talking about shortsightedness when I thought it was delusion?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2011-11-23T10:39:08.147Z · LW(p) · GW(p)

Yes, I think you could put it that way.

comment by quen_tin · 2011-11-20T17:27:19.670Z · LW(p) · GW(p)

Imagine that in the current discussion, we suddenly realize that we've been writing all this time not to find the truth, but to convince each other (which I think is actually the case). It would be one of those situations where someone like Kaj Sotala would say: "it seems you're deeply motivated to find the truth, but you're only trying to make people think you have the truth (=convince them)". Then my point would be: unless you're cynical, convincing and finding the truth are exactly the same. If you're cynical, you just think short term and your truth won't last (people will soon realize you were wrong). If you're sincere, you think long term and your truth will last. I would even argue that the only proper definition of truth is: what convinces most people in the long run. Similarly, a proper definition of good (or "important to do") would be: what brings gratitude from most people in the long run.

Replies from: Fleisch, lessdazed, TheOtherDave
comment by Fleisch · 2011-11-20T22:51:36.117Z · LW(p) · GW(p)

I think that defocussing a bit and taking the outside view for a second might be clarifying, so let's not talk about what it is exactly that people do.

Kaj Sotala says that he has identified something which constitutes a major source of problems, with example problems a) through f), all very real problems like failing charities and people being unable to work from home. Then you come along and say "there is no problem here," that everything boils down to us just using the wrong definition of motivation (or something). But what about the charities that can't find anyone to do their mucky jobs? What about the people who could offer great service and/or reduce their working hours by working from home, if only they could get themselves to do it? Where does your argument solve these problems?

The reason I reacted to your post was not that I saw the exact flaw in your argument. The reason I answered is that I saw that your argument doesn't solve the problem at hand; in fact, it fails to even recognize it in the first place.

I think that you are probably overvaluing criticism. If so, you can increase the usefulness of your thoughts significantly if you stop yourself from paying much attention to flaws and try to identify the heart of the material first, and only apply criticism afterwards, and even then only if it's worth it.

Replies from: quen_tin
comment by quen_tin · 2011-11-21T07:53:05.723Z · LW(p) · GW(p)

Sorry, but I am only refining the statement I made from the start, which in my view is still perfectly relevant to the material. You don't agree with me; now let's not lose too much time on meta-discussions...

I understand your concern about the problems mentioned in the article, and your feeling that I don't address them. You're right, I don't: my feeling about these problems is that they occur in complex situations where lots of actors are involved, and I am not convinced at all that they result from a lack of motivation or a problem of unconscious motivation hijacking.

comment by lessdazed · 2011-11-20T19:54:16.236Z · LW(p) · GW(p)

Kaj Sotala would say: "it seems you're deeply motivated in finding the truth, but you're only trying to make people think you have the truth (=convince them)".

You think he would make the mistake of thinking there is only one motivation behind each human action?

comment by TheOtherDave · 2011-11-20T19:23:19.082Z · LW(p) · GW(p)

I would even argue that the only proper definition of truth is: what convinces most people in the long run.

Just to clarify: consider two competing theories T1 and T2 about what will happen to the Earth's surface after all people have died. You would argue that if T1 is extremely popular among living people prior to that time, and T2 is unpopular, then that's all we need to know to conclude that T1 is more true than T2. Further, if all other competing theories are even less popular than T2, then you would argue that T1 is true and all other competing theories false. What actually happens to the Earth's surface is completely irrelevant to the truth of T1.

Have I understood you?

Replies from: quen_tin
comment by quen_tin · 2011-11-21T08:04:05.066Z · LW(p) · GW(p)

This is a bit of a caricature. I made my statement as simple as possible for the sake of the argument, but I subscribe to the pragmatic theory of truth.

Replies from: TheOtherDave, lessdazed
comment by TheOtherDave · 2011-11-21T15:08:02.222Z · LW(p) · GW(p)

I made my example extreme to make it easy for you to confirm or refute. But given your refutation, I honestly have no idea what you mean when you suggest that the only proper definition of truth is what convinces the most people in the long run. It sure sounds like you're saying that the truth about a system is a function of people's beliefs about that system rather than a function of the system itself.

Replies from: quen_tin
comment by quen_tin · 2011-11-21T15:42:49.747Z · LW(p) · GW(p)

Yes, in a sense. The pragmatic conception of truth holds that we do not have access to an absolute truth, nor to any system as it is "in itself", but only to our beliefs and representations of systems. All we can do is test our beliefs and the accuracy of our representations.

Within that conception, a belief is true if "it works": that is, if it can be successfully tested against other established belief systems and serve as a basis for action with expected results (e.g. scientific inquiry). On this view, there is no truth outside our beliefs, and truth is always temporary. A truth could be considered universal if it could convince everyone.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-11-21T17:51:38.237Z · LW(p) · GW(p)

I'm entirely on board with endorsing beliefs that can successfully serve as a basis for action with expected results by calling them "true," and on board with the whole "we don't have access to absolutes" thing.

I am not on board with endorsing beliefs as "true" just because I can convince other people of them.

You seem to be talking about both things at once, which is why I'm confused.

Can you clarify what differences you see (if any) between "it works/it serves as a reliable basis for action" on the one hand, and "it can convince people" on the other, as applied to a belief, and why those differences matter (if they do)?

Replies from: quen_tin
comment by quen_tin · 2011-11-22T16:35:39.402Z · LW(p) · GW(p)

In my view, going from subjective truth to universal (inter-subjective) truth requires agreement between different people, that is, convincing others (or being convinced). I hold a belief because it is reliable for me. If it is reliable for others as well, then they'll probably agree with me. I will convince them.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-11-22T17:40:22.221Z · LW(p) · GW(p)

So, at the risk of caricaturing your view again, consider the following scenario:

At time T1, I observe some repeatable phenomenon X. For the sake of concreteness, suppose X is my underground telescope detecting a new kind of rock formation deep underground that no person has ever before seen... that is, I am the discoverer of X.

At time T2, I publish my results and show everyone X, and everyone agrees that yes, there exists such a rock formation deep underground.

If I've understood you correctly, you would say that if B is the belief that there exists such a rock formation deep underground, then at T1 B is "subjectively true," prior to T1 B doesn't exist at all, and at T2 B is "inter-subjectively or universally true". Is that right?

Let's call NOT(B) the denial of B -- that is, NOT(B) is the belief that such rock formations don't exist.

At times between T1 and T2, when some people believe B and others believe NOT(B) with varying degrees of confidence, what is the status of B in your view? What is the status of NOT(B)? Are either of those beliefs true?

And if I never report X to anyone else, then B remains subjectively true, but never becomes inter-subjectively true. Yes?

Now suppose that at T3, I discover that my tools for scanning underground rock formations were flawed, and upon fixing those tools I no longer observe X. Suppose I reject B accordingly. I report those results, and soon nobody believes B anymore.

On your view, what is the status of B at T3? Is it still intersubjectively true? Is it still subjectively true? Is it true at all?

Does knowing the status of B at T3 change your evaluation of the status of B at T2 or T1?

Replies from: quen_tin
comment by quen_tin · 2011-11-22T18:15:57.026Z · LW(p) · GW(p)

At T1, B is "subjectively true" (I believe that B). However, it is not an established truth: from the point of view of society as a whole, the result needs replication; what if I were deceiving everyone? At T2, B is controversial. At T3, B is false.

Now, does the status of B change over time? That's a good question. I would say that the status of B is contextual. B was true at T1 to the extent of the actions I had performed at that time. It was "weakly true" because I had not checked every flaw in my instruments. It became false in the context of T3. Similarly, one could say that Newtonian physics is true in the context of low speeds and low energies.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-11-22T18:33:26.885Z · LW(p) · GW(p)

OK, thanks for clarifying; I think I understand your view now.

comment by lessdazed · 2011-11-21T15:57:12.928Z · LW(p) · GW(p)

...

...

...

I made my statement as simple as possible

"They fuck you up, count be wrong"

Kid in The Wire, Pragmatist Special Episode, when asked how he could keep count of how many vials of crack were left in the stash but couldn't solve the word problem in his math homework.

comment by quen_tin · 2011-11-18T14:45:14.351Z · LW(p) · GW(p)

Let me rephrase.

The assumption that purely gratitude-free goals exist is a myth: pursuing such goals would be absurd. (People who seem to perform gratitude-free actions are often religious people: they actually believe in divine gratitude.)

Therefore, social gratitude is an essential component of any goal; it is not correlated with a lack of sincere motivation, nor does it "downgrade" the goal to something less important. It's just part of it.

Replies from: Vladimir_Nesov, Randolf, wedrifid
comment by Vladimir_Nesov · 2011-11-21T18:56:52.982Z · LW(p) · GW(p)

See Fake Selfishness.

Replies from: quen_tin
comment by quen_tin · 2011-11-22T16:55:40.270Z · LW(p) · GW(p)

Seeking gratitude has nothing to do with selfishness; on the contrary, something usually deserves gratitude because it benefits others. My position is very altruistic.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-11-22T17:12:17.782Z · LW(p) · GW(p)

The error in reasoning is analogous.

Replies from: quen_tin
comment by quen_tin · 2011-11-22T17:36:18.981Z · LW(p) · GW(p)

I don't think so. Let me clarify that my thoughts are to be understood from an ethical perspective: by "goal" I mean something that deserves to be done, in other words "something good". I start from the assumption that having a goal presupposes thinking that it is somehow good (= something I should do), which is almost tautological.

Now I am only suggesting that a goal that does not deserve any gratitude can't be "good" from an ethical point of view.

Moreover, I am not claiming that I am purely seeking gratitude in all my actions.

Replies from: TimS
comment by TimS · 2011-11-22T17:58:52.760Z · LW(p) · GW(p)

We seem to be having a definitional problem. Perhaps if you taboo the word gratitude, then we might understand your position better.

Replies from: quen_tin
comment by quen_tin · 2011-11-22T18:40:34.298Z · LW(p) · GW(p)

OK. I would replace "being grateful for an action" with "recognizing that an action is important/beneficial". Pursuing a purely gratitude-free goal would mean pursuing a goal that nobody thinks is beneficial or important to do (except you, because you do it), and that supposedly nobody ever will. My claim is that such an action is absurd from an ethical (universalist) perspective.

Replies from: TimS
comment by TimS · 2011-11-22T19:20:40.754Z · LW(p) · GW(p)

I don't understand what you mean by "absurd."

Bob tells you that he is going to climb a boring and uninteresting mountain because he randomly feels like it. There's nothing to see there that couldn't be seen elsewhere, and everyone else thinks that climbing that mountain is pointless. Omega verifies that Bob has no other motivation for climbing the mountain.

Would you say that Bob's desire to climb the mountain is (a) mentally defective (i.e. insane), (b) immoral, (c) impossible, (d) not relevant to your point, or (e) something else?

Replies from: quen_tin
comment by quen_tin · 2011-11-22T20:16:13.626Z · LW(p) · GW(p)

What do you mean by "randomly feels like it"? Maybe he wants some fresh air or something... Then it's a personal motivation, and my answer is (d): not relevant to ethics. The discussion in this article was not, I think, about casual goals like climbing a mountain, but about the goals of your life, the important things to do (maybe I should use the term "finalities" instead). It was a matter of ethics.

If Bob believes that climbing this mountain is good or important while he admits that his only motivation is "randomly feeling like it", then I call his belief absurd.

comment by Randolf · 2011-11-21T17:18:38.839Z · LW(p) · GW(p)

I'm afraid you are making a very strong statement with hardly any evidence to support it. You merely claim that people who pursue gratitude-free goals are often religious people (source?) and that such goals are a myth and absurd. (Why?) I, for one, don't understand why such a goal would necessarily be absurd.

Also, I can imagine that even if I were the only person in the world, I would still pursue some goals.

Replies from: quen_tin
comment by quen_tin · 2011-11-21T17:40:04.393Z · LW(p) · GW(p)

It's absurd from an ethical point of view, as a finality. I was implicitly talking in the context of pursuing "important goals", that is, goals valued on an ethical basis. Abnegation at some level is an important part of most religious doctrines.

Replies from: lessdazed, TimS
comment by lessdazed · 2011-11-21T17:50:26.572Z · LW(p) · GW(p)

What prediction about the world can you make from these beliefs? What would be less - or more - surprising to you than to those with typical beliefs here?

Replies from: quen_tin, quen_tin
comment by quen_tin · 2011-11-22T16:19:14.573Z · LW(p) · GW(p)

Ethics is not about predicting perceptions but about guiding action.

comment by quen_tin · 2011-11-22T17:07:04.983Z · LW(p) · GW(p)

Let me justify my position.

Gratitude-free actions are absurd from an ethical point of view because we do not have access to any transcendent and absolute notion of "good". Consequently, we have no way to tell whether an action is good if no one is grateful for it.

If you perform a gratitude-free action, either it is only good for you, in which case you are selfish, which is far from the universal aim of ethics; or you believe in a transcendent notion of "good", together with divine gratitude, which is a religious position.

comment by TimS · 2011-11-21T18:24:59.623Z · LW(p) · GW(p)

Is the following a reasonable paraphrase of your position:

If game theoretic considerations do not justify behaving in a way labelled "altruistic," then there is no reason to behave in "altruistic" ways.

Replies from: quen_tin
comment by quen_tin · 2011-11-22T16:45:39.697Z · LW(p) · GW(p)

On the contrary, my view is very altruistic: seeking gratitude is seeking to perform actions that benefit others or society as a whole. Game-theoretic considerations would justify being selfish, which does not deserve gratitude at all.

comment by wedrifid · 2011-11-18T14:54:50.998Z · LW(p) · GW(p)

Therefore, social gratitude is an essential component of any goal; it is not correlated with a lack of sincere motivation

That doesn't follow. Degree of sincerity and degree of social gratitude may well be correlated. The fact that motivations are seldom pure doesn't change that; it just makes the relationship more grey.

Replies from: quen_tin
comment by quen_tin · 2011-11-20T15:10:19.174Z · LW(p) · GW(p)

I see no difference between seeking social gratitude and having a goal. In my view, sincere motivation is positively correlated with seeking social gratitude. You can make an analogy with markets if you like: social gratitude is money, motivation is work. If something is worth doing, it will deserve social gratitude. The author here seems to me to be complaining that we work for money rather than for free...