A Challenge for LessWrong
post by simplicio · 2010-06-29T23:47:39.284Z
The user divia, in her most excellent post on spaced repetition software, quotes Paul Buchheit as saying
"Good enough" is the enemy of "At all"!
This is an important truth which bears repetition, and to which I shall return.
"Rationalists should win"
Many hands have been wrung hereabouts on the subject of rationality's instrumental value (or lack thereof) in the everyday lives of LWers. Are we winning? Some consider this doubtful.1
Now, I have a couple of issues with the way the question is framed.
- Benefits of rationality are often negative benefits - in the sense that they will involve not being stupid as opposed to being especially smart. But "Why I didn't take on that crippling mortgage" doesn't make a very good post.
- Weapons-grade rationality à la LessWrong is a refinement to the reactor-grade rationality practiced by self-described skeptics - for most cases, it is not a quantum leap forward. The general skeptic community is already winning in certain senses (e.g., a non-religious outlook correlates strongly with income and level of education), although causal direction is hard to determine.
- Truth-seeking is ethical for its own sake.
- I, for one, am having a hell of a good time! I count that as a win.
Nonrandom acts of rationality
The LessWrong community finds itself in the fairly privileged position of being (1) mostly financially well-off; (2) well-educated and articulate; (3) connected; (4) of non-trivial size. Therefore, I would like to suggest a project for any & all users who might be interested.
Let us become a solution in search of problems.
Perform one or more manageable & modest rationally & ethically motivated actions between now and July 31, 2010 (indicate intent to participate, and brainstorm, below). These actions must have a reasonable chance of being an unequivocal net positive for the shared values of this community. Finally, post what you have done to this thread's comments, in as much detail as possible.
Some examples:
- Write a letter on behalf of Amnesty International in support of their anti-torture campaigns.
- Make an appointment to give blood.
- Contact and harangue one of your elected representatives. For example, I may write to my Minister of Health about the excellent harm-reduction work being done in Vancouver by Insite, a safe-injection site for IV drug users whose efficacy in decreasing public drug use and successfully referring patients to detox has been confirmed in published articles in the Lancet and New England Journal of Medicine. (Insite is controversial, with people like the previous minister opposing it for purely ideological reasons. Politics is the people-killer.)
- Donate a one-time amount somewhere around 10% of your weekly disposable income to a reputable charity - I may go with Spread the Net - or to an organization promoting our values in your own area (e.g., the NCSE, or indeed the SIAI).
- Give your Air Miles to the Amanda Knox Defense Fund.
What about LessWrong acting as a group?
I would love to see a group-level action on our part occur; however, after some time spent brainstorming, I haven't hit upon any really salient ones that are not amenable to individual action. Perhaps a concerted letter-writing campaign? I suspect that is a weak idea, and that there are much better ones out there. Who's up for world-optimization?
Potential objection
These actions are mostly sub-optimal, consequentially speaking. The SIAI/[insert favourite cause here] is a better idea for a donation, since it promises to solve all the above problems in one go. These are just band-aids.
172 comments
Comments sorted by top scores.
comment by wedrifid · 2010-06-30T13:14:07.968Z · LW(p) · GW(p)
The examples listed are not rational. They are examples of 'altruism' for the sake of a 'warm feeling' and signalling. Writing a letter, ringing a politician or giving blood are not actions that maximise your altruistic preferences!
You have responded to this 'Potential Objection' with the "better than nothing" argument but even with that in mind this is not about being rational. It is just a bunch of do-gooders exhorting each other to be more sacrificial. When we used to do this at church we would say it was about God... and premising on some of the accepted beliefs that may have been rational. But it definitely isn't here.
I make a call for a different response. I encourage people to resist the influence, suppress the irrational urge to take actions that are neither optimal signals nor an optimal instrument for satisfying their altruistic values.
This isn't a religious community, and 'rational' is not, or should not be, just the local jargon for 'anything asserted to be morally good'.
If my preferences were such that I valued eating babies then it would be rational for me to eat babies. Rational is not nice, good, altruistic or self sacrificial. It just is.
Replies from: komponisto, DSimon, Mass_Driver, simplicio
↑ comment by komponisto · 2010-07-01T02:01:41.370Z · LW(p) · GW(p)
They are examples of 'altruism' for the sake of a 'warm feeling' and signalling.
Look, scoffing at less-than-optimal philanthropy is ultimately just another form of counterproductive negativity. If you're really serious about efficacy, you should be adding to the list of causes, not subtracting from it. That is, instead of responding to a post like this by encouraging people to
resist the influence, suppress the... urge to take actions
(!)
how about answering with "hey, you know what would be really, really helpful?" and proceeding to list some awesome utility-maximizing charity.
Warm feelings are good. Someone who donates a few spare frequent-flyer miles to help Curt Knox and Edda Mellas visit their daughter imprisoned 6,000 miles away doesn't need to feel ashamed of themselves for not being "rational" -- except in the extremely unlikely event that that action actually prevented them from doing something better. Does anyone honestly, seriously believe that discouraging people from doing things like this is a way of making the world a better place?
Speaking of challenges for LW, I propose a new rule: anybody who comes across an ostensibly good cause, but scoffs at its suboptimality, or thinks "well, it's not that I'm not willing to sacrifice $10, but surely there are better uses of that money" should be immediately required, right then and there, to donate that $10 to the Singularity Institute -- no ifs, ands, or buts.
Replies from: wedrifid
↑ comment by wedrifid · 2010-07-01T02:40:21.281Z · LW(p) · GW(p)
That is, instead of responding to a post like this by encouraging people to ...
how about answering with "hey, you know what would be really, really helpful?" and proceeding to list some awesome utility-maximizing charity.
No, no, NO! I desire to correct a fundamental mistake that is counter to whatever good 'rationality' may happen to provide. Raising the sanity waterline is an important goal in itself and particularly applicable in rare communities that have some hope of directing their actions in a way that is actually effective. Not only that, but seeing the very concept of rationality abused to manipulate people into bad decision making is something that makes me feel bad inside. Yes, it is the opposite of a warm fuzzy.
Look, scoffing at less-than-optimal philanthropy is ultimately just another form of counterproductive negativity. If you're really serious about efficacy, you should be adding to the list of causes, not subtracting from it.
You are fundamentally wrong, and labeling things that disagree with you as 'negative' is a non-rational influence technique that works in most places but should be discouraged here. It is not counterproductive to not do things that are stupid. It is not intrinsically better to add things to a list of normatively demanded behaviors while never removing them. If the list is wrong (for a given value of wrong) then it should be fixed by adding to it or removing from it in whatever way necessary.
Warm feelings are good. Someone who donates a few spare frequent-flyer miles to help Curt Knox and Edda Mellas visit their daughter imprisoned 6,000 miles away doesn't need to feel ashamed of themselves for not being "rational" -- except in the extremely unlikely event that that action actually prevented them from doing something better. Does anyone honestly, seriously believe that discouraging people from doing things like this is a way of making the world a better place?
People being manipulated into actions by the inclusion of irrelevant things in the definition of 'rational' is what I am discouraging. Tell people that Knox is a good way to purchase warm fuzzies, that's fine. But don't dare try to call it a 'challenge for rationality', piggybacking on the human instinct to avoid the shame of not supporting the tribal value ('rational').
Speaking of challenges for LW, I propose a new rule: anybody who comes across an ostensibly good cause, but scoffs at its suboptimality, or thinks "well, it's not that I'm not willing to sacrifice $10, but surely there are better uses of that money" should be immediately required, right then and there, to donate that $10 to the Singularity Institute -- no ifs, ands, or buts.
No ifs and buts? Not everyone here needs to consider the SIAI to be the best use of their money. That's not required by 'rationality' either. You're in the wrong place if you think that approach is at all appropriate. Don't try to force your obsession with Knox on everyone else. It's not my priority and for most people it just isn't the rational way to maximise their preferences either.
Replies from: WrongBot, komponisto
↑ comment by WrongBot · 2010-07-01T03:09:56.220Z · LW(p) · GW(p)
While I agree with pretty much all of your points here, you may have better luck persuading those who do not if you take a less confrontational approach (I still fail at this occasionally, despite much effort). It's easier for me to accept a line of reasoning if that line of reasoning does not include the conclusion that I, personally, am evil. This would not be true if I were a perfect rationalist, but unfortunately it is not yet possible for me to escape my existence as a sack of neurons. And so it is with everyone.
The most persuasive arguments are the ones we want to believe. If you believe you are right (and you should), you should make it as easy as possible for people to agree with you.
Replies from: wedrifid
↑ comment by wedrifid · 2010-07-01T03:34:37.172Z · LW(p) · GW(p)
Optimal persuasion was not my priority; emphasizing the nature of the disagreement was. If deceptive use of 'rational' were not a violation of both terminal and instrumental values, then accusations of 'counterproductive negativity' or generally poor thinking, etc., could be taken to have some approximation of credibility - "it doesn't matter so you should shut up" vs "Direct attack on core values! Destroy!". I did originally explicitly explain things in these terms in the comment, including reference to ethical theory, but that was turning into a seriously long-winded tangent.
As for optimal persuasion, it would not involve a complete reply, a targeted paragraph responding to a cherry picked quote would be more effective. In fact, better still would be to make no reply at all, but create a new (much needed) post on the subject of ethics and rationality in order to direct all attention away from this one. Arguing with something gives it the credibility of 'something worth arguing with'.
↑ comment by komponisto · 2010-07-01T03:34:49.955Z · LW(p) · GW(p)
I perceive the tone of the parent comment as needlessly inflammatory (constituting a violation of niceness) and will therefore take some time out before replying to the substance (no concession on which is to be inferred from my temporary silence).
ETA: The above was written before the sentence calling me "evil" was removed. I continue to take exception to the part about an alleged "obsession with Knox" that I am attempting to "force" on anyone. I defy anyone to justify such a characterization; my charitable interpretation is that wedrifid has misunderstood something I said, and/or forgotten that my comment and the original post were written by two different people.
Replies from: wedrifid
↑ comment by wedrifid · 2010-07-01T04:33:23.083Z · LW(p) · GW(p)
It should be noted that I observe the tone of the parent of my rebuttal to be aggressive, with vigorous use of shaming to present a position that undermines a core value of this community. A vigorous response should be expected.
At WrongBot's suggestion I have removed the sentence containing the word 'evil'. Since almost nobody except myself uses that word in a technical sense it was foolish of me to include it here. I went through planning to edit out anything else that I wrote in haste that I would remove on reflection but I was surprised to find that was the only edit I needed to make. What remains has my reflective endorsement.
Replies from: komponisto, randallsquared
↑ comment by komponisto · 2010-07-01T05:29:03.315Z · LW(p) · GW(p)
It should be noted that I observe the tone of the parent of my rebuttal to be aggressive, with vigorous use of shaming to present a position that undermines a core value of this community
No, I cannot let you get away with that. The position I was presenting was that small good deeds should not be discouraged. If you are going to assert that that undermines a core value of this community (which one?), you are going to have to present a serious (and almost certainly novel) argument before you get to call me "evil".
Absolutely no "shaming" was used in presenting this position. The charge is an ironic one, because I am in fact attempting to defend myself and any other warm-fuzzy-enthusiasts who may happen to consider themselves members of this community from being "shamed" by those who would regard with contempt any activity not (e.g.) calculated to minimize the expected number of deaths.
Epistemic rationality (which, by the way, is what I presented the Knox case as a lesson in, in the first place) is, as you know, not an end in itself. At least, it isn't the ultimate end. There has to be something to protect. And, at least in my own case, part of what I protect is that part of myself that is capable of caring about specific, individual humans, apart from "humanity" as an aggregate.
For the sake of cutting to the chase, let me now present what I think this disagreement is really about, and you can correct me if necessary. I think what is going on here is that you perceive the kind of "caring" I described above as an obstacle to epistemic rationality, which should therefore be Destroyed. Is that right, or am I being unfair?
At WrongBot's suggestion I have removed the sentence containing the word 'evil'...What remains has my reflective endorsement.
See my ETA.
Replies from: Vladimir_Nesov, wedrifid
↑ comment by Vladimir_Nesov · 2010-07-01T07:55:56.936Z · LW(p) · GW(p)
The position I was presenting was that small good deeds should not be discouraged.
An inefficient small good deed is a negated greater good deed requiring the same effort. In this framing, the "small good deed" is actually a bad deed, and should be discouraged.
Replies from: pjeby, komponisto
↑ comment by pjeby · 2010-07-01T13:50:21.196Z · LW(p) · GW(p)
An inefficient small good deed is a negated greater good deed requiring the same effort.
False. Time isn't fungible, and humans demonstrably don't make decisions that way.
Among other things, when humans are faced with too many alternatives, we usually choose "none of the above"... which means that the moment you complicate the question by even considering what those "greater good deeds" might be, you dramatically reduce the probability that anything whatsoever will be accomplished.
Replies from: wedrifid
↑ comment by wedrifid · 2010-07-01T14:22:54.494Z · LW(p) · GW(p)
False. Time isn't fungible, and humans demonstrably don't make decisions that way.
False (at least, I reject the incorrect generalization you use to contradict Vladimir). People who do small goods are less inclined to do subsequent goods. Given that the instincts evaluate 'good' more or less independently of any achievement, fake 'good deeds' can prevent subsequent good deeds that make a difference. (This has been demonstrated.)
(Incidentally, Vladimir did not mention time at all.)
Replies from: pjeby
↑ comment by pjeby · 2010-07-01T18:35:02.496Z · LW(p) · GW(p)
People who do small goods are less inclined to do subsequent goods.
Oh really? What about the FITD (foot-in-the-door) effect?
Vladimir did not mention time at all.
He didn't have to. If time were unlimited, one could do any number of good deeds, and it would literally not matter how many of them you did, you could always do more... and thus there would be no competition between choices of how to use that time.
The assumption that not doing something now lets you do more later is false, however, because the time is already passing -- if you choose not to do something now, this doesn't give you any more time to do it later. Thus, a real thing done now beats an imaginary thing to be done later (which, given human psychology, probably won't actually be done).
Replies from: Nick_Tarleton, WrongBot, wedrifid, mattnewport
↑ comment by Nick_Tarleton · 2010-07-01T18:36:34.735Z · LW(p) · GW(p)
People who do small goods are less inclined to do subsequent goods.
Oh really? What about the FITD effect?
On the other hand, see Doing your good deed for the day (presumably what wedrifid was referring to). Figuring out which effect dominates under which circumstances seems like an important open problem. (My first, simple, guess would be that altruism is depleted in the short term and strengthened in the long term by use, like willpower or muscular strength.)
Replies from: wedrifid
↑ comment by wedrifid · 2010-07-02T02:44:41.249Z · LW(p) · GW(p)
Figuring out which effect dominates under which circumstances seems like an important open problem. (My first, simple, guess would be that altruism is depleted in the short term and strengthened in the long term by use, like willpower or muscular strength.)
That is my guess too.
↑ comment by WrongBot · 2010-07-01T19:28:16.910Z · LW(p) · GW(p)
Given evidence for both the FITD effect and the DITF (door-in-the-face) effect, I wonder if both are merely special cases of a broader effect that makes people more likely to accede to a request if they've received previous requests from the same source. The low-ball effect would also fit that theory.
Either way, I don't think those Wikipedia pages are very good evidence of anything at all, given that they cite work by only one researcher and do nothing but restate his conclusions with a positive slant. I suspect on those grounds that those pages are the work of a sock-puppet or someone caught in an affective death spiral; even if they're not, they're certainly not up to Wikipedia's usual (fairly decent) standard.
↑ comment by wedrifid · 2010-07-02T02:49:57.835Z · LW(p) · GW(p)
He didn't have to. If time were unlimited, one could do any number of good deeds, and it would literally not matter how many of them you did, you could always do more... and thus there would be no competition between choices of how to use that time.
You'll note that what he did mention was effort, an entirely different resource, particularly as it applies to humans.
The assumption that not doing something now lets you do more later is false, however, because the time is already passing
This isn't an assumption of Vladimir's, it is yours. What we do know is that spending $10 now is $10 that you can not spend later. More importantly, given what we know about how humans spend money, $10 you are spending right now on one (completely useless) charity is $10 you are unlikely to spend within this month on an altruistic act that is, in fact, useful.
↑ comment by mattnewport · 2010-07-01T18:42:02.813Z · LW(p) · GW(p)
I guessed wedrifid was referring to this story. There does seem to be some evidence for people feeling that a few virtuous acts give them license to behave badly.
Replies from: wedrifid
↑ comment by wedrifid · 2010-07-02T02:42:33.115Z · LW(p) · GW(p)
A good example, Matt; I hadn't come across that one specifically, but I do know that studies have reliably shown that people who have done one good act feel less obliged to do another one in the short term. This is exactly what we would expect based on signalling needs. I would be rather surprised if pj hadn't encountered such studies given his chosen occupation.
↑ comment by komponisto · 2010-07-01T08:14:30.404Z · LW(p) · GW(p)
That was apparently not the argument that wedrifid was making after all.
As for the argument itself: it says nothing more than that the good is bad because it isn't perfect. That is obviously wrong, because the good is better than nothing. It shouldn't be discouraged; rather, the better should be (separately) encouraged.
Again, see this post.
↑ comment by wedrifid · 2010-07-01T06:06:43.264Z · LW(p) · GW(p)
I think what is going on here is that you perceive the kind of "caring" I described above as an obstacle to epistemic rationality, which should therefore be Destroyed. Is that right, or am I being unfair?
That is not right. I disagree specifically with the claims which I quoted in my reply and my disagreement is limited to precisely that which is contained in said reply.
I approve, for example, of seeking warm fuzzies and this is entirely in line with my stated position.
Replies from: komponisto
↑ comment by komponisto · 2010-07-01T06:27:08.379Z · LW(p) · GW(p)
I approve, for example, of seeking warm fuzzies and this is entirely in line with my stated position.
Then what, exactly, do we disagree about?
(Your earlier comment is of no help in clarifying this; in fact you explicitly described the pursuit of warm fuzzies -- as would be exemplified by contributing to the causes listed in the post -- as "bad decision making".)
Replies from: wedrifid
↑ comment by wedrifid · 2010-07-01T06:34:43.226Z · LW(p) · GW(p)
in fact you explicitly described the pursuit of warm fuzzies -- as would be exemplified by contributing to the causes listed in the post -- as "bad decision making".
This is not the case. I explicitly describe the equivocation of 'rational' with any meaning apart from 'rational' (and the application of said equivocation when decision making) as 'bad decision making'.
Replies from: komponisto
↑ comment by komponisto · 2010-07-01T06:51:53.809Z · LW(p) · GW(p)
Okay, I think I see what happened. Your original point was really this:
This isn't a religious community, and 'rational' is not, or should not be, just the local jargon for 'anything asserted to be morally good'
-- with which I agree. However, the following statements distracted from that point and confused me:
The examples listed are not rational. They are examples of 'altruism' for the sake of a 'warm feeling' and signalling
I make a call for a different response. I encourage people to resist the influence, suppress the irrational urge to take actions that are neither optimal signals nor an optimal instrument for satisfying their altruistic values.
These made it sound like you were saying "No! Don't contribute to those causes! Doing so would be irrational, since they're not philanthropically optimal!" (I unfortunately have a high prior on that type of argument being made here.) My natural response, which I automatically fired off when I saw that your comment had 17 upvotes, is that there's nothing irrational about liking to do small good deeds (warm fuzzies) separately from saving the planet.
However, as I understand you now, you don't necessarily see anything wrong with those causes; it's just that you disapprove of the label "rationality" being used to describe their goodness -- rather than, say, just plain "goodness".
Is this right?
↑ comment by randallsquared · 2010-07-01T13:12:44.147Z · LW(p) · GW(p)
Since almost nobody except myself uses [evil] in a technical sense it was foolish of me to include it here.
What's a technical definition of "evil", then? I would say something about incompatible higher goals, but I'd find your take interesting.
Replies from: wedrifid
↑ comment by wedrifid · 2010-07-01T13:33:23.647Z · LW(p) · GW(p)
That's a decent take. But how do we account for people who are not most effectively modeled as agents with goals? Deontologists, for example, can be evil even if their (alleged) preferences entirely match mine.
Replies from: Nisan
↑ comment by Nisan · 2010-07-01T19:46:00.724Z · LW(p) · GW(p)
You're talking about people who, when acting in a way that they themselves morally endorse, do not pursue the exact same goals you yourself value? In that case, there are few people on Less Wrong who aren't evil_wedrifid, much fewer (by proportion) people in your culture who aren't evil_wedrifid, and hardly anyone at all in the world who isn't evil_wedrifid. I'm not commenting on your own values, wedrifid; practically any two people will disagree about something.
Even if everyone understood your technical usage of "evil", it wouldn't convey much information.
↑ comment by DSimon · 2010-06-30T13:42:49.777Z · LW(p) · GW(p)
If my preferences were such that I valued eating babies then it would be rational for me to eat babies. Rational is not nice, good, altruistic or self sacrificial. It just is.
Well, you're right that rationality is just a system for achieving a goal; it is the same process regardless of whether that goal is making the world a better place or turning it into a desert wasteland.
But, the OP is asking us to use rationality in a practical way and report back. That means we have to pick a goal, or there's nothing to point our rationality at. Making the world a better place for the people living in it (or to use a more utilitarian phrasing, reducing the net amount of potential and actual suffering in the world) seems like a pretty good one. It matches my own personal goals, at any rate.
Therefore: if you don't think the specific steps outlined in the OP are optimal for achieving that goal, please describe your alternative! I'm not being sarcastic; to use the chant, if the OP's steps are effective, I want to believe they're effective, and if they're not, I want to believe they're not.
But, please don't confuse that practical matter with the issue of choosing a goal; that argument is outside the bounds of rationality (except for the specific case of trying to justify one value as a sub-goal of another one).
Replies from: wedrifid
↑ comment by wedrifid · 2010-06-30T14:55:28.140Z · LW(p) · GW(p)
But, the OP is asking us to use rationality in a practical way and report back. That means we have to pick a goal, or there's nothing to point our rationality at. Making the world a better place for the people living in it (or to use a more utilitarian phrasing, reducing the net amount of potential and actual suffering in the world) seems like a pretty good one. It matches my own personal goals, at any rate.
Absolutely. What did Eliezer call this "must have a goal" principle way back when in the sequences? He explained it well, whatever it was.
My point is that none of the actions listed are an effective way of achieving anything. Neither of the two purposes of altruistic action is served (those being signalling, and actually changing the world to match altruistic preferences).
Replies from: Kaj_Sotala, DSimon, kpreid
↑ comment by Kaj_Sotala · 2010-06-30T15:23:52.012Z · LW(p) · GW(p)
What did Eliezer call this "must have a goal" principle way back when in the sequences? He explained it well, whatever it was.
Replies from: wedrifid
↑ comment by DSimon · 2010-06-30T21:12:35.520Z · LW(p) · GW(p)
My point is that none of the actions listed are an effective way of achieving anything. Neither of the two purposes of altruistic action is served (those being signalling, and actually changing the world to match altruistic preferences).
(For this response I'm going to focus on the goal of improving the world, not on signalling.)
One of the options was to give blood, which contributes directly to the reduction of suffering. I admit that I haven't personally looked into the effectiveness of the blood donation system, but as a basic medical technology it's quite sound, right? Why do you feel that donating blood is ineffective?
Two of the options were about donating to charities; one to a specific charity that seeks to defend a college student falsely accused of murder, and another a more general request to donate to any "reputable charity". I can understand that you might reasonably default to the null hypothesis on evaluating the effectiveness of any particular charity, particularly a minor one with little reputation like the Amanda Knox Defense charity... but it's a much stronger statement that reputable charities in general are not "an effective way of achieving anything"! Could you describe in more detail what leads you to that conclusion?
Finally, the remaining two options were about letter-writing or otherwise contacting people with political power in the hopes of influencing their actions. In terms of cost vs. benefit, this strikes me as being very hard to attack. Communication is cheap and easy, and public approval is a major factor in most political systems. By telling politicians explicitly what will earn your approval or disapproval, you're taking advantage of this system. I like the description here of this idea.
Do you disagree and feel that communicating with politicians is an ineffective way of influencing their decisions? If so, do you have a more effective alternative to propose?
↑ comment by Mass_Driver · 2010-07-01T05:50:15.747Z · LW(p) · GW(p)
Writing a letter, ringing a politician or giving blood are not actions that maximise your altruistic preferences!
Sure, but they beat the heck out of endless navel-gazing on an ethereal blog. Compared to reading your 3,000th LW comment, giving blood might be a strictly dominant strategy -- it beats "read another comment" in almost all of the possible worlds in which we might find ourselves.
How many years should we spend optimizing our decision trees before we begin to devote some fraction of our time and energy to action? Why?
Replies from: Jonathan_Graehl, wedrifid
↑ comment by Jonathan_Graehl · 2010-07-01T18:17:27.470Z · LW(p) · GW(p)
If all someone does is post here, that does sound sad.
I doubt that's actually the case. People just aren't posting about everything else they do.
↑ comment by simplicio · 2010-06-30T15:17:12.193Z · LW(p) · GW(p)
The examples listed are not rational. They are examples of 'altruism' for the sake of a 'warm feeling' and signalling. Writing a letter, ringing a politician or giving blood are not actions that maximise your altruistic preferences!
Maximize, no, but promote - yes. I concur with DSimon - if these are borderline useless, please suggest something better! (That was half the point of this post!)
Also, note that if I feel warm and fuzzy as a result of an action that promotes my goals, that is not a bad thing - on the contrary, you could make a pretty good argument that systematically ethical people are those who like doing ethical things.
I also (perhaps unfairly) assumed my audience would follow along easily enough in my slight equivocation between "ethical" and "rational."
Replies from: wedrifid
↑ comment by wedrifid · 2010-06-30T15:52:11.505Z · LW(p) · GW(p)
I also (perhaps unfairly) assumed my audience would follow along easily enough in my slight equivocation between "ethical" and "rational."
Not unfair, just more wrong. This is human bias. We identify with the in-group identity and associate all morality and even epistemic beliefs with it. It doesn't matter whether it is godly, spiritual, professional, scientific, enlightened, democratic or economic. We'll take the concept and associate it with whatever we happen to think is good or approved of by our peers. People will call things 'godly' even when they violate explicit instructions in their 'Word of God'. Because 'godly' really means 'what the tribe morality says right now'. People make the same error in thought when they use 'rational' to mean 'be nice' or even 'believe what I say'. This is ironic enough to be amusing if not for the prevalence of the error.
comment by Christian_Szegedy · 2010-06-30T19:06:05.298Z · LW(p) · GW(p)
Rationalists should win
I hate to see this clever statement taken out of context and reinterpreted as a moralizing slogan.
If you trace it back, this was originally a desideratum on rational thinking, not some general moral imperative.
It plainly says that if your supposedly "rational" strategy leads to a demonstrably inferior solution or gets beaten by some stupider looking agent, then the strategy must be reevaluated as you have no right to call it rational anymore.
Replies from: LucasSloan
↑ comment by LucasSloan · 2010-06-30T19:23:42.247Z · LW(p) · GW(p)
some general moral imperative.
I can't think of a more general moral imperative than "successfully do things you want to do."
Replies from: WrongBot
↑ comment by WrongBot · 2010-06-30T19:59:26.029Z · LW(p) · GW(p)
That doesn't sound like a moral imperative to me, though my definition may be in need of an update. To my way of thinking, a moral imperative involves a systematic way of ranking alternatives so as to satisfy one or more terminal values (maximizing happiness, obeying god, being virtuous, etc.).
Because it is impossible to do anything other than what you want to do, your moral imperative just reduces down to "do things successfully," which doesn't really discriminate among possible alternatives. (Unless it means "do the easiest thing possible, because you'll be most likely to succeed." But that doesn't seem to be what you were getting at.)
As I currently define moral imperatives, they're meta-wants, structures that tell you what you should want to want, if that makes sense.
Replies from: LucasSloan
↑ comment by LucasSloan · 2010-07-01T05:10:06.221Z · LW(p) · GW(p)
it is impossible to do anything other than what you want to do
It is not possible to do something other than what you actually do in a situation. It is possible for non-perfect agents (like, say, humans) to do something other than what they want.
Technically, what I said isn't a moral imperative, because it doesn't say anything about what the "want" is. It is, however, advice that (nearly) all minds want to follow.
Replies from: Nisan
↑ comment by Nisan · 2010-07-01T14:17:15.678Z · LW(p) · GW(p)
A meta-moral imperative, then. Whatever it is that you want to do, you should actually do it, and do it in a way that maximizes success. Or in WrongBot's scheme, whatever it is that you wished you wanted (intended) to do, you should actually want (intend) it.
comment by knb · 2010-06-30T05:30:17.489Z · LW(p) · GW(p)
Great post. I too am in strong disagreement with those who are skeptical of rationality's instrumental value. I find my life greatly enhanced by my (I believe) higher than average level of rationality.
These are some examples of ways rationality improved my life:
I don't go to religious services. Many people in my family attend services every week, and are bored to tears by it, but they feel guilty if they skip. My relative rationality leads to a major increase in happiness, since my Sunday mornings are totally free. (I had to attend church twice a week as a child and I hated it). I also have fewer ethical hangups generally speaking, leading to less experience of guilt and shame. I did experience these when I was younger and still woefully irrational, so it probably isn't just a matter of disposition.
I invest in an index fund (since index funds outperform most actively managed funds after fees) to take advantage of compound interest. Stunningly few people actually do this. I also bought in when the market was at a low (other non-professionals irrationally sell low and buy high).
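A rough sketch of how the fee gap compounds; the 7% gross return and both fee levels below are invented for illustration only, not figures from any particular fund:
```python
# Sketch: growth of a one-time $10,000 investment over 30 years,
# comparing a low-fee index fund with a higher-fee managed fund.
# All rates are assumptions for illustration.

def final_value(principal, gross_return, annual_fee, years):
    # Net return compounds once per year after fees.
    return principal * (1 + gross_return - annual_fee) ** years

index = final_value(10_000, 0.07, 0.001, 30)    # assumed 0.1% expense ratio
managed = final_value(10_000, 0.07, 0.015, 30)  # assumed 1.5% expense ratio
print(f"index: ${index:,.0f}, managed: ${managed:,.0f}")
# Under these assumptions the managed fund ends roughly a third lower,
# before any difference in gross performance.
```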
I address health issues like diet and exercise using scientific evidence, rather than expensive hokum like televised "fat burners".
There are many more examples, but I think that illustrates my point. I'm sure that most other people on LW behave similarly on these issues. But often, people don't credit their rationality (good software) with their success, but rather their hardware (high intelligence).
Replies from: Mass_Driver
↑ comment by Mass_Driver · 2010-07-01T05:54:38.560Z · LW(p) · GW(p)
I address health issues like diet and exercise using scientific evidence, rather than expensive hokum like televised "fat burners".
I voted you up, but I'm going to challenge you on this issue and see where this goes. What, precisely, did the "scientific evidence" you relied on consist of? Did you consult Wikipedia? A certified nutritionist? PubMed? Did you do any tests to see whether the source(s) you used are trustworthy?
comment by neq1 · 2010-06-30T07:29:57.083Z · LW(p) · GW(p)
How about spreading rationality?
This site, I suspect, mostly attracts high IQ analytical types who would have significantly higher levels of rationality than most people, even if they had never stumbled upon LessWrong.
It would be great if the community could come up with a plan (and implement it) to reach a wider audience. When I've sent LW/OB links to people who don't seem to think much about these topics, they often react with one of several criticisms: the post was too hard to read (written at too high of a level); the author was too arrogant (which I think women particularly dislike); or the topic was too obscure.
Some have tried to reach a wider audience. Richard Dawkins seems to want to spread the good word. Yet, I think sometimes he's too condescending. Bill Maher took on religion in his movie Religulous, but again, I think he turned a lot of people off with his approach.
A lot has been written here about why people think what they think and what prevents people from changing their minds. Why not use that knowledge to come up with a plan to reach a wider audience? I think the marginal payoff could be large.
Replies from: DSimon
↑ comment by DSimon · 2010-06-30T14:00:11.703Z · LW(p) · GW(p)
I think one possible strategy is to get people to start being rational about being in favor of things they already support (or being against things that they already disagree with). For example, if someone is anti-alt-med, but for political reasons rather than evidence-based reasons, get them to start listening to The Skeptic's Guide to the Universe or something similar.
Once they see that rationality can bolster things they already support, they may be more likely to see it as trustworthy, and a valid motivation to "update" when it later conflicts with some of their other beliefs.
comment by lsparrish · 2010-07-01T00:20:44.705Z · LW(p) · GW(p)
"Solutions looking for problems."
Less wrongers of a more self-interest based frame of mind might consider banding together in groups of 2-4 and founding startups.
Or they could be nonprofits, if you prefer.
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2010-07-01T00:47:00.919Z · LW(p) · GW(p)
That's not necessarily self-interested. If one did well with that, one would be able to have a lot more impact in helpful ways. However, there seem to be the not-at-all-tiny issues that 1) startups are unlikely to succeed and 2) at minimum one needs some sort of idea. It seems that the success of startups is closely related to whether or not they have an original, practical business model.
Replies from: lsparrish, mattnewport
↑ comment by lsparrish · 2010-07-01T01:22:22.083Z · LW(p) · GW(p)
Well, I'd say the notion of getting rich is self-interest-cognition-stimulating, but a productive business can certainly have a positive outcome for others, perhaps (due to scalability and feedback factors) even more than devoting time directly to charity. Generally, businesses provide more benefit (at least as subjectively perceived by the customer) than they receive. Kind of the point of a free market system.
Startups are indeed unlikely to succeed. But they don't cost (relatively) much to start, and generally fail within a few months. Compared to the gains if you are in one that succeeds, the risk is not all that high. Also, diligent Less Wrong students should be able to mitigate many of the risk factors better than average startup founders. ;)
Another huge benefit aside from entry into the getting-rich lottery is education, i.e. from the school of hard knocks. According to Paul Graham, places like Yahoo would prefer a person who has started and failed at a startup over someone who has simply graduated college. Failure brings experience. And failing at a startup is not a smear on your work history like getting fired from another kind of job would be.
The business model that works best, according to Eric Reis, Steve Blank, etc., is to develop a customer base from the very beginning. Start with something with basic functionality and start upgrading it as you get more user feedback. If nobody will buy it in its rudimentary form, you might not have a very good idea to begin with.
↑ comment by mattnewport · 2010-07-01T00:56:54.972Z · LW(p) · GW(p)
Paul Graham suggests in the linked article that the importance of a good idea to the success of a startup is overrated.
Actually, startup ideas are not million dollar ideas, and here's an experiment you can try to prove it: just try to sell one. Nothing evolves faster than markets. The fact that there's no market for startup ideas suggests there's no demand. Which means, in the narrow sense of the word, that startup ideas are worthless.
comment by michaelkeenan · 2010-06-30T14:13:28.030Z · LW(p) · GW(p)
If you're donating with the intent to save maximum lives per dollar, please refer to the research of GiveWell, an organization which tries to answer that question. Eliezer mentioned it in the context of rationalist causes here.
Replies from: simplicio
comment by Dagon · 2010-07-04T01:50:15.642Z · LW(p) · GW(p)
Upvoted because I like the topic; however, this is a very confused post. Most examples given are not rational means to any end I think I have.
I'd love to see a post explaining why you think any of these recommendations are rational.
Doing nothing (or rather, doing what I'm already doing) conserves the resources I control, allows me to make better decisions later, when I have more information, and has intrinsic value (pleasure/leisure) to me.
Replies from: simplicio
↑ comment by simplicio · 2010-07-05T05:57:17.823Z · LW(p) · GW(p)
I agree now that it was somewhat confused.
Most examples given are not rational means to any end I think I have.
Do you mean you don't share their ends, or think they're terrible means?
Replies from: Dagon
↑ comment by Dagon · 2010-07-06T20:11:00.897Z · LW(p) · GW(p)
I mean I can't tell from the article whether you're recommending a change in ends (and if so, based on what) or means (and if so, based on what). I don't see a path from your recommended actions to any ends that I know of and share.
Declaring the ends you're expecting me to support, and a path from your recommendation to those ends would let me give more specific critiques (or simply to join your campaign if you're right).
comment by WrongBot · 2010-06-30T17:36:33.371Z · LW(p) · GW(p)
Thanks for getting me up off my ass, simplicio. I just donated (a long over-due) $30 to SIAI and set up a monthly reminder to keep donating (if I have disposable income that month). And I'll be keeping an eye out here for other pro-rationality suggestions.
A small criticism:
Make an appointment to give blood.
The United States and the European Union currently have unscientific and discriminatory policies that forbid gay and bisexual men from donating. Giving blood is definitely an ethical Good Thing, but it may not be something we would want to push as specifically associated with rationality.
Replies from: SilasBarta, red75, Emile, magfrump, simplicio
↑ comment by SilasBarta · 2010-06-30T22:19:15.924Z · LW(p) · GW(p)
Pardon my ignorance, but how the hell is this even enforced? "Have you had sex with another man?" "Um ... no, no, of course not!" "Oh, okay then! Right this way! Make a fist..."
People who want to can walk right past such a ban, can't they? (IIRC, some pro-gay-rights group even considered doing that.)
So what, then? Do they watch for men holding hands in the line to give blood? For guys in leather and big 'staches?
Replies from: Blueberry
↑ comment by Blueberry · 2010-06-30T23:23:13.489Z · LW(p) · GW(p)
It's self-enforced. You fill out a questionnaire and are asked to be honest. Yes, of course people who want to can walk past the ban.
Replies from: SilasBarta
↑ comment by SilasBarta · 2010-06-30T23:37:50.316Z · LW(p) · GW(p)
Okay, but even then, are they supposed to ignore evidence, so long as the donor denies it on the form? Like, what if two flamboyantly dressed gay dudes get in line to donate while holding hands and making out, and then claim not to have had MSM sex? Does the technician have to say, "oh, well, guess they must be eligible"?
Replies from: Blueberry
↑ comment by Blueberry · 2010-07-01T00:12:36.198Z · LW(p) · GW(p)
I would be surprised if any organization had an official policy on this. It's not frequently seen at blood donation sites.
Replies from: SilasBarta
↑ comment by SilasBarta · 2010-07-01T21:15:29.697Z · LW(p) · GW(p)
Because it's so easy for people who've MSMed to evade the ban. Which makes me wonder why epidemiologists deem it so effective.
Replies from: simplicio, NancyLebovitz
↑ comment by simplicio · 2010-09-07T17:24:16.455Z · LW(p) · GW(p)
I asked about this at the blood clinic once. The answer I got was that the questionnaires are really secondary to blood testing, which is done on every donation.
The questionnaires simply
(a) weed out the people who are ineligible and honest about it (and there's no salient incentive here for people to lie*);
(b) provide extra "risk factor" information on top of the tests.
Note that being gay, or having lived in Africa, are risk factors, but are not considered dealbreakers. Probably both in combination would be a dealbreaker though.
(*The only real incentive to lie here is social pressure, e.g., your whole office is donating and you don't want your AIDS to be known about, which is controlled by allowing the donor to privately put a "No" barcode sticker on their sign-in form, which blindly disqualifies their blood without identifying them by name to any staff or patients present.)
Replies from: SilasBarta
↑ comment by SilasBarta · 2010-09-07T21:09:55.357Z · LW(p) · GW(p)
Thanks for the explanation!
↑ comment by NancyLebovitz · 2010-07-02T04:45:41.834Z · LW(p) · GW(p)
I can believe such a ban would be ineffective if it were trying to keep people from something they were trying to get.
However, this is about banning people who are trying to give something, and who are presumably concerned with actually helping.
It wouldn't surprise me if some men who've done MSM give blood anyway if they have good reason to think they aren't infected and especially if they have rare blood types.
However, at least some people are habitually rule-abiding.
This is all hypothetical, though. I don't know how reliably people obey the rules for blood donation, or how important all the rules are. The whole thing is on the honor system, and there are a lot of restrictions.
↑ comment by red75 · 2010-06-30T20:38:02.166Z · LW(p) · GW(p)
Did you evaluate the inequality P(D|N,G) <= P(D|N,O), where D = blood-transmitted disease, N = negative blood test, G = gay or bisexual donors, O = other donors? What are the figures?
Replies from: Douglas_Knight
↑ comment by Douglas_Knight · 2010-07-01T03:12:36.179Z · LW(p) · GW(p)
It seems pretty safe to guess that the inequality is reversed. But probabilities are not enough to answer the question. One must weigh expected costs and benefits.
A crude method, not of estimating any probabilities, but of directly assessing the final decision, is to look at the existence of countries with different systems. That they have different waiting periods shows that the decision is not blindingly obvious. Moreover, since governments tend to be more sensitive to the publicity of concentrated harm, I would expect them to err in the direction of exclusion. That I have not heard of a recent Brazilian getting HIV from a blood transfusion suggests that not only is their policy better, but that P(D|N) is unmeasurably low.
That just addresses the issue of HIV, which is what these policies that mention 1977 are about. One might worry that future diseases will appear first among gays. For this, it is much harder to determine the probabilities.
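For concreteness, here is a minimal sketch of the Bayes calculation behind red75's inequality; every prevalence and test figure below is invented purely for illustration, not real epidemiology:
```python
# P(D | N) via Bayes' rule: probability of disease given a negative test.
# All numbers are made up for illustration.

def p_disease_given_negative(prevalence, false_negative_rate):
    p_neg_given_d = false_negative_rate   # P(N | D): test misses the disease
    p_neg_given_not_d = 1.0               # simplifying assumption: no false positives
    p_neg = prevalence * p_neg_given_d + (1 - prevalence) * p_neg_given_not_d
    return prevalence * p_neg_given_d / p_neg

# Hypothetical prevalences in the two donor pools:
p_g = p_disease_given_negative(prevalence=0.05, false_negative_rate=0.01)
p_o = p_disease_given_negative(prevalence=0.001, false_negative_rate=0.01)
print(f"P(D|N,G) ~ {p_g:.1e}   P(D|N,O) ~ {p_o:.1e}")
```
With equal test sensitivity, the residual risk after a negative test still scales with the group's base rate (about 5e-4 vs 1e-5 under these made-up numbers), which is the sense in which the inequality would be reversed; whether either number is large enough to matter is the actual policy question.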
↑ comment by Emile · 2010-06-30T20:20:28.097Z · LW(p) · GW(p)
The United States and the European Union currently have unscientific and discriminatory policies that forbid gay and bisexual men from donating. Giving blood is definitely an ethical Good Thing, but it may not be something we would want to push as specifically associated with rationality.
What's "unscientific" about a policy of not accepting blood donations from gay and bisexual men? Do you have a precise meaning of "unscientific" in mind?
If someone did a double-blind randomized study comparing disease incidence in countries that did or did not accept blood from gay men, I'd very much like to hear about it.
The reasons given for the policy seem quite reasonable to me, and though of course I don't know the ins and outs of the medical risks associated with blood transmission and details on testing, I don't see why it shouldn't be associated with rationality (apart from wedrifid's comment on how these issues aren't really about rationality, with which I agree).
Replies from: WrongBot
↑ comment by WrongBot · 2010-06-30T20:53:26.494Z · LW(p) · GW(p)
Do you have a precise meaning of "unscientific" in mind?
I mean that they maintain practices, purportedly justified on scientific grounds, that are blatantly illogical.
If someone did a double-blind randomized study comparing disease incidence in countries that did or did not accept blood from gay men, I'd very much like to hear about it.
Laying aside that blinding and randomization aren't really necessary for statistical studies, I'd much rather see a study that compared the relative amounts of blood contaminated with sexually transmitted diseases across several countries with similar demographics and cultural trends, some of which refused to accept blood from gay men.
But we don't always get the evidence we want, sadly, and so we must make do with what we have, much as the U.S. Department of Health and Human Services must.
Or, we can ignore the evidence entirely and look at whether HHS is even being consistent, which is an easier question to answer definitively. The current policy is that any man who has had sex with another man since 1977 is banned from donating blood for life. It is also current policy that any woman who has had sex with such a man is banned from donating blood for the next year.
I will leave identifying this failure of rationality as an exercise for the reader.
Replies from: steven0461, Blueberry, Emile
↑ comment by steven0461 · 2010-06-30T21:44:31.499Z · LW(p) · GW(p)
If you refuse to participate in or associate with any activity that your government has illogical rules about, not much will be left.
Replies from: WrongBot
↑ comment by WrongBot · 2010-06-30T21:52:06.513Z · LW(p) · GW(p)
I'm not proposing a boycott of blood drives. Just that they may not be something a community of rationalists should endorse as rational.
Replies from: Nick_Tarleton
↑ comment by Nick_Tarleton · 2010-06-30T22:06:01.840Z · LW(p) · GW(p)
'Giving blood' and 'the way blood donation is managed' are very different things, and there's only weak reason AFAICS to expect their rationality-values (in the rather different senses of 'rational' that apply to individual actions and to institutional processes) to be correlated.
Replies from: Nisan
↑ comment by Nisan · 2010-07-01T14:39:21.290Z · LW(p) · GW(p)
Indeed. If a hypothetical blood boycott protesting these rules would do more harm, on balance, than alternative means of promoting public health policy reform, then giving blood is a good thing to do, and our community should endorse giving blood — even though we might gnash our teeth at the apparent endorsement of discrimination or irrationality. We can clear our collective conscience, if need be, by explicitly noting that we think giving blood is a good idea even though there are problems with the way it is collected.
Similarly, if you want to donate a little money to a school in a poor community, and the only existing school teaches silly religious stuff in addition to valuable skills, you should probably still want to donate to that school.
Replies from: Nick_Tarleton, WrongBot
↑ comment by Nick_Tarleton · 2010-07-01T18:02:55.413Z · LW(p) · GW(p)
See also: Your Price for Joining
↑ comment by WrongBot · 2010-07-01T17:36:40.079Z · LW(p) · GW(p)
Agreed. I would not propose a blood boycott, and I would likewise endorse giving blood, with no teeth-gnashing involved. I would even (reluctantly) endorse the current FDA standards if doing so could be expected to increase the amount of blood donated in a non-trivial way. What I would not do is endorse the current FDA standards as rational, especially in the context of a discussion about doing rational things.
If my objective is to promote rationality (and achieving ends I value ethically is also a consideration), I would want to instead endorse some activity or organization that is approximately as fuzzy but lacks current controversy over its willingness to adhere to scientific standards, noting that said controversy is still bad (given this particular objective) regardless of whether it is warranted. If I am concerned about the public perception and adoption of rationality, I should maximize for that value.
That the controversy centers around a standard that is both sub-optimal and needlessly discriminatory is merely gravy.
Replies from: Nick_Tarleton
↑ comment by Nick_Tarleton · 2010-07-01T18:02:31.631Z · LW(p) · GW(p)
So, just to check, you're concerned that endorsement of giving blood will inevitably blend over into, or be equivocated with, endorsement of the way blood donation works. Is that a fair description?
Replies from: Blueberry, WrongBot
↑ comment by Blueberry · 2010-07-01T18:07:38.514Z · LW(p) · GW(p)
That's nothing like what WrongBot said.
Replies from: WrongBot
↑ comment by WrongBot · 2010-07-01T18:13:30.901Z · LW(p) · GW(p)
It's not something I've specifically said, but I don't think it's an unreasonable inference from my stated position. It is also mostly true.
↑ comment by Blueberry · 2010-06-30T21:37:34.645Z · LW(p) · GW(p)
I agree that HHS's policy is nonsensical. However:
The current policy is that any man who has had sex with another man since 1977 is banned from donating blood for life. It is also current policy that any woman who has had sex with such a man is banned from donating blood for the next year.
Just having different waiting periods isn't inconsistent by itself. Adding another step in the epidemiological chain will decrease the likelihood of infection and thus may justify a shorter waiting period.
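A one-line illustration of the chain-step point, with invented numbers; each extra link multiplies in another probability less than one:
```python
# Figures invented for illustration only.
p_partner_infected = 0.05    # hypothetical P(the MSM partner is infected)
p_transmission = 0.1         # hypothetical P(transmission | infected partner)

print(p_partner_infected)                   # direct exposure:  0.05
print(p_partner_infected * p_transmission)  # one step removed: 0.005
```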
Replies from: WrongBot
↑ comment by WrongBot · 2010-06-30T21:50:31.063Z · LW(p) · GW(p)
Adding another step in the epidemiological chain will decrease the likelihood of infection and thus may justify a shorter waiting period.
True, but not quite applicable. In the case of an MSM, the epidemiological chain begins when he has sex with an MSM. In the case of a woman, the epidemiological chain begins when she has sex with an MSM.
The criteria for joining those excluded groups are identically rigorous, and yet the rules for each are quite different.
Replies from: Vladimir_M, HughRistik
↑ comment by Vladimir_M · 2010-07-01T06:10:04.148Z · LW(p) · GW(p)
WrongBot:
True, but not quite applicable. In the case of an MSM, the epidemiological chain begins when he has sex with an MSM. In the case of a woman, the epidemiological chain begins when she has sex with an MSM.
Yes, but not all sorts of sexual acts have the same probability of HIV transmission. Those typically practiced within heterosexual intercourse are far less likely (at least by an order of magnitude, perhaps even two) to result in transmission than those associated (at least stereotypically; not sure how often in actual practice) with sex between men.
(This is not meant to imply any overall judgment on the whole issue, merely to point out that different treatment of MSMs and women who have had sex with one does not by itself imply a logical inconsistency.)
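A quick sketch of why the per-act probabilities matter here; the figures are invented, and only their order-of-magnitude gap is the point:
```python
# Cumulative transmission risk over n independent exposures:
# risk = 1 - (1 - p)^n. Per-act probabilities are invented for illustration.

def cumulative_risk(p_per_act, n_acts):
    return 1 - (1 - p_per_act) ** n_acts

for p in (0.0001, 0.001):   # hypothetical per-act probabilities, 10x apart
    for n in (10, 100):
        print(f"p={p}, n={n}: cumulative risk = {cumulative_risk(p, n):.4f}")
```
For small p the cumulative risk is roughly n*p, so an order-of-magnitude gap in per-act probability carries through to roughly the same gap in total risk, which is why different rules for the two groups need not be logically inconsistent.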
Replies from: WrongBot
↑ comment by WrongBot · 2010-07-01T06:56:15.427Z · LW(p) · GW(p)
Yes, but not all sorts of sexual acts have the same probability of HIV transmission.
This statement is both true and the heart of the issue. It points out that talking about MSM in the first place is entirely unnecessary. If you look at the criteria for blood donation on the FDA's website, you'll see that three of the four listed prohibitions ban people who have engaged in certain activities like travelling or using potentially unsafe needles, whereas the fourth criterion bans a type of person: MSM.
Now, the only screening process for these factors relies on self-identification, as Silas Barta has incredulously highlighted. So, that self-identification screening process could include questions about specific risky sexual activities instead of sexual orientation, and those questions would have at least as much discriminating power as the current standard. And if there are any women or straight men who also engage in those same risky sexual activities (anal sex, unprotected sex, anonymous partners, or whatever else) - and there are - then such an activity-based screening procedure would be more effective than one that screens for MSM.
There's also another benefit to an activity-based policy: it would mean not discriminating against an already oppressed group. Discriminatory policies should be a last resort because of the possibility that they will have negative social effects (and it seems that this FDA policy reaffirms the common but incorrect belief that HIV is only a gay problem), not the first resort.
Rational processes don't create or support standards that are both needlessly discriminatory and inferior to obvious alternatives.
Replies from: Blueberry, Blueberry↑ comment by Blueberry · 2010-07-01T07:00:35.798Z · LW(p) · GW(p)
Now, the only screening process for these factors relies on self-identification, as Silas Barta has incredulously highlighted. So, that self-identification screening process could include questions about specific risky sexual activities instead of sexual orientation
And with a sane screening process, people might be more likely to answer the questions honestly, and thus the screening itself would be more accurate.
↑ comment by HughRistik · 2010-07-01T06:29:51.273Z · LW(p) · GW(p)
True, but not quite applicable. In the case of an MSM, the epidemiological chain begins when he has sex with an MSM. In the case of a woman, the epidemiological chain begins when she has sex with an MSM.
The criteria for joining those excluded groups are identically rigorous, and yet the rules for each are quite different.
There could be a difference if, out of the population of people who have been with an MSM at least once, men have on average been with more MSM than women. The populations of exclusive MSM, and of MSM who also sleep with women, may have different levels of risk.
↑ comment by Emile · 2010-06-30T21:23:05.650Z · LW(p) · GW(p)
Do you have a precise meaning of "unscientific" in mind?
I mean that they maintain practices, justified on scientific grounds, that are blatantly illogical.
I don't see what's unscientific, and I don't see what's illogical either. At best, they can be criticized for excessive caution, or for weighing the risks and benefits badly - but it's no slam dunk.
↑ comment by magfrump · 2010-06-30T19:42:17.814Z · LW(p) · GW(p)
This is an issue that I would very much like to work on; does anyone know of any particular venues for protesting these policies? Maybe directly e-mailing the Red Cross? The chance of changing these policies seems low and I wouldn't be able to determine the value, but it seems like it would be a big plus for rationality in society.
Replies from: Blueberry, WrongBot↑ comment by WrongBot · 2010-06-30T20:15:14.821Z · LW(p) · GW(p)
The Red Cross doesn't set that policy, and has actually suggested changing it. They seem to catch a lot of the blame for it, though, which I suppose isn't surprising considering that they handle pretty much all blood donations in the country.
As Blueberry pointed out below, it's a governmental policy problem.
Hah! There's an idea for a major rationalist project: attempting to increase the extent to which governmental regulators of scientific practice are themselves scientific.
↑ comment by simplicio · 2010-07-01T01:29:34.714Z · LW(p) · GW(p)
Thanks for getting me up off my ass, simplicio. I just donated a (long-overdue) $30 to SIAI and set up a monthly reminder...
Glad to hear it!
The United States and the European Union currently have unscientific and discriminatory policies that forbid gay and bisexual men from donating.
Interesting... Here they ask you (if you're male) if you were sexually active with other men in a certain timeframe, but I don't think even a "Yes" answer to that totally disqualifies you.
comment by reaver121 · 2010-06-30T09:16:15.808Z · LW(p) · GW(p)
I donated blood just yesterday. Unfortunately, I'm AB+, which means my blood is only suitable for other AB+ people. That's about 5% of the population, according to Wikipedia. On the plus side, I can receive blood from anyone :). I have to admit that I'm having trouble keeping a regular schedule of blood donation.
On the topic of diet, LessWrong helped me lose about 17 pounds through implementing some short-term motivation methods. Counted in terms of probable extra years of life, that's a huge win.
Replies from: simplicio↑ comment by simplicio · 2010-06-30T15:25:56.798Z · LW(p) · GW(p)
I donated blood just yesterday. Unfortunately, I'm AB+, which means my blood is only suitable for other AB+ people.
Right on! (I'm O neg). Re: the schedule, try scheduling your next donation every time you have a blood appointment, then just changing it to a more convenient time closer to the date.
On the topic of diet, LessWrong helped me lose about 17 pounds through implementing some short-term motivation methods. Counted in terms of probable extra years of life, that's a huge win.
I'll say! That's what I was hoping to hear... it's not all about using the skills you learned on LW to develop a theory of quantum gravity or whatever.
comment by John_Maxwell (John_Maxwell_IV) · 2010-07-02T06:49:49.123Z · LW(p) · GW(p)
Can't speak for others, but I find rationality EXTREMELY useful instrumentally. Level 1 rationality is when you manage to stop your emotions from interfering with your cognitive process, which is hugely useful for improved decision-making. Level 2 rationality is when you pick up the trick of influencing emotions with your rational mind. I can't make myself feel an arbitrary emotion via rational thought, but over several months (or maybe more) of practice I've gotten very good at eliminating what I consider incorrect emotions whenever my rational mind identifies them as being incorrect. (Hint: unpleasant emotions, like embarrassment or agitation over a trivial matter, are frequently incorrect.) I'll probably write a post or two about this at some point.
Replies from: pjeby, WrongBot↑ comment by pjeby · 2010-07-02T14:46:23.314Z · LW(p) · GW(p)
I can't make myself feel an arbitrary emotion via rational thought
How about via irrational thought? It's much easier that way. ;-)
More seriously, I make people feel arbitrary emotions all the time. Mostly, by using the magical question, "What would it be like if...?", and describing a hypothetical situation, constructed from information they've already given me.
This is almost identical, btw, to how actors are trained to display realistic emotion, in the sense that they are taught to recall "sense memories" -- the "what it was like" of a situation where they felt a particular emotion.
(So, the real magic isn't in the question itself; rather, it's in the sense memory -- or sense imagination -- that's triggered by the question.)
However, since it's not particularly convenient to have to keep asking it over and over, the point of me asking the question is usually to create a link between the summoned emotion and a different topic, situation, or action, so that it's then perceived differently in future.
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2010-07-03T02:56:13.698Z · LW(p) · GW(p)
I've tried your methods but I've never been able to make myself feel a particularly strong emotion about anything. This might be related to the fact that I rarely feel strong positive emotions (except related to sensory experiences like food, music, and roller coasters).
If I achieve something that would have sounded awesome just six months ago, I don't feel particularly great about it because as I was working on it my probability estimate of my achievement going through was gradually being revised upwards. So if I dream about having accomplished my goal, my brain correctly predicts that I won't be that thrilled.
Replies from: pjeby↑ comment by pjeby · 2010-07-03T21:06:35.476Z · LW(p) · GW(p)
I've never been able to make myself feel a particularly strong emotion
You can't make yourself feel anything, except in the same way that you can "make" an object fall. Gravity makes the object fall, you just set up the conditions for the fall to occur.
Actors don't make themselves feel emotions, they recall sensory memories that are linked to the emotion, and then allow the emotion to arise (i.e., refrain from interfering with it.)
And simply concentrating on the emotion or wondering if you're going to feel anything is sufficient to constitute interference. That's why, when I want someone to feel an emotion, I try to ask them a question that will absorb their attention in such a way that it's entirely focused on answering the question... and thus can't get in the way of the natural side-effect of emotions arising.
Then, I just have to keep them from interfering with the emotion once they notice they're having it. ;-)
I rarely feel strong positive emotions (except related to sensory experiences like food, music, and roller coasters).
And if you recall one of those experiences vividly enough, the emotion will begin to arise, and grow the longer and more vividly you recall it -- provided that you don't mix any other stuff into your thought process, like questioning or doubting or objecting, or sitting there thinking to yourself that you're not going to question or doubt or object... at however many levels of mental recursion. ;-)
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2010-07-04T01:35:49.869Z · LW(p) · GW(p)
I understand your interference point and your point about "trying"; however, I don't think either of those is my problem. I suppose if I concentrated on some food or music memory long enough I would make myself feel something like the food or music would make me feel, but that doesn't seem very useful. It seems like it'd be simpler to just say "listen to music you like when you feel down". I think I tried this a while ago and it worked alright but not great. (Watching funny videos works alright but not great, and not for lack of laughing.)
What works for me is scheduling work periods for doing the things I don't like doing and trying to resolve uncertainty related to whether the task I'm doing will actually help me out.
Replies from: pjeby↑ comment by pjeby · 2010-07-04T14:27:49.527Z · LW(p) · GW(p)
I suppose if I concentrated on some food or music memory long enough I would make myself feel something like the food or music would make me feel, but that doesn't seem very useful.
Nope. As I mentioned, the main usefulness of accessing a positive experience is to link it to something, not as a way of stimulating happiness.
It seems like it'd be simpler to just say "listen to music you like when you feel down".
If you feel down, then doing something in order to feel better isn't going to work nearly as well as addressing the down feeling in the first place.
Consider the Losada P/N ratio as an analogous example.
If a 3:1 positive-to-negative feedback comment ratio is the minimum required for good performance, and the current ratio is say, 1:2 (1 positive for every two negatives), and you can only change one side of the ratio, which one is easier? Saying 6 times as many positive things, or cutting out 5/6ths of the negative ones?
In general, eliminating negatives is less effortful than adding positives, unless you're already neutral or positive in the current ratio.
(Note: I'm not claiming the Losada ratio has been shown to apply to individual feelings or internal self-talk... but it would be kind of surprising if it turned out not to apply. There are studies of affective asynchrony, however, that suggest the ratio of positive to negative affect is important on an individual level, such that past a certain point in either direction, we cease to feel things on the opposite end of the spectrum.)
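To make the arithmetic above concrete, here is a minimal Python sketch; the 3:1 threshold and the 1:2 starting point are just the illustrative figures from the example, not empirical claims.

```python
# Worked arithmetic for the example above: starting at 1 positive
# comment for every 2 negatives, how much must each side change
# to reach the hypothesized 3:1 minimum?
target_ratio = 3.0                 # assumed minimum P/N ratio
positives, negatives = 1.0, 2.0    # current ratio of 1:2

# Option A: hold negatives fixed and add positives.
needed_positives = target_ratio * negatives   # 6.0
print(needed_positives / positives)           # 6.0 -- six times as many

# Option B: hold positives fixed and remove negatives.
allowed_negatives = positives / target_ratio  # 1/3
print(1 - allowed_negatives / negatives)      # ~0.83 -- cut 5/6ths of them
```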
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2010-07-04T18:35:21.063Z · LW(p) · GW(p)
Nope. As I mentioned, the main usefulness of accessing a positive experience is to link it to something, not as a way of stimulating happiness.
OK, how do I do that?
In general, eliminating negatives is less effortful than adding positives, unless you're already neutral or positive in the current ratio.
Naively, but in reality it depends on the difficulty of removal/addition for the marginal negative/positive.
Replies from: pjeby↑ comment by pjeby · 2010-07-04T21:22:46.788Z · LW(p) · GW(p)
Naively, but in reality it depends on the difficulty of removal/addition for the marginal negative/positive.
...which in turn depends a bit on the current ratio. It's easier to get happier when you're already neutral or somewhat happy.
OK, how do I do that?
Mostly, I have people consider a circumstance associated with an existing negative emotion, imagine what it would've been like if it turned out differently (in order to access the positive feeling), then imagine what it would be like if they had the positive feeling upon entering that situation, and what they would've done differently.
That's a very vague outline, but it is more or less a technique for getting rid of a learned helplessness in a given situation, when done correctly.
The trick is that "correctly" takes a while to learn, because learned helplessness has a tendency to obscure which aspect of a remembered situation is the leverage point for change, as well as what it is that your emotional brain wanted and gave up on in the first place. It's generally much easier for one person to see another person's blind spot than it is to see your own, though it gets easier with practice.
Sometimes, though, it's hard to notice that you are even experiencing learned helplessness in the first place, because its only manifestation is a set of options that are missing from your mental map... which means that unless you are looking carefully, you're unlikely to realize they're missing.
Or you can notice, when somebody else exercises those options, that some of your own options are missing. (Which is why I recommend people pay close attention to the mindsets and thought processes of people who are succeeding at something they aren't... it helps to identify where one's own brain has blind spots.)
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2010-07-05T02:53:52.102Z · LW(p) · GW(p)
I'll try out your technique a few times; sounds kind of interesting. I don't think I have significant problems with learned helplessness. My reaction to observing that I'm not doing very well at something is to ask "what should I be doing instead?"
Replies from: pjeby↑ comment by pjeby · 2010-07-05T15:05:07.353Z · LW(p) · GW(p)
I don't think I have significant problems with learned helplessness.
Note that, for the reasons I outlined, that isn't good Bayesian evidence that you actually don't. ;-)
If an area has been deleted from your map, you wouldn't be expected to notice unless something forced you to compare your map with the territory in that area.
That being said, some people seem vastly less prone to it than others, so it's certainly plausible that you might be one of those people. OTOH, those people don't have a lot of "ugh" fields either.
↑ comment by WrongBot · 2010-07-02T17:16:29.331Z · LW(p) · GW(p)
I've been pursuing a similar program for quite some time, and I'd love to see your take on it. I have a post on managing jealousy that I'm working on that'll address at least one part of the topic, but I would guess that we have approaches different enough that there's much mutual learning available.
And to second your general sentiment, I've found thinking rationally about my emotions and how to manipulate them to be not just useful, but literally essential for almost everything I want to accomplish. When I was younger I had extreme difficulties regulating my emotions, to a far greater extent than I believe is the case with most people, a problem that was likely due to my Asperger's Syndrome.
Constant analysis and experimentation over the years has now moved me to a point where I have exceptionally good control over my emotions, which has been nothing but positive.
TL;DR: I agree.
comment by DanielLC · 2010-07-05T02:57:45.702Z · LW(p) · GW(p)
At all is the enemy of highest expected value.
You shouldn't worry so much about finding the best charity that you don't end up donating to one, but that doesn't mean it isn't worth taking the risk that you won't end up donating.
Suppose you have two choices: donate to charity A, or try to find the best charity you can and donate to that. You don't have much willpower, and if you go for the second choice, there's a 50:50 chance you won't end up donating at all. But if you do find a better charity, it could easily be an order of magnitude better. Overall, you're five times better off looking for another charity.
You need to stop looking for a better charity eventually, of course; you can't keep searching forever. Just be careful when you stop.
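A minimal sketch of the expected-value arithmetic above; the 50:50 follow-through chance and the order-of-magnitude improvement are the illustrative assumptions from the comment, not data.

```python
# Expected value of "donate to charity A now" vs "keep searching",
# using the illustrative numbers from the comment (assumptions, not data).
value_of_charity_a = 1.0     # normalize charity A's value per dollar to 1
p_still_donate = 0.5         # chance you actually donate after searching
better_multiplier = 10.0     # "an order of magnitude better"

ev_donate_now = value_of_charity_a
ev_keep_searching = p_still_donate * better_multiplier * value_of_charity_a

print(ev_keep_searching / ev_donate_now)  # 5.0 -- "five times better off"
```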
Replies from: simplicio
comment by fiddlemath · 2010-07-02T06:24:22.803Z · LW(p) · GW(p)
Briefly: If you want to use instrumental rationality to guide your actions, you need first to make explicit what you value. If we as a community want to use instrumental rationality to guide collective action, we need first to make explicit the values that we share. I think this is doable but not easy.
I like what I think is the motivating idea behind the original post: we want examples of people using the instrumental rationality they've learned here. If nothing else, this gives us more feedback on what seems to work and what doesn't; what is easy to implement and what is difficult. Mostly, though, even a set of good heuristics for instrumental rationality seems like it ought to improve our lives.
Further, I understand the idea behind this challenge. It's much easier for a hundred people to take a hundred separate actions than it is for those hundred people to collaborate on something large: small, separate actions can fail independently, and they don't require organizational overhead - or one hundred coordinated commitments to that organization. A light structure makes a lot of sense.
But this post is problematic. Our notion of instrumental rationality doesn't really supply the values we want to optimize; it only helps to direct our actions to better optimize the values we make explicit. You might value kindness for its own sake, or ending the suffering of others, or improving the aesthetic experience of others, or other things. Human values differ, and making your own values explicit is difficult. Differing values should dictate varying actions. So, there's a problem where the original post assumes an unnamed set of "shared values of this community."
What's more, I suspect that most readers here haven't really tried to make their personal values that explicit. I could be totally wrong in this, of course, but I've spent a few spare moments today trying to list the various things I actually value, and was surprised at how widely they varied and how difficult it was to wade through my own psyche for those values.
Even still, I suspect that the first step in applying instrumental rationality, even in broad strokes, is to get at least a rough idea of what to optimize. I don't propose analyzing everything down to its finest details, but I'm pretty certain that a rough outline is a necessary starting point, just so the solutions you consider point in useful directions. (I'm setting aside some time to do this self-inventory soon.)
Similarly, any collective action by the LessWrong community should start by thinking out the values that the community actually shares. There are a few reasonable guesses - I expect we all value rationality, science, and knowledge; we probably all value decreasing global suffering and increasing global happiness. But even these weak assertions are broad strokes of only little use in deciding between actions.
In particular, these assertions of our group values, if true, do little to control expectation. An explicit value is a belief; it should control our expectation about which outcomes would be the most satisfying, in some coherent sense. [1] We might be able to find such explicit values fairly quickly, by judging the emotional reaction we have to hypothetical outcomes. (We do seem to have pretty decent emotional self-emulation hardware.)
And if it turns out that, as a group, we're all convinced that the best use of our time is to change some specific thing out in the world, and that we actually need our group's learned rationality to do that thing, then we should do that. Otherwise, save it until you know the problem you would seek to solve.
[1]: A coherent sense which I'm eliding here. I think this is a sensible assertion, but I've been writing this comment now for an hour. [2]
[2]: I'm starting to suspect that I should make a top-level post of this, and devote the appropriate time to it.
Replies from: cupholder, simplicio↑ comment by simplicio · 2010-07-05T05:51:01.597Z · LW(p) · GW(p)
Objections along these lines having been made by several users here, I take the point. I assumed greater homogeneity of values than was warranted, and didn't make explicit the links between those values and the specific actions I recommended.
As for the top-level post - I'd love to see it.
comment by orthonormal · 2010-07-01T17:50:29.925Z · LW(p) · GW(p)
A rational altruism story with a disappointing ending:
I recently moved across the country. I was originally planning to road-trip, but when I took my car to a mechanic in preparation for this, I discovered that it had a (possibly) cracked cylinder and could break down on such a trip.
I realized that this was a good opportunity for altruism, so with a week before my newly scheduled flight I tried to find a highly rated GiveWell charity to which I could donate my car, netting them the money they could obtain at auction without too much effort from me. I wouldn't have tried this if not for the idea of optimizing altruistic effectiveness.
Unfortunately, as it turned out, the only top-rated charity which had a headquarters nearby kept me waiting for a couple days, then told me they didn't take donations. At that point (2 days before I left), my default remaining option was essentially a warm fuzzy donation (since a friend had donated them a car before, I knew it was a quick and reliable process).
So Less Wrong did improve the expected value of my altruism, but not the actual value in this branch. (Had I done better with respect to akrasia, of course, I could have found something much better.)
P.S. If you're wondering why I didn't check into donating it to SIAI, there was an additional criterion I wanted to satisfy. The car was my grandmother's and was given to me by my parents; thus I felt obligated by my parents' preferences as to who received its value. They'd be on board for GiveWell-style optimization, but disappointed if I donated it to a 'weird' cause.
comment by Roko · 2010-06-30T17:10:12.014Z · LW(p) · GW(p)
Finding a way to make money is probably a good idea. As far as I know, LW is on average quite poor, mostly because young=poor, but also because nerdy types don't actually tend to be good at making $.
If you're a "rationalist", but not rich and not on your way to being rich, you're probably just deluding yourself.
Vote this comment down if your net worth is >= $100k, vote up otherwise. (thanks to mattnewport)
Replies from: Morendil, Mass_Driver, Morendil, mattnewport↑ comment by Morendil · 2010-06-30T17:30:52.778Z · LW(p) · GW(p)
Maybe a better example of applied rationality from LW would be something like reverse-engineering what it takes for a YouTube video or Internet meme to go viral, making one, and turning the proceeds over to a more rational enterprise.
Replies from: DSimon↑ comment by Mass_Driver · 2010-07-01T05:58:45.695Z · LW(p) · GW(p)
Roko, do you think you could lay out, in some detail, the argument for why rational people should busy themselves with getting rich? I'm familiar with some of the obvious arguments at a basic level (entrepreneurship is usually win-win, money can be used to help fund or attract attention for just about any other project or argument you care to have succeed, getting rich should be relatively easy in a world full of both arbitrage opportunities and irrational people), but still don't quite find them convincing.
Replies from: Roko, reaver121, Dre↑ comment by Roko · 2010-07-01T10:57:48.173Z · LW(p) · GW(p)
money can be used to help fund or attract attention for just about any other project or argument you care to have succeed
So, we have a lot of people here who claim to want to save the world or live forever or be a consequentialist altruist.
Money has high leverage on these problems. Furthermore, since there are many such people here, if we all just acted together and made an extra $100,000 each on average in the next 3 years, problems such as FAI could get a cash bonus of approximately 500*100,000 = $50M.
Replies from: HughRistik↑ comment by HughRistik · 2010-07-01T17:25:00.411Z · LW(p) · GW(p)
So, we have a lot of people here who claim to want to save the world or live forever or be a consequentialist altruist.
Money has high leverage on these problems.
Same with status. I made a similar argument in this exchange, in case you haven't already seen it.
↑ comment by reaver121 · 2010-07-01T08:25:58.861Z · LW(p) · GW(p)
I'm familiar with some of the obvious arguments at a basic level (entrepreneurship is usually win-win, money can be used to help fund or attract attention for just about any other project or argument you care to have succeed, getting rich should be relatively easy in a world full of both arbitrage opportunities and irrational people), but still don't quite find them convincing
I do find them convincing. Unfortunately, I don't find them motivating. Making a sustained effort to do something usually depends for me on:
- earning enough money to sustain my lifestyle (I could live on welfare if I really wanted to but I find that ethically wrong).
- finding it interesting
My lifestyle tends to be rather minimalistic, so even an average-to-low income is more than enough to sustain it. I also find it a lot easier to just forgo some comfort or gadget instead of working more to pay for it (such wants are for me usually fleeting anyway). Finding something interesting is for the most part out of my control, so I can't do much there.
I have to admit, however, that I'm in a rather comfortable position, so I don't have to really care about money all that much. I live in Europe, so medical care is cheap compared to the USA. I'm building my own house now, but I'm getting a lot of help from my family, both financially & practically. Still, the same reasoning holds.
This is why I found (when I was younger) communism a better idea than capitalism. I had to recognize, however, that most people don't think my way and that communism is unsustainable. One of the most surprising experiments I found in this regard is the one where someone can choose between:
- getting a 100 dollar raise while giving everyone else a 200 dollar raise
- getting a 50 dollar raise while giving everyone else no raise
Assume that prices of goods stay equal in both cases, i.e. the fact that everyone else gets a 200 dollar raise in the first option has no influence on the price of goods. When I first read this, to my great surprise, most people chose option 2, while I found option 1 the blindingly obvious correct choice.
Replies from: Roko, Jonathan_Graehl↑ comment by Roko · 2010-07-01T12:14:24.511Z · LW(p) · GW(p)
This epitomizes the problem with Less Wrongers' instrumental rationality:
I do find them convincing. Unfortunately, I don't find them motivating.
My lifestyle tends to be rather minimalistic, so even an average-to-low income is more than enough to sustain it. I also find it a lot easier to just forgo some comfort or gadget instead of working more to pay for it
We're a bunch of linux-hacking post-hippies who don't care enough about the socially accepted measures of influence ($ and status) to motivate ourselves to make a difference. Hence, we are sidelined by dumb cave-men who at least have enough fire in their bellies to win.
Replies from: whpearson, reaver121, Mass_Driver, Nick_Tarleton↑ comment by whpearson · 2010-07-01T13:25:34.502Z · LW(p) · GW(p)
We're a bunch of linux-hacking post-hippies who don't care enough about the socially accepted measures of influence ($ and status) to motivate ourselves to make a difference.
This suggests to me that someone with political skills should create a startup owned by SingInst, where hackers can donate their time to a project which makes shedloads of money. Lemons to lemonade, etc.
↑ comment by reaver121 · 2010-07-01T13:05:00.833Z · LW(p) · GW(p)
Your original post said :
If you're a "rationalist", but not rich and not on your way to being rich, you're probably just deluding yourself.
I assumed that by 'rich' you mean world-top-100-millionaire levels (correct me if I'm wrong here). You are right that I don't care enough about $ and status to reach those levels, but I wouldn't say I am poor enough to make absolutely no difference at all.
My minimalistic lifestyle coupled with an average income allows me to save a lot of money - probably more than my older colleagues, who likely earn more than me - and I'm sure I have more money in the bank than most of the people in my age bracket. (Had, anyway. I spent most of it to buy land for my house.)
In the future, I'm planning to donate a portion of my income but I'm waiting until my expenses have stabilized and don't expect any major financial costs.
EDIT : My apologies, just reread your top post and realized I overlooked
Vote this comment down if your net worth is >= $100k, vote up otherwise
I'm assuming that $100K is your limit for being rich and as I just have a bit more it makes my original comment invalid.
However, it does prove that not caring much about $ and status is not an insurmountable barrier to acquiring a lot of money. By reversing it (i.e. instead of working harder/smarter and earning more, just spend less) you can still get enough to make a difference (but indeed not at the levels Bill Gates did).
Replies from: Roko, Roko↑ comment by Roko · 2010-07-01T13:38:52.072Z · LW(p) · GW(p)
However, it does prove that not caring much about $ and status is not an insurmountable barrier to acquiring a lot of money. By reversing it (i.e. instead of working harder/smarter and earning more, just spend less) you can still get enough to make a difference (but indeed not at the levels Bill Gates did).
You can make a difference, yes. The relevant equation is:
Difference you make = (Earnings-Spend) x Altruism x 10^(charity rationality)
Since you are considering giving money to some highly efficient charity like SIAI, you will in the future make more of a difference than almost anyone else in the world, if you do so.
However, if you fix charity rationality, or you've already found the most efficient charity in the world, then it seems to me that increasing earnings is probably more effective than decreasing spend. You can earn $10^6/year (I know of a major SIAI donor who has achieved this), but if your current spend is $10k, you can't decrease spend by more than a few k.
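A quick sketch of why the earnings term dominates in this equation; the $10k spend and the earnings figures come from the comment above, while the altruism and charity-rationality values are arbitrary placeholders.

```python
# Difference made = (earnings - spend) * altruism * 10**(charity rationality).
# The altruism fraction and charity-rationality exponent below are
# arbitrary placeholders; only the earnings/spend figures are from the comment.
def difference_made(earnings, spend, altruism=0.5, charity_rationality=2):
    return (earnings - spend) * altruism * 10 ** charity_rationality

baseline = difference_made(earnings=50_000, spend=10_000)

# Cutting a $10k spend can free up at most a few thousand dollars...
frugal = difference_made(earnings=50_000, spend=5_000)

# ...while raising earnings has no comparable ceiling.
high_earner = difference_made(earnings=1_000_000, spend=10_000)

print(frugal / baseline)       # ~1.1x the baseline difference
print(high_earner / baseline)  # ~24.8x the baseline difference
```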
Replies from: reaver121↑ comment by reaver121 · 2010-07-01T14:04:24.231Z · LW(p) · GW(p)
You are correct, of course. I was merely reacting against your statement that I can make no difference at all.
Replies from: Roko↑ comment by Roko · 2010-07-01T14:11:23.089Z · LW(p) · GW(p)
Right, but the fact remains that most of the influence in this world is going to people with high values of "earnings" and little altruism or charity-rationality, and that if we could get away from this timid, minimalist mentality, we could really change that.
Replies from: wedrifid↑ comment by Roko · 2010-07-01T13:28:37.041Z · LW(p) · GW(p)
So you have $100k of worth tied up in land for the house, I presume?
I didn't mean to say that having at least a net worth of $100k was sufficient for being instrumentally rational, more that it is a necessary condition in most cases. If you're 50 years old, say, then it is far from sufficient. Earnings trajectories tend to be superlinear, so accumulated total earnings grows in a highly superlinear fashion on average, making it relatively hard to set a clear and simple financial boundary.
↑ comment by Mass_Driver · 2010-07-02T02:27:34.793Z · LW(p) · GW(p)
I think it's difficult to care about $ and status while also caring about rationality and altruism, don't you? It's one thing to say that "X is the optimal instrumental value for Y," and it's another to pursue X on a full-time basis while still being passionate enough about Y to trust yourself to trade X for Y when the time comes. I find that my "terminal values" realign alarmingly quickly when I start pursuing different goals -- 6 to 9 months is about as long as I can spend on a side-project before I start unconsciously thinking of the side-project as one of my actual goals. How about you?
Replies from: WrongBot, Roko, wedrifid↑ comment by WrongBot · 2010-07-02T02:42:36.790Z · LW(p) · GW(p)
I'm not sure that I agree with this point, but I think considering it is quite important.
On a somewhat related note, I've been contemplating a top-level post on whether paying attention to status is useful for becoming more rational, leaving aside any discussion of whether it is useful for winning; the two issues should be treated very differently, and in the discussion of status on LW that doesn't always seem to be the case.
Edited for clarity.
Replies from: wedrifid↑ comment by wedrifid · 2010-07-02T02:55:42.086Z · LW(p) · GW(p)
On a somewhat related note, I've been contemplating a top-level post on whether paying attention to status is useful for becoming more rational, leaving aside any discussion of whether it is useful for winning; the two issues should be treated very differently, and in the discussion of status on LW that doesn't always seem to be the case.
One of the reasons that I suggest it is useful is that it allows us to realize how status-related biases are changing the way we personally think. Roughly speaking, our cognitive biases are either "artifacts, weaknesses and limitations of the way the brain manages to process information" or "things we think that are sub-optimal epistemologically because they help us bullshit our way to more status".
Replies from: WrongBot↑ comment by WrongBot · 2010-07-02T03:13:12.716Z · LW(p) · GW(p)
Agreed, and I would emphasize that status-related biases can specifically hinder the pursuit of rationality itself. For example, asking people questions seems to often be interpreted as an attempt to lower their status, which seems kind of counterproductive, especially for a community like this one. Really, there are a whole range of common reactions related to the idea of "taking offense" that seem to hinder communication but affect status.
↑ comment by Roko · 2010-07-02T13:33:14.438Z · LW(p) · GW(p)
Agreed that this is important, but note that the real incompatibility is not between rationality and $/status, but between altruism and $/status.
So we should focus on the tradeoff between altruism and desire to win in society's little games.
Replies from: whpearson, mattnewport↑ comment by whpearson · 2010-07-02T16:20:57.065Z · LW(p) · GW(p)
Part of my problem with making money today is that most of the methods of making money are benefiting from status games that do not help society.
Pretend for a moment I am a cool-shades seller. I sell someone some cool shades. They are happy; they get more status, girls, etc. Everyone else wants some cool shades, so I sell them some. Now we are back to the status quo: everyone has some bits of plastic that are no better for keeping the sunshine off than some uncool shades, and I have some money. The dangerously hip sunglasses took energy and oil to produce that could have been used for producing something of lasting value or preserving some life. Also, I needed to advertise my sub-zero shades with images of women clinging to suave men, in order to compete with other makers of eyewear.
So not only am I exploiting the fact that the world is mad, I am exacerbating it as well. As most consumption is about status or other signalling, it is hard to get away from it when entering the world of business. Even if you aren't a customer-facing company, you will be supporting and enabling other companies that do play off the biases of individuals. Not to mention things like cigarettes.
Edit: Now, if we were perfect rationalists, we would swallow our distaste for creating more madness if we thought that we could do more good with the money from the sunglasses than the waste of resources and increased irrationality engendered by the advertisement.
Replies from: pjeby, NancyLebovitz, Roko, NancyLebovitz, Wei_Dai, wedrifid↑ comment by pjeby · 2010-07-03T00:35:23.233Z · LW(p) · GW(p)
Part of my problem with making money today is that most of the methods of making money are benefiting from status games that do not help society.
So sell information. These days, you don't even have to have it made into a physical product.
↑ comment by NancyLebovitz · 2010-07-03T12:23:51.878Z · LW(p) · GW(p)
I don't know if you have a named bias there, but I think seeing a situation that's pretty bad, and then not looking for good possibilities in the odd corners, counts as a bad mental habit.
I'll note that one of the biggest new fortunes is Google, and their core products aren't status related, even if many Google ads are. What's more, Google's improvement of search has made people generally more capable.
I don't think it makes sense for you to try to make the most possible money by trying to create The Next Big Thing. Maybe I'm too indulgent, but I don't think people are at their best trying to do what they hate, and I think it's easier to create things that serve motivations you can understand.
↑ comment by Roko · 2010-07-02T18:20:37.727Z · LW(p) · GW(p)
It seems that this comment is saying "I'd rather the world ended through unfriendly AI disaster than that I sold a positional good"
Yes?
In what sense is this not madness?
Replies from: steven0461↑ comment by steven0461 · 2010-07-02T19:40:04.568Z · LW(p) · GW(p)
Not all LW discussions should be taken as assuming status/money feeds back into UFAI prevention. Where it does, I obviously agree with you, but if the question of what's the right thing to do in the absence of existential risk considerations is something people find themselves thinking about anyway, they may as well get that question right by paying sufficient attention to positional vs nonpositional goods.
Replies from: Roko↑ comment by Roko · 2010-07-02T19:42:42.275Z · LW(p) · GW(p)
they may as well get that question right by paying sufficient attention to positional vs nonpositional goods.
True except that my intuition is that whpearson has somewhat got it wrong on that too, because he is trying to other-optimize non-nerds, who find keeping up with the latest fashion items highly enjoyable. They're like a hound that enjoys the thrill of the chase, separate from the meat at the end of it.
Replies from: whpearson↑ comment by whpearson · 2010-07-02T23:09:53.503Z · LW(p) · GW(p)
Smokers love the first cigarette of the day. People who buy lottery tickets love the feeling of potentially winning lots of money. Nerds love to ignore the world and burrow into safe, controllable minutiae. It doesn't mean that any of these things is good for them in the long term.
There is nothing intrinsically wrong with other-optimizing. Everyone in society other-optimizes each other all the time. People try and convince me to like football and to chase pretty girls - to conform to their expectations. You've been other-optimizing the non-status-oriented in this very thread!
I don't normally air my views, because they are dull and tedious. My friends can buy their fancy cars as much as they want, as long as they have sufficient money to not go into unsustainable debt, and I won't say a word.
But you asked why altruists might have a problem with making money, and I gave you a response. It might be irrational - I'm unsure at this point - but like akrasia it won't go away in a puff of logic. If it is irrational, it is due to the application of the golden rule as a computationally feasible heuristic for figuring out what people want. I don't want a world full of advertising that tries to make me feel inadequate, so I would not want to increase the amount of advertising in others' worlds.
But maybe "non-people like me" do want this. I don't know, I don't think so. The popularity of things like tivo and ad-block that allow you to skip or block ads. Or pay services without ads suggests that ads are not a positive force in everyone's world.
I also see people regret spending so much money on positional goods that they get into debt or bankruptcy. This I am pretty sure is bad ;) So I would not want to encourage it.
Replies from: wedrifid↑ comment by wedrifid · 2010-07-03T04:29:21.522Z · LW(p) · GW(p)
But maybe "non-people like me" do want this. I don't know, I don't think so. The popularity of things like tivo and ad-block that allow you to skip or block ads. Or pay services without ads suggests that ads are not a positive force in everyone's world.
I believe you are mistaken. Having ads out there is a significant factor driving the production of content on the internet. Without ads we would not have Google in its current form, and we wouldn't have Gmail at all! By personally avoiding ads we derive benefits for ourselves from their presence while not accepting the costs. Let people who aren't smart enough to download AdBlock maintain the global commons!
Replies from: Mass_Driver↑ comment by Mass_Driver · 2010-07-04T13:41:12.768Z · LW(p) · GW(p)
Without ads we would not have Google in its current form, and we wouldn't have Gmail at all!
Without advertising, there would be far less demand for television and similar media, because its effective price, as perceived by consumers, would be much higher. Query what people would have done with their extra 20 - 30 hours a week over the last 40 years or so if they hadn't spent all of it consuming mindless entertainment.
Also, without advertising, there would be far less demand for useless products, because their effective benefits, as perceived by consumers, would be much lower. A few corporations that completely failed to make useful products would have gone out of business, and most of the others would have learned to imitate the few corporations that already were making useful products. Query whether (a) having most companies make useful products and (b) having most workers be employed by companies that make useful products might be worth more than having Google.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2010-07-04T14:31:02.305Z · LW(p) · GW(p)
If there were no free television, people would still have a lot of low-intensity timekillers available - gossip, unambitious games, drinking.
If they had to pay for television, they might have become so accustomed to paying for content that they'd have subscribed to Google.
The more interesting question is how different people would need to be for advertising not to be worth doing. I think it would take people being much clearer about their motivations. I'm pretty sure that would have major implications, but I'm not sure what they'd be.
There are people who try to raise their kids to be advertising-proof, but I haven't heard anything about the long term effects.
Replies from: scotherns↑ comment by scotherns · 2010-07-06T13:55:16.426Z · LW(p) · GW(p)
There are people who try to raise their kids to be advertising-proof, but I haven't heard anything about the long term effects.
I make an effort to do this with my kids. It will be interesting to see how it affects things as they get older.
↑ comment by NancyLebovitz · 2010-07-02T16:35:18.921Z · LW(p) · GW(p)
Still, there are probably useful things to be made and done which have little or no fashion component.
For example, there don't seem to be any child- and pet-proof roach traps on the market.
Replies from: markan, whpearson↑ comment by markan · 2010-07-03T02:05:38.629Z · LW(p) · GW(p)
It seems to me that whpearson's reasoning is an instance of the "ends don't justify means" heuristic, which is especially reasonable in this case since the ends are fuzzy and the means are clear.
If we grant that businesses that feed into status games (and other counterproductive activities) are likely to be much more profitable than businesses more aligned with rationalist/altruistic/"nerd" values, then arguing that one should go into the latter kind of business undermines the argument that ends do justify means here. And if the ends don't justify going into business despite a lack of intrinsic motivation, what does?
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2010-07-03T09:22:26.229Z · LW(p) · GW(p)
How plausible is it to believe that the businesses which feed into counterproductive activities are likely to be much more profitable?
If we go with the very pretty Austrian theory that profits tend to be equal across all parts of the economy (unusually high profits draw capital in, unusually low profits drive it out), then the conspicuous profits in fashion-driven industry are counterbalanced by lower odds of making those profits.
I don't know how good the empirical evidence for the Austrian theory is.
↑ comment by whpearson · 2010-07-02T16:55:41.780Z · LW(p) · GW(p)
True. If you can keep independent you would be okay. If you have shareholders you would be bound to maximise shareholder value.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2010-07-02T17:22:12.159Z · LW(p) · GW(p)
If you build keeping independent into your plans, you're more likely to succeed at it.
I've become somewhat dubious about the whole system of maximizing shareholder value. Anecdotally, companies become worse places to work (including less focus on quality) when they go public.
And I don't believe maximizing shareholder value is a real human motivation (not compared to wanting to make good things or please people you know or be in charge of stuff), and I suspect that a system built on it leads to fraud.
Replies from: mattnewport, whpearson↑ comment by mattnewport · 2010-07-02T17:35:10.550Z · LW(p) · GW(p)
There's a fair amount of evidence that suggests that greater management ownership of a firm correlates with better performance. In other words maximizing shareholder value appears to work better as a motivation when the management are significant shareholders.
↑ comment by whpearson · 2010-07-02T17:44:43.610Z · LW(p) · GW(p)
I didn't mean that you would intrinsically want to maximise shareholder value. Simply that if you passed up business opportunities due to your ethics and you didn't have a controlling share you might be out of a job.
Replies from: mattnewport, NancyLebovitz↑ comment by mattnewport · 2010-07-02T18:15:24.657Z · LW(p) · GW(p)
This is a pretty inaccurate interpretation of what maximizing shareholder value actually means in practice. Generally corporate management are only considered to have breached their fiduciary duty to shareholders if they take actions that are clearly enriching themselves at the expense of shareholders, making an acquisition that is dilutive to shareholders for example.
It is highly unusual for corporate management to be accused of breaching their fiduciary duty by making business decisions that fail to maximize profit due to other considerations. For one thing this would generally be impossible to prove since management could argue (for example) that maintaining a reputation for ethical conduct is the best way to maximize shareholder value long term and this is not something that could easily be disproved in a court.
Activist shareholders may sometimes try and force management out due to disagreements over business strategy but this is a separate issue from any legal responsibility to maximize shareholder value. In the US this is also quite difficult (which is a situation that I think should be improved) and so is fairly rare.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2010-07-02T18:24:08.971Z · LW(p) · GW(p)
Thanks. I was pretty sure that management wasn't getting sued for failing to maximize shareholder value through ordinary business decisions - if that were possible it would be really common.
↑ comment by NancyLebovitz · 2010-07-02T18:07:45.155Z · LW(p) · GW(p)
Agreed. I was explaining why I'm dubious about publicly owned companies in general.
↑ comment by Wei Dai (Wei_Dai) · 2010-07-03T00:33:54.174Z · LW(p) · GW(p)
Society as a whole over-consumes positional goods and under-consumes non-positional goods. However, non-positional goods are not being consumed at a level of zero, so why not make money selling those? If you can make an improvement to an existing non-positional good, or come up with entirely new ones, that would also shift some consumption from positional goods to non-positional ones, which would further satisfy your altruistic values.
↑ comment by wedrifid · 2010-07-03T04:22:49.241Z · LW(p) · GW(p)
Part of my problem with making money today is that most of the methods of making money are benefiting from status games that do not help society.
Can we be sure of that? Reasoning from the observation that society seems to get rather a lot of benefits from something, and that this something doesn't seem to be pure charitable contribution: the endless battle to make money must be of some marginal value!
Money makes the world go round.
Replies from: whpearson↑ comment by whpearson · 2010-07-03T09:00:44.733Z · LW(p) · GW(p)
Reasoning from the observation that society seems to get rather a lot of benefits from something, and that this something doesn't seem to be pure charitable contribution: the endless battle to make money must be of some marginal value!
In that case, it must be the government that generates the benefit!
It is the manipulation of the status game that does not help society; the economic knock-on effects might.
Could we have generated more marginal value with the same resources? Belief in superintelligence would indicate yes. Would it have been possible to generate more marginal value while keeping human psychology as it currently is? That is hard to argue conclusively either way. We couldn't switch off the desire for money/fame entirely, but it might have been moderated and channelled more effectively (science and reason have been fashionable in the past).
Altruists worth the name who enter business would have to make sure they were part of the consumerist system that generated value, rather than the part that detracted from the value other people generated. That would constrain their money-making ability.
Replies from: wedrifid↑ comment by wedrifid · 2010-07-03T09:34:54.522Z · LW(p) · GW(p)
Could we have generated more marginal value with the same resources?
Absolutely. But let's be clear that, for all their faults, these status games account for nearly all the value that society creates for itself. In fact, since group selection does not apply in the case of humans, these games are the very foundation of civilization itself. I hate status games... but I refuse to let myself fall into that all-too-common 'nerd' failure mode: rationalizing, from bitterness about and contempt for status games that just don't seem to matter to us as much as to others, to the conclusion that they have no value.
Altruists worth the name who enter business would have to make sure they were part of the consumerist system that generated value, rather than the part that detracted from the value other people generated. That would constrain their money-making ability.
Only in the "No True Scotsman" sense of 'worth the name'. Altruists push fat men in front of Trolleys. Altruism is not nice. That is just the naive do-gooderism that we read in fantasy and Sci. Fi. stories.
Altruists need not artificially constrain themselves unless the detriment from making money is sufficiently bad that the earnings cannot be spent to generate a net benefit to their society (as they personally evaluate benefit). Ways to make money that fit that category are extremely hard to find. For example, earning activities up to and including theft and assassination can be used to easily give a net altruistic benefit. In fact, the marginal value of adding additional zero- or negative-sum players to the economy is usually fairly small. You just make the market in 'evil' slightly more efficient.
Replies from: whpearson↑ comment by whpearson · 2010-07-03T10:04:25.286Z · LW(p) · GW(p)
I won't just ignore all ethical unease.
Replies from: wedrifid↑ comment by wedrifid · 2010-07-03T10:51:12.900Z · LW(p) · GW(p)
You don't need to, and I know I myself don't. (My ethical unease is based on my personal ethics, mind you.) What I responded to was the general claim about "altruists worth the name". In the previous response I similarly responded to your rationalization, not the conclusion that you were rationalizing. I am comfortable disagreeing on matters of fact but don't usually see much use in responding to other people's personal preferences.
Replies from: whpearson↑ comment by whpearson · 2010-07-03T11:50:33.270Z · LW(p) · GW(p)
We are talking at cross purposes somewhat.
There are two points.
1) There is working at something you dislike for a greater good (as long as you are very, very sure that it will be a net positive; all the talk of the cognitive deficits of humans does not inspire confidence in my own decision-making abilities)
2) Improving society through being involved in the consumerist economy as it is a net positive force in itself.
My comment about "worth the name" was mainly about 2 and ignoring 1 for the moment, as I had already conceded it in my initial comment.
Replies from: wedrifid↑ comment by wedrifid · 2010-07-03T12:56:36.400Z · LW(p) · GW(p)
2) Improving society through being involved in the consumerist economy as it is a net positive force in itself.
My comment about "worth the name" was mainly about 2
Then on this we are naturally in agreement. No sane altruist (where sane includes 'able to adequately research the influence of important decisions') will do things that have a net detriment, and there are certainly going to be some activities that fit this category.
I would add another category that specifically refers to doing things that are directly bad so that it allows you to do other things that are good.
↑ comment by mattnewport · 2010-07-02T16:02:06.871Z · LW(p) · GW(p)
The only incompatibility I see is between rationality and (pure) altruism but I'm aware that's a minority position here.
↑ comment by Nick_Tarleton · 2010-07-01T18:14:20.827Z · LW(p) · GW(p)
(IAWYC)
Hence, we are sidelined by dumb cave-men who at least have enough fire in their bellies to win.
Of course, if you're going to talk like that it seems essential to then frame the situation as "we're going to beat those fools at their own game, muahaha!" rather than "achieving conventional success is something dumb cavemen do, eww" or "I was born a nerd, what hope do I have". I suspect that the default framing is closer to the bad sort, and am somewhat concerned that such talk makes things worse by default. (I'm also not sure what, if any, framing could defeat the seemingly-likely effect where considering successful non-nerds dumb makes one less likely to try to steal their powers, and simply makes it harder to get along with them.)
↑ comment by Jonathan_Graehl · 2010-07-01T18:14:05.810Z · LW(p) · GW(p)
If one only cared about material goods, then you'd be right.
Otherwise, it depends on how much you need to be relatively richer than others in order to attract the kind of social interaction you like. Think of the stereotypical well-dressed man hoping to land a winning bid on a pretty gold digger.
↑ comment by Dre · 2010-07-01T06:53:35.248Z · LW(p) · GW(p)
As I understand it, it is a comparative-advantage argument. More rational people are likely to have a comparative advantage in making money as compared to less rational people, so the utility-maximizing setup is for more rational people to make money and pay less rational people to do the day-to-day work of implementing the charitable organization. That's the basic form of the argument, at least.
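A toy illustration of that argument, with every number invented for the example: even if the more rational person is better at the object-level work too, earning and hiring can produce more output.

```python
# All numbers invented for illustration. The rational person is better
# at both earning (8x the hired worker's $25/hr wage) and object-level
# work (2x the hired worker's output), but the earning edge is larger.
rational_earnings_per_hour = 200.0   # dollars
rational_output_per_hour = 2.0       # units of charitable work
hired_wage_per_hour = 25.0           # dollars
hired_output_per_hour = 1.0          # units of charitable work

# Option A: the rational person does the object-level work directly.
direct_output = rational_output_per_hour                           # 2.0 units

# Option B: earn for the hour, then hire workers with the proceeds.
workers_funded = rational_earnings_per_hour / hired_wage_per_hour  # 8 hours
funded_output = workers_funded * hired_output_per_hour             # 8.0 units

print(direct_output, funded_output)  # 2.0 vs 8.0 -- earning wins here
```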
Replies from: Nick_Tarleton↑ comment by Nick_Tarleton · 2010-07-01T18:20:24.282Z · LW(p) · GW(p)
It definitely seems the other way around to me: very high rationality may help a lot in making money, but it's not a necessary condition, while it does appear to be necessary for most actually effective object-level work (at the current margin; rationalist organizations will presumably become better able to use all sorts of people over time).
↑ comment by Morendil · 2010-07-01T12:53:38.653Z · LW(p) · GW(p)
Micro-failure of rationality: we will be able to conclude very little from final vote totals on the parent comment, lacking a way to get up- and down-vote totals separately.
Replies from: Roko, red75↑ comment by Roko · 2010-07-01T13:04:29.422Z · LW(p) · GW(p)
I think that the problem is that people aren't voting the comment at all, so it's a moot point. The comment has stayed on 1 as far as I can see.
It could be improved by a LW micro-poll feature.
Replies from: ciphergoth, pjeby↑ comment by Paul Crowley (ciphergoth) · 2010-07-01T13:44:47.777Z · LW(p) · GW(p)
If someone's looking for a genuinely high-utility thing to do in the spirit of this post, hacking on the LW codebase might be a candidate :-)
↑ comment by pjeby · 2010-07-01T13:39:09.243Z · LW(p) · GW(p)
The comment has stayed on 1 as far as I can see.
I downvoted it from 2.
Replies from: Jonathan_Graehl↑ comment by Jonathan_Graehl · 2010-07-01T18:10:00.397Z · LW(p) · GW(p)
Likewise.
The way this has been addressed in the past is by posting two vote counters, e.g. "vote up if yes" and "vote up if no" (with another comment for karma balance if people care). It is pretty clumsy.
↑ comment by mattnewport · 2010-06-30T18:07:50.344Z · LW(p) · GW(p)
Vote this comment down if you currently own assets worth >= $100k
"If your net worth is >= $100k" is probably a better way to phrase this, to avoid confusion over the 'ownership' of a house bought with a mortgage, a car bought on credit, etc.