LW was started to help altruists
post by RHollerith (rhollerith_dot_com) · 2011-02-19T21:13:00.020Z · LW · GW · Legacy · 36 comments
The following excerpt from a recent post, Recursively Self-Improving Human Intelligence, suggests to me that it is time for a reminder of the reason LW was started.
"[C]an anyone think of specific ways in which we can improve ourselves via iterative cycles? Is there a limit to how far we can currently improve our abilities by improving our abilities to improve our abilities? Or are these not the right questions; the concept a mere semantic illusion[?]"
These are not the right questions -- not because the concept is a semantic illusion, but rather because the questions are a little too selfish. I hope the author of the above words does not mind my saying that. It is the hope of the people who started this site (and my hope) that the readers of LW will eventually turn from the desire to improve their selves to the desire to improve the world. How the world (i.e., human civilization) can recursively self-improve has been extensively discussed on LW.
Eliezer started devoting a significant portion of his time and energy to non-selfish pursuits when he was still a teenager, and in the 12 years since then, he has definitely spent more of his time and energy improving the world than improving his self (where "self" is defined to include his income, status, access to important people and other elements of his situation). About 3 years ago, when she was 28 or 29, Anna Salamon started spending most of her waking hours trying to improve the world. Both will almost certainly devote the majority of the rest of their lives to altruistic goals.
Self-improvement cannot be ignored or neglected even by pure altruists, because the vast majority of people are not rational enough to cooperate with an Eliezer or an Anna without just slowing them down, and the vast majority are not rational enough to avoid catastrophic mistakes were they to try, without supervision, to wield the most potent methods for improving the world. In other words, self-improvement cannot be ignored because now that we have modern science and technology, it takes more rationality than most people have just to be able to tell good from evil, where "good" is defined as the actions that actually improve the world.
One of the main reasons Eliezer started LW is to increase the rationality of altruists and of people who will become altruists. In other words, of people committed to improving the world. (The other main reason is recruitment for Eliezer's altruistic FAI project and altruistic organization). If the only people whose rationality they could hope to increase through LW were completely selfish, Eliezer and Anna would probably have put a lot less time and energy into posting rationality clues on LW and a lot more into other altruistic plans.
Most altruists who are sufficiently strategic about their altruism come to believe that improving the effectiveness of other altruists is an extremely potent way to improve the world. Anna, for example, spends vastly more of her time and energy improving the rationality of other altruists than she spends improving her own rationality because that is the allotment of her resources that maximizes her altruistic goal of improving the world. Even the staff of the Singularity Institute who do not have Anna's teaching and helping skills and who consequently specialize in math, science and computers spend a significant fraction of their resources trying to improve the rationality of other altruists.
In summary, although no one (that I know of) is opposed to self-improvement's being the focus of most of the posts on LW and no one is opposed to non-altruists' using the site for self-improvement, this site was founded in the hope of increasing the rationality of altruists.
36 comments
Comments sorted by top scores.
comment by Vladimir_Nesov · 2011-02-19T21:57:24.520Z · LW(p) · GW(p)
For a better take on the purpose of Less Wrong, see Rationality: Common Interest of Many Causes.
(This post reads as moderately crazy to me.)
Replies from: AnnaSalamon, rhollerith_dot_com, Davorak
↑ comment by AnnaSalamon · 2011-02-20T03:28:21.979Z · LW(p) · GW(p)
I agree with Nesov.
To give more detailed impressions:
I suspect LW would be worse -- and would be worse at improving the world -- if altruism, especially altruism around a particular set of causes that seemed strange to most newcomers, were obligatory. It would make many potential contributors uncomfortable, and would heighten the feeling that newcomers had walked into a cult.
Improving LW appears to be an effective way to reduce existential risk. Many LW-ers have donated to existential-risk-reducing organizations (SIAI, FHI, nuclear risk organizations), come to work at SIAI, or done useful research; this is likely to continue. As long as this is the case, serious aspiring rationalists who contribute to LW are reducing existential risk, whether or not they consider themselves altruists.
Whether your goal is to help the world or to improve your own well-being, do improve your own rationality. (“You”, here, is any reader of this comment.) Do this especially if you are already among LW’s best rationalists. Often ego makes people keener on teaching rationality, and removing the specks from other people’s eyes, than on learning it themselves; I know I often find teaching more comfortable than exploring my own rationality gaps. But at least if your aim is existential risk reduction, there seem to be increasing marginal returns to increasing rationality/sanity; better to help yourself or another skilled rationalist gain a rationality step, than to help two beginners gain a much lower step.
If we are to reduce existential risk, we’ll need to actually care about the future of the world, so that we take useful actions even when this requires repeatedly shifting course, admitting that we’re wrong, etc. We’ll need to figure out how to produce real concern about humanity’s future, while avoiding feelings of obligation, resentment, “I’m sacrificing more than you are, and am therefore better than you” politics, etc. How to achieve this is an open question, but my guess is that it involves avoiding “shoulds”.
↑ comment by Dorikka · 2011-02-20T03:50:37.037Z · LW(p) · GW(p)
But at least if your aim is existential risk reduction, there seem to be increasing marginal returns to increasing rationality/sanity; better to help yourself or another skilled rationalist gain a rationality step, than to help two beginners gain a much lower step.
Could you elaborate on this? I understand how having only a bit of rationality won't help you much, because you'll have so many holes in your thinking that you're unlikely to complete a complex action without taking a badly wrong step somewhere, but I mentally envisioned that leveling out at a point (for no good reason).
Replies from: AnnaSalamon
↑ comment by AnnaSalamon · 2011-02-20T04:16:33.174Z · LW(p) · GW(p)
It might level out at some point -- just not at the point of any flesh-and-blood human I've spent much time with. For example, among the best rationalists I know (and SIAI has brought me into contact with many serious aspiring rationalists), I would expect literally everyone to be able to at least double their current effectiveness if they had (in addition to their current rationality subskills) the rationality/sanity/self-awareness strengths of the other best rationalists I know.
Replies from: katydee
↑ comment by katydee · 2011-02-26T18:42:17.326Z · LW(p) · GW(p)
Have you spent much time with non-flesh-and-blood humans?
Joking aside, what does the last part of your post mean? Are you saying that if the current rationalists you know were able to combine the strengths of others as well as their own strengths they would be substantially more effective?
↑ comment by RHollerith (rhollerith_dot_com) · 2011-02-20T02:08:23.259Z · LW(p) · GW(p)
Well, maybe I did not drink enough sanity juice today :)
comment by Dorikka · 2011-02-19T21:25:41.846Z · LW(p) · GW(p)
Uh, okay, that's great. So what? I don't see that this post says anything that's actually useful, it doesn't really seem like a coherent argument for anything, and I get the vague 'hidden argument' feeling that I get from seeing Dark Arts in action.
Replies from: rhollerith_dot_com, rhollerith_dot_com, rhollerith_dot_com
↑ comment by RHollerith (rhollerith_dot_com) · 2011-02-20T02:07:51.512Z · LW(p) · GW(p)
I get the vague 'hidden argument' feeling that I get from seeing Dark Arts in action.
That leads me to believe I have been insufficiently transparent about my motivations for writing, so let me try to rectify that insufficiency:
This site (and OB before it and the SL4 mailing list) has always been a congenial place for altruists, and I wanted to preserve that quality.
Now that I have seen the comments, it occurs to me that my post was probably too heavy-handed in how it went about that goal, but I still think that an effective way to achieve that goal is to write a post that altruists will like and non-altruists will find pointless or even slightly off-putting. If too high a fraction of the recent posts on this site have nothing interesting to say to altruists, that is a problem, because most readers will not read a lot of the content written in previous years -- that is the way the web is. (The post I was replying to starts by wondering whether LW should have more posts about recursive human self-improvement, and recursive human self-improvement short of whole brain emulation or similar long-range plans is not a potent enough means to altruistic ends to be interesting to altruists.) But my post was too heavy-handed in that it was a reply to a post not of interest to altruists, rather than being a post that stands on its own and is of interest to altruists and not to non-altruists.
Another motivation I had was to persuade people to become more altruistic, but now that I see it written out like that, it occurs to me that probably the only way to do that effectively on LW is to set a positive personal example. Also, it occurs to me that my post did engage in some exhortation or even cheer-leading, and exhortation and cheer-leading are probably ineffective.
Replies from: Dorikka
↑ comment by Dorikka · 2011-02-20T04:12:40.579Z · LW(p) · GW(p)
I'm not sure that I can answer effectively without stating how altruistic I consider myself to be. I feel that I am a semi-altruist -- I assign higher utility to the welfare and happiness of people I am personally attached to and of myself, but, by default, I assign positive utility to the welfare and happiness of any sentient being.
I found your post offputting because it looked like a covert argument for altruism of the following form:
- Eliezer and co. are altruists.
- Eliezer and co. have written material (especially) to help altruists, and you have benefited.
- Thus, you should be an altruist.
To me, this resembles an argument from authority and possibly an attempt at trying to make people feel that they should pay for goods already received.
This, this, and this are appealing to me as emotional arguments for altruism, just in case this helps you get a better idea of where I am coming from.
Edit: Click on the "Help" tag on the bottom left of the comments box to see how to italicize words in comments.
Replies from: rhollerith_dot_com
↑ comment by RHollerith (rhollerith_dot_com) · 2011-02-20T05:51:50.590Z · LW(p) · GW(p)
Eliezer and co have written material (especially) to help altruists, and you have benefited. Thus, you should be an altruist. . . . trying to make people feel that they should pay for goods already received.
Agree, though to be an effective use of the Dark Arts, I should have followed up with what the marketing folks call a "call to action".
Thanks for your comments.
↑ comment by RHollerith (rhollerith_dot_com) · 2011-02-20T03:09:09.361Z · LW(p) · GW(p)
I get the vague 'hidden argument' feeling that I get from seeing Dark Arts in action.
A theme of my post, paraphrased: people whose motives are unobjectionable and widely admired in our society have given the readers a gift motivated by X, and many readers and posters are using that gift for motive Y. Pointing that out -- yeah, that is a use of the Dark Arts.
Is that part of your objection to my post?
Replies from: Dorikka
↑ comment by RHollerith (rhollerith_dot_com) · 2011-02-19T21:28:30.988Z · LW(p) · GW(p)
I don't see that this post says anything that's actually useful
Is it not useful to know the reason why the persons who have final authority over this site started this site?
Replies from: Dorikka
↑ comment by Dorikka · 2011-02-19T22:30:26.651Z · LW(p) · GW(p)
I agree with Nesov that his link is a much better source of the information -- it has the bonus of actually being written by one of the people whose actions you are paraphrasing.
I didn't get the impression that the main point of this post was informational; rather, it seemed aimed at converting people to altruism (ineffectively so, IMO). I think that I got this message primarily through seeing the term "altruism" and its variants in italics repeatedly.
comment by wedrifid · 2011-02-20T18:06:55.939Z · LW(p) · GW(p)
I endorse the quote this post objects to:
"[C]an anyone think of specific ways in which we can improve ourselves via iterative cycles? Is there a limit to how far we can currently improve our abilities by improving our abilities to improve our abilities?"
The objection to this -- "because the questions are a little too selfish" -- is outright scary to read. Chiefly because looking at ways to become more personally effective is not at all opposed to producing shared social value.
The only thing I like about this post is that it is currently downvoted to -4. I fairly consistently reject all claims of LW 'cultishness'. This post is essentially an admonishment to behave more like a cult of personality. Seeing it summarily rejected comes as a huge relief!
comment by nazgulnarsil · 2011-02-20T01:34:40.364Z · LW(p) · GW(p)
A few LWers shutting up and getting rich, regardless of whether they are altruists or egoists, will improve the world. I am basically an egoist, and I plan to contribute non-trivial amounts of my post-college income to SENS for purely selfish reasons.
Replies from: rhollerith_dot_com
↑ comment by RHollerith (rhollerith_dot_com) · 2011-02-20T03:00:08.017Z · LW(p) · GW(p)
It is good for me to remain aware of that fact.
comment by NancyLebovitz · 2011-02-20T00:11:21.462Z · LW(p) · GW(p)
This reminds me of Steve Pavlina's idea of lightworkers and darkworkers.
He says (if I understand him correctly) that people do better if they set out to either serve themselves or serve other people, rather than (as most do) have an unthought-out mixture of motivations, and that lightworkers will figure out that they need to take care of themselves and darkworkers will figure out that cooperation serves their ends.
Replies from: rhollerith_dot_com
↑ comment by RHollerith (rhollerith_dot_com) · 2011-02-20T01:42:26.018Z · LW(p) · GW(p)
Hunh. A mentor of mine, John David Garcia, said that an altruist should probably ensure the security of his self and his family by, e.g., getting a few million dollars of net family worth and making sure all the kids have gotten a good education (3 of John's 4 children have MDs) before trying to improve the world, which is similar to the point that Pavlina seems to be making.
Replies from: NancyLebovitz
↑ comment by NancyLebovitz · 2011-02-20T01:50:45.139Z · LW(p) · GW(p)
It's an interesting claim, but there's at least one failure mode -- the altruist spends their life aiming for financial security, but never achieves it. In that case, they would have been more helpful by giving more modestly throughout their career.
Replies from: None, rhollerith_dot_com
↑ comment by [deleted] · 2011-02-22T16:51:55.858Z · LW(p) · GW(p)
This is at least not obviously true to me. The situation you describe seems like a failure mode in hindsight. If the altruist succeeds in earning lots of money and then donates far more than they could have otherwise, then it would seem in hindsight like the only correct strategy.
In other words, unless you show that a decision always makes you worse off, you have to make some sort of argument about probabilities in order to support your case.
A relevant point I've seen someone make is that if your money is better off invested than given to charity now, the charity you donate to could invest it and get that benefit anyway. So the real question to ask is whether you can invest your money more efficiently than the charity can. It may be overly optimistic of me to think so, but I believe an education for your children, in particular, is such an investment. I don't know about the "few million dollars of net family worth" business (in particular, I've never seen having that much money as a goal I personally have in life) but it's possible there are arguments in that direction.
Edit: I'll leave the end of that sentence there for the record, but I feel bad about it. It's possible there are arguments in that direction; it's possible there are arguments in the other direction; I don't know and I shouldn't take sides just because it seems to parallel my other arguments.
↑ comment by RHollerith (rhollerith_dot_com) · 2011-02-20T02:59:37.377Z · LW(p) · GW(p)
Yes, and the strategy can be fixed up to partially avoid that failure mode. E.g., leave your assets to an effective charity when you die, and go into altruist mode ten years before you expect to become unable to do productive work.
comment by steven0461 · 2011-02-19T23:19:17.407Z · LW(p) · GW(p)
If altruism is true then much of the effort of egoists is wasted and vice versa, but for some reason we never discuss the question.
Replies from: Vladimir_Nesov, Dorikka, Perplexed, timtyler
↑ comment by Vladimir_Nesov · 2011-02-19T23:23:29.466Z · LW(p) · GW(p)
We do discuss the question sometimes (e.g. limitations of utilitarianism, personal identity, and time discounting are all relevant to figuring out the truth of egoistic values). But I agree that the egoism/altruism question having an actual answer -- one which methods of rationality should help to find -- is one of the major blank spots of this post.
Replies from: rhollerith_dot_com
↑ comment by RHollerith (rhollerith_dot_com) · 2011-02-20T01:39:15.759Z · LW(p) · GW(p)
I always assumed that altruists are aiming for a different region in the space of all possible configurations of the future light cone than egoists are, but maybe I need to examine that assumption.
Replies from: Dorikka, Vladimir_Nesov
↑ comment by Dorikka · 2011-02-20T05:42:04.035Z · LW(p) · GW(p)
I think that these converge in at least some cases. For example, an egoist wanting to live a really long time might work on reducing existential risks because it's the most effective way of ensuring that he wakes up from being frozen. An altruist might work on reducing existential risks because he wants to save (other) people, not just himself.
↑ comment by Vladimir_Nesov · 2011-02-20T02:20:08.842Z · LW(p) · GW(p)
I always assumed that altruists are aiming for a different region in the space of all possible configurations of the future light cone than egoists are
They do act on conflicting explicit reasons (to some small extent), but I don't expect they should.
↑ comment by Dorikka · 2011-02-20T01:35:23.295Z · LW(p) · GW(p)
I'm not sure what you mean here by 'true.' One or the other may be more optimal in terms of maximizing a given utility function, but I doubt that the same answer optimizes all utility functions (so we should consider such on a case-by-case basis).
Replies from: steven0461
↑ comment by steven0461 · 2011-02-20T23:07:48.223Z · LW(p) · GW(p)
Yes, I should have been more careful and allowed for the possibility of fundamentally divergent values, but I'd be surprised if a substantial part of all the considerations that go into the question weren't shared across people.
↑ comment by Perplexed · 2011-02-20T00:12:01.389Z · LW(p) · GW(p)
Let's discuss it, then. I disagree with your claim.
According to the folk who promote egoism, much of the effort of egoists is devoted to finding out what everyone else wants, and then satisfying those desires (for a price). Admittedly, that is partly self-serving mythology, but it is probably closer to the truth than the myths that altruists spin about themselves.
Similarly, egoists are likely to approve of people seeking warm-and-fuzzies as much as they approve of people seeking sexual gratification. Egoists have no problem with the values pursued by altruists - they only get annoyed when altruists band together to push their values onto others.
comment by Perplexed · 2011-02-19T21:46:16.876Z · LW(p) · GW(p)
One of the main reasons Eliezer started LW is to increase the rationality of altruists and of people who will become altruists. In other words, of people committed to improving the world. ... If the only people whose rationality they could hope to increase through LW were completely selfish, Eliezer and Anna would probably have put a lot less time and energy into posting rationality clues on LW and a lot more into other altruistic plans.
There is an assumption embedded here which I think it is worthwhile to make explicit. This is the assumption that increasing the rationality of an altruist improves the world more than would an equivalent increase in the rationality of an egoist. (The improvement in the world is taken to be as judged by an infallible altruist.)
I am far from convinced that this assumption is valid, though I suppose that the argument could be made that an irrational altruist poses a greater danger to the world's well-being than does an irrational egoist. Though, if you focus on the relatively powerless, it is conceivable that irrationality in egoists does more harm than irrationality in altruists.
Rational altruists and rational egoists, of course, mostly adhere to social contracts and hence behave very much alike.
comment by Normal_Anomaly · 2011-02-20T18:54:41.392Z · LW(p) · GW(p)
I think there may be some confusion in the original post about the relationship between "self-improvement", i.e. improving one's intelligence, rationality, and motivation, and "bettering oneself", i.e. getting more wealth and status. The former is what curiousepic's recursive improvement post was about, and RHollerith responded as though it was about the latter. RHollerith: is this an acceptable interpretation, or an unwarranted attempt at mind-reading?
comment by komponisto · 2011-02-20T05:31:52.768Z · LW(p) · GW(p)
I agree with Nesov, Salamon, and Dorikka and suspect furthermore that the distinction between "altruists" and "egoists" that is drawn in the post is a confusion which doesn't survive careful scrutiny.
It seems to me that the people you call "altruists" are simply people who are attempting to have a particularly large impact on the long-term future of the world in a particularly direct way. But not everyone needs to work so directly on the most global scales, all the time; there is room (and likely need) for a division of labor, with some working more locally. For some (probably very many) people, improving themselves is among the most effective contributions to the future of humanity they can make.