How to Save the World
post by Louie · 2010-12-01T17:17:48.713Z · LW · GW · Legacy · 135 comments
Most of us want to make the world a better place. But what should we do if we want to generate the most positive impact possible? It’s definitely not an easy problem. Lots of smart, talented people with the best of intentions have tried to end war, eliminate poverty, cure disease, stop hunger, prevent animal suffering, and save the environment. As you may have noticed, we’re still working on all of those. So the track record of people trying to permanently solve the world's biggest problems isn’t that spectacular. This isn’t just a “look to your left, look to your right, one of you won’t be here next year”-kind of thing, this is more like “behold the trail of dead and dying who line the path before you, and despair”. So how can you make your attempt to save the world turn out significantly better than the generations of others who've tried this already?
It turns out there actually are a number of things we can do to substantially increase our odds of doing the most good. Here's a brief summary of some of the most crucial considerations to take into account when soberly approaching the task of doing the most good possible (aka "saving the world").
1. Patch your moral intuition (with math!) - Human moral intuition is really useful. But it tends to fail us at precisely the wrong times -- like when a problem gets too big [“millions of people dying? *yawn*”] or when it involves uncertainty [“you can only save 60% of them? call me when you can save everyone!”]. Unfortunately, these happen to be the defining characteristics of the world’s most difficult problems. Think about it. If your standard moral intuition were enough to confront the world’s biggest challenges, they wouldn’t be the world’s biggest challenges anymore... they’d be “those problems we solved already because they were natural for us to understand”. If you’re trying to do things that have never been done before, use all the tools available to you. That means working around your emotional numbness by using math to feel what your moral intuition can’t. You can also do better by acquainting yourself with some of the more common human biases. It turns out your brain isn't always right. Yes, even your brain. So knowing the ways in which it systematically gets things wrong is a good way to avoid making the most obvious errors when setting out to help save the world.
2. Identify a cause with lots of leverage - It’s noble to try to save the world, but it’s ineffective and unrealistic to try to do it all on your own. So let’s start out by joining forces with an established organization that’s already working on what you care about. Seriously, unless you’re already ridiculously rich + brilliant or ludicrously influential, going solo or further fragmenting the philanthropic world by creating US-Charity#1,238,202 is almost certainly a mistake. Now that we’re all working together here, let's keep in mind that only a few charitable organizations are truly great investments -- and the vast majority just aren’t. So maximize your leverage by investing your time and money into supporting the best non-profits with the largest expected pay-offs.
3. Don’t confuse what “feels good” with what actually helps the most - Wanna know something that feels good? I fund micro-loans on Kiva. It’s a ridiculously cheap way to feel good about helping people. It totally plays into this romantic story I have in my mind about helping business owners help themselves. And there are lots of shiny pictures of people I can identify with. But does loaning $25 to someone on the other side of the planet really make the biggest impact possible? Definitely not. So I fund a few Kiva loans a month because it fulfills a deep-seated psychological need of mine -- a need that doesn’t go away by ignoring it or pretending it doesn’t exist. But once that’s out of the way, I devote the vast majority of my time and resources to contributing to other non-profits with staggeringly higher pay-offs.
4. Don’t be a “cause snob” - This one's tough. The more you begin to care about a cause, the more difficult it becomes not to be self-righteous about it. The problem doesn’t go away just because you really do have a great cause... it only gets worse. Resist the temptation to kick dirt in the faces of others who are doing something different. There are always other ways to help no matter what philanthropic cause you're involved with. And everyone starts out somewhere. 15 years ago, I was optimizing for anarchy. Things change. And even if they don't, people deserve your respect regardless of whether they want to help save the world or not. We're entitled to nothing and no one. Our fortunes will rise and fall based on our abilities, including the ability to be nice -- not the intrinsic goodness of our causes.
5. Be more effective - You know how sometimes you get stuck in motivational holes, end up sick all the time, and have trouble getting things done? That’s gonna happen to everyone, every now and then. But if it’s an everyday kind of thing for you, check out some helpful resources that can get you unstuck. This is incredibly important because the steps up until now only depended on what you believed and what your priorities were. But your beliefs and priorities won’t even get you through the day, much less help you save the world. You're gonna need to formulate goals and be able to act on them. Becoming more capable, more organized, more well-connected, and more motivated is an essential part of saving the world. Your goals aren’t going to just accomplish themselves the first time you “try”. If you want to succeed, you’ll likely have to fail a bunch first, and then try harder.
6. Spread awareness - This is a necessary meta-strategy no matter what you’re trying to accomplish. Remember, deep down, most people really do want to find a way to help others or save the world. They just might not be looking for it all the time. So tell people what you’re up to and if they want to know more, tell them that too. You shouldn’t expect everyone to join you, but you should at least give people a chance to surprise you. And there are other less obvious things you can do, like join networking groups for your cause or link to the website of your favorite cause a lot from your blog and other sites where they might not be mentioned quite so much. That way, they can consistently turn up higher in Google searches. Or post this article on Facebook. Some of your friends will be happy you shared it with them. Just saying.
7. Give money - Spreading awareness can only accomplish so much. Money is still the ultimate meta-tool for accomplishing everything. There are millions of excuses not to give, but at the end of the day, this is the highest-leverage way for you to contribute to that already high-leverage cause that you identified. And don’t feel like you’re alone in finding it difficult to give. Most people find it incredibly difficult to give money -- even to a cause they deeply support. But even if it’s a heroically difficult task, we should still aspire to achieve it... we’re trying to save the world here, remember? If this were easy, someone else (besides Petrov) would have done it already.
8. Give now (rather than later) - I’ve seen fascinating arguments that it might be possible to do more good by investing your money in the stock market for a long time and then giving all the proceeds to charity later. It’s an interesting strategy but it has a number of limitations. To name just two: 1) Not contributing to charity each year prevents you from taking advantage of the best tax planning strategy available to you. That tax-break is free money. You should take free money. Not taking the free money is implicitly agreeing that your government knows how to spend your money better than you do. Do you think your government’s judgment and preferences are superior to yours? And 2) Non-profit organizations can have endowments and those endowments can invest in securities just like individuals. So if long-term investment in the stock market were really a superior strategy, the charity you’re intending to give your money to could do the exact same thing. They could tuck all your annual contributions away in a big fat, tax-free fund to earn market returns until they were ready to unleash a massive bundle of money just like you would have. If they aren’t doing this already, it’s probably because the problem they’re trying to solve is compounding faster than the stock market compounds interest. Diseases spread, poverty is passed down, existential risk increases. At the very least, don’t try to out-think the non-profit you support without talking to them - they probably wish you were donating now, not just later.
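The give-now-vs-later trade-off is easy to sanity-check with a toy model. The sketch below assumes, purely for illustration, a 7% annual market return and a problem whose costs compound at 10% per year -- both figures are placeholder assumptions, not claims about any real market or cause:

```python
def future_impact(donation, years, market_rate, problem_rate):
    """Compare two giving strategies in rough 'impact units' at a horizon.

    invest_then_give: grow the money at market_rate, then donate the lump sum.
    give_now: donate immediately, assuming each unit of harm prevented now
    compounds at problem_rate (diseases spread, poverty is passed down).
    """
    invest_then_give = donation * (1 + market_rate) ** years
    give_now = donation * (1 + problem_rate) ** years
    return invest_then_give, give_now

# With these assumed rates, giving $1,000 now beats investing it for 20 years:
later, now = future_impact(1000, 20, market_rate=0.07, problem_rate=0.10)
# later is roughly 3870 impact units; now is roughly 6727
```

The model just makes the post's point explicit: investing-then-giving only wins if the market compounds faster than the problem does.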
9. Optimize your income - Do you know how much you should be earning? Information on salaries in your industry / job market could help you negotiate a pay raise. And if you’re still in school, why not spend 2 hours to compare the salaries of the different careers you’re interested in? Careers can last decades. Degrees take 4-6 years to complete. Make sure you really want the kind of salaries you’ll be getting and that you know what it will be like to work in your chosen industry. Even if you’re a few years into a degree program, changing course now is still better than regretting not having explored other options later. Saving the world is hard enough. Don’t make it harder on yourself by earning below-market wages or choosing the wrong career to begin with.
10. Optimize your outlays - Cost of living can vary drastically across different tax districts, real estate markets, commuting methods, and other daily spending habits. It’s unlikely you ended up with an optimal configuration. For starters, if you don’t currently track your spending, I highly recommend you at least try out something light-weight like Mint.com so you can figure out where all your money is going. Remember, you don’t have to scrimp and sacrifice your quality of life to save money -- a lot of things can be less expensive just by planning ahead a little and avoiding those unnecessary “gotcha” fees. No matter what you want to do to improve the world, having more money to do it makes things easier.
11. Look into matching donations - If you’re gonna give money to charity anyway, you should see if you can get your employer to match your gift. I've done this before and know others who have too. Thousands of employers will match donations to qualified non-profits. When you get free money -- you should take it.
12. Have fun! - Don’t get so wrapped up trying to save the world that you sacrifice your own humanity. Having a rich, fulfilling personal life is a wellspring of passion that will only boost your ability to contribute -- not distract you. Trust me: you won’t be sucked into the veil of Maya and forget about your vow to save the world. So have a beer. Call up your best friend. Watch a movie that has absolutely no world-saving side-benefits whatsoever! You should do whatever it is that connects to that essential joy of being human, and you should do it as often as you need, without apologies. Enough people sacrifice their lives without even realizing it -- don’t sacrifice your own on purpose.
135 comments
Comments sorted by top scores.
comment by PeerInfinity · 2010-12-01T18:47:29.604Z · LW(p) · GW(p)
This is an awesome post! Thanks, Louie :)
some obvious suggestions:
- Make friends with other world-savers.
- Spend less time with your current friends, if it's obvious that they are causing you to be significantly less effective at world-saving, and the situation isn't likely to improve any time soon. But don't break contact with any of your current friends entirely, just because they aren't world-savers.
- Find other world-savers who can significantly benefit from skills or other resources that you have, and offer to help them for free.
- Find other people who are willing to help you for free, with things that you especially need help with.
- Look for opportunities to share resources with other world-savers. Share a house, share an apartment, share a car... There's lots of synergy among the people living at the SIAI house.
- Join the x-risks career network
- If you know of an important cause that currently doesn't have a group dedicated to that cause, consider starting a group. For example, the x-risks career network didn't exist a year ago.
- Check out the Rationality Power Tools
- really, anything that will help make your life more efficient will help you be more efficient at world-saving. Getting Things Done, 4 Hour Workweek, lots more...
↑ comment by Jordan · 2010-12-01T22:16:11.285Z · LW(p) · GW(p)
A great list, although the first two points seem distinctly cult-like. I think it's important for world-savers as a group to maintain very broad connections to the greater social network.
↑ comment by PeerInfinity · 2010-12-01T22:24:38.370Z · LW(p) · GW(p)
good point, thanks, but I think it would still be a very bad idea to avoid having any friends who are world-savers, just to avoid seeming cult-like.
And I should mention that I think that it would also be a bad idea to avoid being friends with anyone who currently isn't a world-saver, because of a mistaken belief that only world-savers are worthy of friendship.
Also, even the cults know that making friends with non-cult-members can be an effective recruitment strategy.
I rephrased the second point as "Spend less time with your current friends, if it's obvious that they are causing you to be significantly less effective at world-saving, and the situation isn't likely to improve any time soon. But don't break contact with any of your current friends entirely, just because they aren't world-savers."
the original version was "Spend less time with your current friends, if it's obvious that they have no interest in world-saving, and they aren't helping you be more effective at world-saving, and you're not likely to make them any more interested in world-saving."
or maybe I should just drop the second point entirely...
↑ comment by wedrifid · 2010-12-02T00:35:27.152Z · LW(p) · GW(p)
or maybe I should just drop the second point entirely...
That depends whether you are optimising for world saving advice or social signalling.
In the current form it doesn't seem cultish so much as it seems blatantly obvious. To be honest the part about synergy and sharing actually struck me as more cultish.
↑ comment by juliawise · 2011-12-18T22:18:32.076Z · LW(p) · GW(p)
In my geographical area, I know only about 10 people who might be described as trying to save the world. I would hate to have that small a pool of potential friends. Also, I think spending time on non-world-saving activities is essential to my mental health. Some of that off time happens with friends who aren't interested in saving the world. That's fine.
↑ comment by [deleted] · 2010-12-02T02:12:32.451Z · LW(p) · GW(p)
Thoughts on the 1st and 2nd points:
To the extent that you are or can be someone others look up to and are inspired by, stay friends with as many non-world-savers as possible. If you assess yourself as unable to exert a possible influence in this way, have less non-world-saver friendships. Or at least keep your two worlds from colliding, so the positive one isn't hampered by the recreational one.
Having friends with shared interests is critical for many people -- I can't tell you how little I care about IT (my job) when I don't have other enthusiastic people to discuss the tech with. Or, wait, I guess I just did.
Jordan - When Ben Franklin started the Junto, and later the American Philosophical Society, was he being cultish?
↑ comment by Louie · 2010-12-02T00:50:32.344Z · LW(p) · GW(p)
Great ideas! I incorporated a not so subtle mention of the x-risks career network into #6 based on your suggestion. My goal here was to keep things general in tone and only deeply permeate the subtext + links with my own value judgments. It's a kind of overt neutrality with a strong undercurrent of things you can look into if you're interested. But if you never click on a link, you could just as easily be a member of any current activist set and still get a lot of value out of my writing.
Actually I think I'll write up a new section like "Become more generally capable" which seems like something I didn't specifically cover but almost certainly should.
Anyone have suggestions for "must have" items to go in that summary section? What other Less Wrong posts are good for that?
EDIT: Added as the new point #5 now -- it's general if you just read it but rich in specific examples if you follow up on the resources linked from it
↑ comment by [deleted] · 2010-12-02T02:16:43.429Z · LW(p) · GW(p)
"Become more generally capable" is an applause sign; it's too generic, not actionable. Although you can mitigate this by including as many specific actions as possible. Maybe stress the importance of proper diet (Paleo) and movement and sufficient sleep on general capability. Not sure what else would count without it turning into a list of how to become more specifically capable, contra "generally".
↑ comment by wedrifid · 2010-12-02T02:49:10.642Z · LW(p) · GW(p)
"Become more generally capable" is an applause sign
A rather weak one if it is. I don't associate it with strong affect of any kind.
; it's too generic
Possibly. More specificity could be helpful.
, not actionable.
Sure it is. Search your brain, the internet or lesswrong for personal development techniques and practices. There are posts here on self improvement, including some specifically for developing capabilities for 'world saving'. (One way to be less general would be to link to one of them.)
↑ comment by [deleted] · 2010-12-02T16:39:07.904Z · LW(p) · GW(p)
Perhaps I'm using the term "applause sign" incorrectly. My intended meaning there is that it is obvious, it provides no new information to anyone, everyone will nod their heads as though it is wisdom, but it is not specific enough to make it easy for people to do. Much like "lose weight" is a bad goal, but "get to 190 lbs, 10% body fat by April 15th" is a better goal, and is even better as "get to 190 lbs, 10% body fat by April 15th by limiting intake to 1000-1500 calories, 90% Paleo/primal foods, heavy lifting 3 days a week, daily yoga and mobility work, and 5 nature hikes a week for at least 30 minutes."
Pardon if the "applause sign" term was misappropriated. "Sounds like wisdom, but is not informative enough to be helpful" is probably closer.
comment by Jack · 2010-12-02T16:53:58.382Z · LW(p) · GW(p)
I don't really like this post. It reads like one of those fake advice websites set-up by companies selling products that target those advice seekers. Like "How To Get Rid of Acne" with not-so-subtle links to an order page for Clearasil. After I get over my exasperation at the tone, feel, and SIAI pitch I don't see anything new here to get excited about. Good collection of links I guess. Everyone else seems to love it though, so I suppose it just rubbed me the wrong way.
↑ comment by Louie · 2010-12-02T23:36:54.437Z · LW(p) · GW(p)
Thanks for your thoughtful criticism. Could you point out the worst abuses of my tone? I'm happy to modify it to improve things if anyone has specific suggestions from the text.
Also, you're incredibly fortunate to have learned nothing from my summary. Consider that in your case (and probably for others who agreed with you), you're a Less Wrong legend. Heck, you're #6 all-time in comment karma! For reference, Yvain is #8. Anyone who's been here long enough to be that right, that often, will find (almost) nothing new in this article. But if you had counter-factually never seen Less Wrong and arrived here in the past month or two, amazing as it may seem, you likely wouldn't know the majority of this "basic" information.
Did you at least get a little out of points #8 and #10? Those were the two bits that were actually my own original contributions and not generally part of the Less Wrong canon. Also, several of the links in #5 are unique to me, including the heading link, which didn't exist before I posted my friend Dennis' presentation online. Also, did anyone who read this actually sign up for any of ActiveInboxHQ.com, Mint.com or 43things.com? I would be tremendously less effective without each of those. They help on different time frames (daily, monthly, and yearly respectively).
Again, I'm sorry if this post is mostly repetitive and unnecessary for those of us who have been here awhile. But as FormallyknownasRoko points out, this article somehow didn't exist. Just like Roko, I needed to point a smart friend with no background in this stuff to something about optimal philanthropy. I felt like linking them straight to Anna's "Back of the Envelope" talk from 2009 or Eliezer's "Money the unit of caring" were both "too zoomed in" a spot to dump someone who didn't have an overview of why they might want to be an optimal philanthropist to begin with.
Anyway, I think this article is actually really important to get right. So your issue with the tone is very important and I'd like to address it if you think it would be a stumbling block for outside readers as well. Please point out the most egregious links and phrasings and I'll seriously consider revising them. I'd love to have this piece improved by the collective optimizing power of Less Wrong.
↑ comment by Jack · 2010-12-03T00:58:21.534Z · LW(p) · GW(p)
Heck, you're #6 all-time in comment karma!
Wait. I am? Yikes! Where is this information available? I think I probably just make a lot of comments. You're right though, I've been around here a while I should adjust for that.
Re: Style and tone
I have pattern-match aversions that are stronger than I'd like sometimes (though at other times this is extremely helpful). It's possible that I'm reading things into your post that you and the people who liked it didn't.
Just to start with, your post includes lots of links to pages that explain your point in detail -- but it is so overhyperlinked that the signal/noise ratio is greatly diminished. I don't understand why you linked to the wikipedia page on Gandhi, Code Pink, one.org, the Red Cross, Oxfam, PETA and Greenpeace, the entire Metaethics sequence, wikipedia on axiology, the Gates Foundation, the Clinton Global Initiative... and that's only halfway through point number two! People will be a lot more likely to click the links you think are important if they're the only links on the page.
Numbers 8 and 10 included some decent, new points.
I think the main issue though was that if you just look at points 2, 3, 6, 7 and 8 (half the post) and pay particular attention to the links that you endorse, this just comes off as an advertisement for the Singularity Institute for Artificial Intelligence. This seems even more apparent to me now that I've looked closer at it. Basically the message I get from this post is that I should make as much money as I can and give as much of it as I can to SIAI as soon as possible. As a result, the other points in the post just come off as attempts at establishing credibility and objectivity -- so that the message will seem more persuasive. To me, whether you intended it or not, this looks just like how those advice sites will give some decent but uncontroversial advice so the reader will say "Ah! This guy knows what he is talking about. Oh, he recommends this brand of acne cream? Let's get some!".
Now I know lots of people here think rational people should be making as much money as they can and giving as much of it, as early as they can, to SIAI. But I'd like to see the argument "SIAI is the most per dollar effective charity you can give to" divorced from the argument "Be smart and effective in who you give your money to". And when people want to promote SIAI, better they do so explicitly via posts with titles like "The Best Way to Save the World is to Give Money to SIAI" rather than posts ostensibly about something more general.
I imagine a number of the upvotes my comment got were from people who feel that SIAI stuff has a tendency to dominate discussions around here and leave little room for other rationalist enterprises. It's hard for Less Wrong to be a place for rationalists generally when it often comes across as a place for Singularitarians mostly. Maybe the minority needs to be more vocal about finding something different for us to do... but that seems like a path that could lead to factions (or maybe the minority just wants to hang out, I don't know).
↑ comment by Louie · 2010-12-03T05:01:15.898Z · LW(p) · GW(p)
Thanks for your suggestions. They're very helpful. I removed six of the less relevant links. Mostly from the beginning. The signal-to-noise ratio in them was indeed too low. Thanks again for pointing that out. I also removed a link to SIAI from point #6 based on your suggestions.
I left the links to other charities in the first paragraph for now because I feel like they are similar to the list of below-average charities I link to in #2 -- I mention them in the context of failure. So I think most people will realize they are not recommending them as helpful resources but just citing them as well-known examples. Although maybe I should remove the links just to deny those groups PageRank juice... especially since I mention them so high up in the article. I'm gonna go "nofollow" them now.
I don't want to quibble too much because my intent isn't to be right, but to make the best article I'm capable of making that people can link their friends to. So if you still have objections, could you elaborate on how I'm being partisan in #7 and #8? Here's how it looks to me now that I've made updates:
3 - Guilty. I'm definitely being partisan here. I make a direct link to SIAI. Although I then immediately link to a video which goes a long way to support my claim that SIAI is in fact a high-leverage charity. I think scope insensitivity prevents most people (including me) from imagining that a cause with something approaching existential risk reduction's potential to create value could even be possible. That video which I link to for support has been out for a year now. There were hundreds of people who saw that presentation. And over a thousand have watched it online. I've never seen anyone make any counter-claims or better estimates. I'm sure a refinement must be possible -- one which I'd love to see if anyone's up to the challenge. But I feel like it's a solid argument in a broad sense and justifies me linking to SIAI at least once directly.
6 - Link to SIAI removed.
7 - I link to a post about why money is superior to volunteering (in all but the most extreme cases) which justifies its conclusion rather well. Even in the derivative links, I don't think there's an appeal to support any particular charity.
8 - I link to an outside academic reference which explains x-risks rather well to someone who knows nothing. But it's in the context of a list of other causes activists might care about. I think it's pretty neutral. It's not like existential risk reduction is some taboo form of charitable undertaking that's inferior to poverty reduction or disease eradication. My current calibration tells me that x-risk reduction may be superior, but I'm only making the weaker, implicit claim that it's at least equal.
In the final analysis, I'm not undecided on the matter, so I don't think my piece needs to be entirely objective. It should be possible for someone reading my article to figure out what conclusion I've come to if they'd actually like to know.
↑ comment by Jack · 2010-12-04T07:20:56.322Z · LW(p) · GW(p)
First, can you tell me how you know about comment karma? Do you have admin powers or talk to someone who does? It is a little creepy.
Second, I'm not sure at this point what your goal is with this piece. Is it merely to provide general advice that will help people become more effective at saving the world? Or are you trying to get people to give money to SIAI, by convincing them this is what they should do to save the world? I think there is some inferential distance between our positions as the result of you considering those questions more closely related than I do. There is so much SIAI-cluster stuff in here that it seems like your goal is the latter.
I ask because while you've been more than admirable in responding to my individual criticisms (I upvoted the above comment) it's begun to feel like what you want this article to be just isn't something I would upvote even if we kept going through iterations of criticism and revision. Less Wrong is a fine place to test run articles promoting rationality generally, I'm not crazy about it as a place to run test articles promoting SIAI.
If you just want to drop in an endorsement of SIAI I recommend doing it in first person and possibly in a parenthetical. Instead of:
But once that’s out of the way, I devote the vast majority of my time and resources to contributing to other non-profits [link] with staggeringly higher pay-offs [link].
say
But once you've satisfied that emotional need, devote your time and money to charities that do the most good for the least money. (I've been convinced [link to your video] that SIAI [link to the website] is the best use of my hours and dollars).
And then leave it there! Number two has the exact same problem as number three (and the exact same Anna Salamon video, incidentally).
The links in number six still aren't about spreading awareness generally; the x-risk career network isn't going to be helpful for most people who read your article; same goes for the thing about Less Wrong search engine optimization. Unless your goal is to get people involved with the SIAI/FHI cluster of organizations, it doesn't make sense to link to them unless they are accompanied by a bunch of other examples for other kinds of charity.
Seven and eight aren't problematic on their own; they're problematic after you endorse a charity. They're particularly problematic after I get curious, do ten seconds of googling, and find out you're the Volunteer Coordinator for the charity you endorsed. They aren't credible because it looks like a conflict of interest.
Which is why I'm asking what your goal is, because it kind of looks like the goal is to get people who are curious about saving the world interested in SIAI -- my advice isn't good if that's your goal, as I'm giving suggestions that will make the other content of the post more convincing by making the SIAI-related content less central and seem less authoritative. There's nothing wrong with having this as your goal, but I don't think Less Wrong is the place to post SIAI pamphlet copy unless Eliezer has decided to give up the pretense about this being a place for rationalists generally.
↑ comment by Louie · 2010-12-04T09:08:15.560Z · LW(p) · GW(p)
can you tell me how you know about comment karma?
I have access to a copy of the LW database because I'm coordinating the addition of new features to the site between SIAI and Tricycle. I don't have any admin privileges on the live site or promotion powers or anything else that anyone else doesn't have though.
I've been trying to think of more site stats to add for people. I think a top commenter list might be nice... or at least having it appear in people's profiles so everyone can check their own stats. If there's interest, I can work on that or get someone else to add it.
↑ comment by TheOtherDave · 2010-12-04T20:59:41.642Z · LW(p) · GW(p)
I would love to be able to sort my own contributions by (Popular, New, Old, Controversial) the same way we can sort comments on a post. I'd also like Unpopular as a sort key there.
Basically, I use comment karma as a way of getting feedback on what people think is good and bad, but having to page through all of my comments looking for items with a high absolute value is awkward. The current arrangement seems to assume that comment karma scores don't vary much after a few days, and that just isn't true.
Less valuably but still interestingly, I'd like to be able to do the same with other people's contributions... e.g., find the most popular comments someone else has made.
Replies from: Unnamed↑ comment by Unnamed · 2010-12-05T02:35:46.723Z · LW(p) · GW(p)
I'd also like to be able to do this, especially for other people. When I'm checking someone's profile and wondering "who is this person?", being able to see their highest karma posts/comments would be a quick way to get some information about them.
↑ comment by RHollerith (rhollerith_dot_com) · 2010-12-04T14:26:04.523Z · LW(p) · GW(p)
"I've been trying to think of more site stats to add for people."
I'd like to see average score per comment. I.e., karma from comments divided by number of comments made. Hacker News puts this number in the profile.
(Actually, I prefer karma divided by words posted, but karma divided by comments posted conforms better to people's expectations because other sites like HN use it.)
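Both proposed stats are simple ratios over a user's comment history. A minimal sketch of the computation (the tuple shape and function name here are hypothetical, not the actual LessWrong schema):

```python
def karma_stats(comments):
    """Compute the two proposed profile stats from a user's comments.

    `comments` is a list of (karma, word_count) tuples -- a hypothetical
    shape for illustration, not the real site's data model.
    Returns (karma per comment, karma per word).
    """
    total_karma = sum(karma for karma, _ in comments)
    total_words = sum(words for _, words in comments)
    per_comment = total_karma / len(comments) if comments else 0.0
    per_word = total_karma / total_words if total_words else 0.0
    return per_comment, per_word

# Example: three comments with (karma, word_count)
per_comment, per_word = karma_stats([(10, 120), (4, 30), (1, 50)])
# per_comment == 5.0, per_word == 0.075
```

The per-comment figure is the HN-style number; the per-word figure penalizes long, low-karma comments, which is why some prefer it despite it being less familiar.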
↑ comment by JGWeissman · 2010-12-04T07:43:37.915Z · LW(p) · GW(p)
First, can you tell me how you know about comment karma? Do you have admin powers or talk to someone who does? It is a little creepy.
http://lesswrong.com/topcomments/
Replies from: Jack↑ comment by Jack · 2010-12-04T07:49:30.438Z · LW(p) · GW(p)
Ah....
That's not what
#6 all-time in comment karma
means to me (it sounds like the karma gotten from comments rather than posts), which would be much better evidence of my having been around a while than a poll I once did. But I am #6 there, so I guess that's what he was talking about.
ETA: Actually it looks like that comment is still getting upvotes. It has no business being on that page, as it is meta and no longer useful. If people want to downvote it off the page I would totally endorse that (just find a comment of mine you like, or upvote the karma dump that is attached to it). I'll edit the comment accordingly.
↑ comment by thomblake · 2010-12-03T00:44:38.450Z · LW(p) · GW(p)
Heck, you're #6 all-time in comment karma!
That's tracked somewhere? Where?
Replies from: Perplexed↑ comment by SarahNibs (GuySrinivasan) · 2010-12-07T01:32:42.626Z · LW(p) · GW(p)
Also, did anyone who read this actually sign up for any of ActiveInboxHQ.com, Mint.com or 43things.com?
I did. Mint.com and 43things.com, this weekend. So far the act of writing down some things I wanted to do has been good enough to spark action on a couple of random things I'd been procrastinating on (unsubscribing from blockbuster.com after exporting my queue, buying a new mattress, and signing up for a cashback credit card). We'll see if they do anything long-term.
↑ comment by Roko · 2010-12-02T21:02:22.615Z · LW(p) · GW(p)
I think this may be an entry to my competition.
comment by Will_Sawin · 2010-12-01T18:04:19.093Z · LW(p) · GW(p)
Can someone give or link to a convincing argument, possibly in the form of a lesswrong post, that having fun is beneficial? It seems intuitive, but that intuition doesn't answer:
How much fun should one have? What kind of fun should one have? etc.
if one wants to save the world.
Replies from: wedrifid, Louie, PlaidX, Yosarian2↑ comment by Louie · 2010-12-02T07:52:23.575Z · LW(p) · GW(p)
How much fun should one have?
I thought I specified this with "as often as you need".
Although, after reading your comment, it now occurs to me that others might not know how much fun they need. Is that true?
If so, I recommend you explore having an "unlimited" amount of fun without ceasing for days on end (think cliches of "spring break") until you can naturally feel the inflection point at which adding more hedonistic experiences on top of your current pleasure no longer improves your happiness and you long for "relief from recreation". If I'm remembering correctly, this is how I actually calibrated how much fun I need. Once you know this point, you can more naturally feel the bend of your own hedonistic pleasure curve and keep yourself in a state of content, disciplined happiness or slide yourself up towards bliss or down towards more subdued states depending on what's appropriate for the situation.
What kind of fun should one have?
Sex is indeed the correct answer. In some ways, I feel like a chicken-shit for not finding the right way to say this directly in my article. I guess I didn't want to point out sex as an ideal form of recreation since, based on reading comments here on LW, I perceive it as being relatively scarce among some readers. Now that I think of it, my mind actually estimates it as so scarce that it effectively rounds down to zero, unless I consciously think it through and realize it can't possibly be that divergent from any other community. Still, I know the pain of having had sex before, being reminded of how awesome it is while having no outlet for it, and being left feeling unbelievably miserable. I didn't want to leave even a single person reading my article in a place like that.
[ OTOH, if they're down here reading the comments, sorry about that. ]
I guess I normally just avoid mentioning sex to people unless I know they're in an abundant sexual situation in life. This heuristic is probably overkill. How do other people deal with this?
Replies from: katydee, VNKKET, diegocaleiro↑ comment by katydee · 2010-12-06T07:30:40.866Z · LW(p) · GW(p)
I'm not interested in sex, what's the next best thing?
Replies from: Strange7, wedrifid, CronoDAS, Louie, juliawise↑ comment by Strange7 · 2011-08-28T17:23:28.844Z · LW(p) · GW(p)
Cuddling with people who are willing to accept it as just cuddling might be a good place to start.
Another avenue to explore is, imagine you had enough resources that neither physical health and safety nor status contests presented any challenge to you, but not quite enough for world-shattering extravagances. Beloved king of a small, peaceful city-state, maybe, with a staff of wise and dutiful ministers who can handle all the routine administrative drudgery. What would you do with your time? Appeal to the senses with fine food and music, perhaps, or explore mathematics?
Once you have a list of things, you'll probably notice at least a few of those diversions don't actually require regal-level assets to dabble in and enjoy, so try the practical stuff for real and explore different variations until you find something you like. Then, for each of the 'sweet spots,' go find a community of hundreds of people on the internet who have been obsessing about that particular sort of enjoyment for longer than you've been alive, and mine them for ideas, bearing in mind always that the only wrong ways to have fun are the ones that either a) have unacceptable long-term consequences for yourself and/or people you care about or b) don't actually result in fun.
↑ comment by VNKKET · 2011-08-30T02:37:46.546Z · LW(p) · GW(p)
I know the pain of being someone who has had sex before, and then being reminded of how awesome sex is without having an outlet for it at the time, and having it leave me feeling unbelievably miserable. I didn't want to leave even a single person reading my article in a place like that.
This thought is very much appreciated.
↑ comment by diegocaleiro · 2010-12-02T23:22:03.257Z · LW(p) · GW(p)
Everyone is equipped (Buss 2004) with a model of the listener when we speak. This model prevents us from saying immoral things in the presence of people whose concept of morality we are well acquainted with.
For speaking, I assume you can just mention sex as much as your mind naturally allows. In the case of writing, where readers are many and you have no model of them in your mind, shut up and calculate. That is, only bring up the pleasure of sex if you are using it in an argument about something else.
This also helps avoid Status Promotion bias, the bias of pretending to have an awesome sex life so that people become attracted to you.
There are so many kinds of fun to be had that I suggest sex is overrated. Take great movies, roller coasters, conversations with friends, swimming, watching fire burn, picnics, and hiking as prime examples which do not last very long (as opposed to video games, which simply exhaust your minutes away).
↑ comment by PlaidX · 2010-12-01T21:40:12.862Z · LW(p) · GW(p)
One reason is that if you attempt to be an optimized world-saving robot, your mental health will deteriorate.
Mine did, at least. Now I'm in therapy. Take your mental health seriously, don't think you can sweep it under the rug.
Replies from: Will_Sawin, PeerInfinity↑ comment by Will_Sawin · 2010-12-01T22:13:43.712Z · LW(p) · GW(p)
What fun would you say is optimal for preserving mental health? Seems like that would be social contact, but it's not clear whether that intuition is correct.
Replies from: PlaidX, diegocaleiro↑ comment by diegocaleiro · 2010-12-03T19:27:18.591Z · LW(p) · GW(p)
This intuition is correct if you take mental health to correlate highly with health in general.
Except for ageing and smoking (also called slow-motion suicide), not having a deeply rewarding and intricate social life is the most important factor determining your health.
"People with strong social relationships were 50 percent less likely to die early than people without such support, the team at Brigham Young University in Utah found. They suggest that policymakers look at ways to help people maintain social relationships as a way of keeping the population healthy. "A lack of social relationships was equivalent to smoking up to 15 cigarettes a day," psychologist Julianne Holt-Lunstad, who led the study, said in a telephone interview. Her team conducted a meta-analysis of studies that examine social relationships and their effects on health. They looked at 148 studies that covered more than 308,000 people.
Having low levels of social interaction was equivalent to being an alcoholic, was more harmful than not exercising and was twice as harmful as obesity. Social relationships had a bigger impact on premature death than getting an adult vaccine to prevent pneumonia, than taking drugs for high blood pressure and far more important than exposure to air pollution, they found."
Paper is here: http://www.plosmedicine.org/article/info%3Adoi%2F10.1371%2Fjournal.pmed.1000316
Replies from: wedrifid↑ comment by wedrifid · 2010-12-03T19:32:12.421Z · LW(p) · GW(p)
META--: Will someone Please create a way in which comments can use italicized text so I don't have to do This to emphasize a word!
We use Markdown. Click the "Help" link at the bottom right of the comment box. Use either asterisks or underscores around a word or phrase that you wish to emphasise. Two of the same on each side for bold.
↑ comment by PeerInfinity · 2010-12-01T21:54:23.192Z · LW(p) · GW(p)
I made this same mistake, and ended up being significantly less optimized at world-saving as a result.
Replies from: None↑ comment by Yosarian2 · 2012-12-29T17:03:05.776Z · LW(p) · GW(p)
Well, if you believe in a utilitarian theory of morality, then the most ethical thing to do is to maximize utility (happiness) for everyone, including yourself. So basically, you should have as much fun as you can, except in cases when you could devote that same effort to increase someone else's happiness by a greater value.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2013-01-01T18:06:53.469Z · LW(p) · GW(p)
That's not relevant. The claim being made is that the best way to increase others' happiness is to have fun yourself, at least some.
Replies from: Yosarian2↑ comment by Yosarian2 · 2013-01-01T18:50:49.171Z · LW(p) · GW(p)
Fair enough.
In that case, I would just mention that if you improve your own mood, that is likely to improve the mood of people close to you and in your social network in general. Both happiness and sadness are contagious.
Also, maintaining a positive mood is likely to make you more efficient at other tasks.
comment by Vladimir_Nesov · 2010-12-01T21:53:26.311Z · LW(p) · GW(p)
We're entitled to nothing and no one.
What does this mean? I understand the intended affect, but not the denotation.
In such cases, the topic is often the existence of moral arguments against a position. "What? He did S? How could he?" is raw material for constructing moral arguments that allow you to have less of S done, by affecting either the person in question, or others with influence over that person. But in this particular case, it's not apparent to me what kind of moral argument is to be constructed (apart from using empathy of others to yourself, and thus trying to enforce your decision through actions of others by strongly feeling the need for your decision to be enforced).
Replies from: Louie↑ comment by Louie · 2010-12-01T23:54:46.571Z · LW(p) · GW(p)
I actually thought this was a very useful transition between the two sentences it abuts, because it summarizes and repeats their ideas in another way. Is it not clear that the underlying sub-text of my sentence is more like "The way the world currently works, we're entitled to no material support and no one's a priori support for our causes"?
Since we're not "entitled to [some]one [...] based on [...] the intrinsic goodness of our cause", this explains why you shouldn't disrespect them (or at least not "net disrespect" them) over their failure to join. This is less of a moral argument and more a description of how the world currently works. We can't expect support from people in proportion to the obvious (to us) value that each cause actually contains. I suppose it might be different if the world had more rationalists.
Perhaps another way to think of this is that we shouldn't doubly disrespect a person who isn't a rationalist and isn't saving the world. If they're not on board with the idea of following chains of logic to their conclusions and then accepting them, it's a bit like beating a blind dog for walking into the wrong room of your house. They might figure things out eventually by some random cue, but it's cruel and ignores their disability in a thoughtless sort of way. Better to wait for them to regain their eyesight before expecting them to really understand... and hopefully at that point, you haven't been so heavy-handed with them in the past that they will run away in fear of you.
comment by Mass_Driver · 2011-08-28T04:58:26.903Z · LW(p) · GW(p)
So, this is a fantastic exposition of how to be a rational altruist -- but it still left me a little disappointed, because the title suggests that you will teach us how to "save the world," i.e., how to accomplish some really epic-level quest like ending hunger or disease. You don't actually do that here.
Instead, you argue that the most good we can realistically hope to accomplish is to educate people and to donate to efficient charities on a modest scale and to have fun, and so you set about teaching us how to do that.
Even assuming that you're correct, it would still be nice to know how a rationalist might go about trying to succeed at an epic altruistic quest. I can't quite let go of the 'actually save the world' option until I've thought about what the best strategy is for doing that and then seen, in all its depressing detail, how and why the optimal epic plan would be less good than the optimal modest plan. I suspect other people might also benefit from the comparison.
comment by Vladimir_Nesov · 2010-12-01T21:48:12.670Z · LW(p) · GW(p)
Things change. And even if they don't, people deserve your respect regardless of whether they want to help save the world or not.
Can you unpack this sense of "respect"? It seems to me that it must necessarily be influenced by properties like this; I don't know how to define the word so that it isn't.
(Of course, the sign of the influence is not a given; depending on one's epistemic situation, respect could well go down if you learn that the person believes X, even if you're pretty certain X is the correct thing to believe. And the extent of the influence could well be small in most cases, but again depending on what other things the person knows.)
Replies from: Louie↑ comment by Louie · 2010-12-01T23:55:56.790Z · LW(p) · GW(p)
I think this was my way of saying that it makes sense as an instrumental rationality technique to afford people at least some positive level of respect (as opposed to negative respect levels, or overall disrespect) regardless of their current world saving position. I could say all that in the article, but it sounds mealy-mouthed that way.
So my advice is that if you're really a "respect-Bayesian" and you have to account for evidence (so you're duty bound to adjust downward), try not to update others' total respect value below zero over this. Or move your zero-floor down so that almost everyone has a positive value both a priori and in practice.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-12-02T00:59:36.606Z · LW(p) · GW(p)
At this point, when you start discussing "positive/negative respect", I'd need to ask what that means in even more detail. What defines the "zero point", why would you have a total order ("levels"), why is this an interesting concept. Again, I see the affect, the surface promise of meaning, but not any straightforward way of discerning what's actually meant.
(I agree that with any reasonable guesses at the concepts, "respect" going into the "negative" because of not saving the world in the case of not being aware of the arguments is incorrect, but I don't appreciate the abundance of apparently arbitrary detail in your explanation.)
Replies from: FAWS, wedrifid↑ comment by FAWS · 2010-12-02T01:20:44.669Z · LW(p) · GW(p)
At this point, when you start discussing "positive/negative respect", I'd need to ask what that means in even more detail. What defines the "zero point", why would you have a total order ("levels"), why is this an interesting concept. Again, I see the affect, the surface promise of meaning, but not any straightforward way of discerning what's actually meant.
One possible definition of a zero point would be signaling (or being perceived to signal) neither a raising nor a lowering of the status of the person in question. So the imperative could be reformulated as "don't make moves to lower other people's status in interactions with them".
Replies from: wedrifid↑ comment by wedrifid · 2010-12-02T01:34:05.675Z · LW(p) · GW(p)
One possible definition of a zero point would be signaling (or being perceived to signal) neither a raising nor a lowering of the status of the person in question.
(It isn't your imperative but...) High status people will often take that as disrespectful.
Replies from: FAWS↑ comment by FAWS · 2010-12-02T01:51:44.725Z · LW(p) · GW(p)
I understand treating higher status people like you would treat equal status people as signaling a lowering of their status so I think that's already taken into account.
Replies from: wedrifid↑ comment by wedrifid · 2010-12-02T02:36:38.240Z · LW(p) · GW(p)
I understand treating higher status people like you would treat equal status people as signaling a lowering of their status so I think that's already taken into account.
Not necessarily. Status is transactional and dynamic. High status people (of a certain kind) demand a constant stream of 'status raising' behaviors in the same way governments demand taxes.
↑ comment by wedrifid · 2010-12-02T01:03:33.979Z · LW(p) · GW(p)
I would add that the advice would seem better replaced with "for the purpose of social signalling don't be a respect-Bayesian". Now it seems to be "bias your bayesian updating such that your posterior respect gives desirable signals".
(Although in the absence of the unpacking I can only infer.)
comment by Henrik_Jonsson · 2010-12-01T19:34:20.223Z · LW(p) · GW(p)
Very good post Louie! I agree with all the points, pretty much.
Number 11 seems especially important - it seems like a common trap for people in our crowd to try to over-optimize, so for me having an enjoyable life is a very high priority. A way of thinking that seems to work personally is to work on the margin rather than trying to reorganize my life top-down - to try to continually be a bit more awesome, work with more interesting people, get a bit more money, invest a bit more energy, etc, than yesterday.
In contrast, if I started out trying to allocate the resources I had access to / could gain access to in an optimal manner I suspect I would be paralyzed.
comment by Roko · 2010-12-02T20:16:46.772Z · LW(p) · GW(p)
Don’t try to out-think the non-profits you support -
I take issue with this. Many nonprofits are not so smart. Some are idiotic. Always do due diligence.
Replies from: Louie, David_Gerard↑ comment by Louie · 2010-12-02T23:48:45.921Z · LW(p) · GW(p)
You're right. I don't want to leave people thinking that non-profits are always right and should never be questioned or given outside advice. Perhaps that language was overly strong.
Is there a modification I could make to improve it?
I thought the way I phrased the whole sentence made it context specific
Don’t try to out-think the non-profits you support - they wish you were donating now, not just later.
But I could see how it could be construed to be making a broader statement than I intended.
Maybe it could be:
Don’t try to out-think the non-profits you support in this specific way - they wish you were donating now, not just later.
Don’t try to out-think the non-profits you support like this - they wish you were donating now, not just later.
When it comes to donating, don’t try to out-think the non-profits you support - they can likely do more with your money in the present.
Don’t try to out-think the non-profits you support without talking to them - they probably wish you were donating now, not just later.
EDIT: Changed in article. What do you think?
↑ comment by David_Gerard · 2010-12-02T20:57:29.947Z · LW(p) · GW(p)
These can be times when contributing one's time can actually be more useful than the equivalent amount of money. Case by case, of course.
comment by patrissimo · 2010-12-02T00:34:22.944Z · LW(p) · GW(p)
I think this is missing the primary advice of "work on instrumental rationality." The art of accomplishing goals is useful for the goal of saving the world - and still useful if you change your goal later! (say, to destroying the world, or moving to a new one :) )
So while this is a great list of ways to be instrumentally rational specifically for philanthropy, I think the general tools of instrumental rationality are also useful too (like: have concrete goals, hypothesize how to achieve them, try methods, evaluate them and change based on results, find mentors who have succeeded at what you are trying to do, make sure you talk to people who think differently from you, be conscious about where to spend limited willpower...)
Replies from: Louie↑ comment by Louie · 2010-12-02T01:25:00.402Z · LW(p) · GW(p)
Agreed. I'm surprised I managed to write this whole list without remembering to add that.
I think it's one of those fish in water kind of things. I was going out of my way to summarize the points in my mind that I attribute somewhat to LW and instrumental rationality didn't naturally fall into that category when I plumbed my brain for "important things less wrong can teach you about saving the world". I get the feeling that I already absorbed a high enough level of instrumental rationality before I ever made it here that I didn't actually get any additional mileage out of the relevant material on LW about it. In fact, it's so yesterday's news to me that I often forget that others don't have similar predispositions or that others are still developing here and can use pointers to helpful material on the subject.
Thanks for reminding me not to take this for granted! I'll add a new section in a few hours.
EDIT: Added as the new point #5.
comment by JonatasMueller · 2010-12-03T00:03:33.231Z · LW(p) · GW(p)
Good post, though I thought that it is a little too focused on money. It could say (more explicitly) what types of charity are best, and what types of action... and other ways to help that aren't money.
In my opinion, some of the most efficient ways to achieve a positive difference are, foremost (these are strategic priorities with more positive potential than all the rest): human genetic engineering and intelligence augmentation, artificial intelligence, and reduction of existential risks. In second order of importance (these are ways to increase utility in the here and now): destroying animals and the environment (which are a cause of huge suffering), producing artificial meat to replace cruel animal farming, and promoting birth control among the poor.
Activities to achieve these goals include:
- Becoming very rich and using the money to achieve them;
- Convincing people with lots of money to donate to these causes, and any other people to become aware of them and contribute somehow, by various means, such as by writing books, articles, making movies, posting on websites, talking to them, encouraging them to do activities to achieve them;
- Conducting research personally in fields such as genetic engineering, artificial intelligence, artificial meat, birth control, etc., and convincing more people to do the same;
- Helping or creating charity organizations directed towards birth control;
- Fighting and discrediting religion, which is a significant hurdle to many of these efforts;
- Convincing people about the right general framework of ideas that is compatible with these goals.
In my opinion, most other kinds of efforts to make a positive change, such as feeding the poor, preserving the environment, curing diseases, or giving education to the poor, are overrated and short-sighted, their long-term effects being relatively small. An increase in intelligence would produce an increase in the ability to do everything else, so it would be much more effective in the long term; all these measures lose importance if our civilization and technological advancement are lost to some global catastrophe.
When AI starts working, several problems on which people work now will be rapidly solved (except those that require lengthy experiments). Therefore focusing on these problems now may be a waste of time, except in the meantime, before AI solves them.
Raising money seems like a matter of chance or luck. You'll naturally try it, but you can't count on it, so it's not a matter of deciding to do it. Raising public awareness and enthusiasm seems to be an action with relatively high potential: you can potentially get many other people to raise money, do scientific research, and raise public awareness and enthusiasm in their turn, so this may be the action with the most potential, even though it accomplishes things only indirectly. Doing scientific research personally seems to require high stakes in career and life, and seems to depend a bit on where you live and on the things you like to study and work in. This one is a hard decision, because it is sort of a gamble with your life.
Replies from: XiXiDu, Louie, XiXiDu↑ comment by XiXiDu · 2010-12-03T09:27:02.184Z · LW(p) · GW(p)
I should add that a lot of people here agree with your stand except that there is a bigger risk from AI than there is benefit. That is, we'll have to work on AI but first we should figure out how to make it friendly. That is what the SIAI is working on.
By the way, welcome to Less Wrong. You know me as Alexander Kruel on Facebook.
Replies from: timtyler↑ comment by timtyler · 2010-12-06T23:12:24.067Z · LW(p) · GW(p)
There seems to be a significant "risk" of making a much better world with much smarter agents and a lot less insanity and stupidity. A lot of people see that as a bad thing, however.
Looking at history, this sort of thing is fairly common. Most kinds of progress face resistance from various kinds of luddites, who would rather things stayed the way they were.
Replies from: nshepperd↑ comment by nshepperd · 2010-12-06T23:32:34.978Z · LW(p) · GW(p)
What? I don't follow. Are you saying it would be a much better world if an unfriendly AI replaced humanity? I don't think it's luddite-ish to say I'd rather not die so something else can take my place.
Replies from: JonatasMueller↑ comment by JonatasMueller · 2010-12-17T01:57:31.765Z · LW(p) · GW(p)
I'd agree to AI "unfriendly" (whatever this means... it shouldn't reason emotionally, it should just be sufficiently intelligent) replacing humanity... since we are the problem that we're trying to solve. We feel pain, we suffer, we are stupid, susceptible to countless diseases, we aren't very happy and fulfilled, etc. Eventually we'll all need to be either corrected or replaced. An old computer can only take so many software updates before it becomes incompatible with newer operating systems, and this is our eventual fate. It is not logical to be against our own demise, in my viewpoint.
↑ comment by Louie · 2010-12-03T11:09:26.131Z · LW(p) · GW(p)
Welcome to Less Wrong!
Hey, have you read this paper about cognitive enhancement? If not, you might like it.
Anyway, a lot of people have thought about this for years. This piece is a summary of that analysis. If you check the links in this article like these two videos and then read just these two articles, you might see more clearly why my article is organized the way it is and focuses heavily on donating while more or less ignoring other strategies.
And I agree with you that most efforts to make a positive change are overrated and short-sighted. That was kind of my point in #2 and #3. Most causes are inefficient at creating good outcomes or optimized for making you feel good, not creating good. I'm working on solutions versus maintenance, but if other people are determined to work on maintenance activities, it's better if they do them wisely.
It sounds like you already know a good deal about existential risk and the potential of AI. If you want to help out SIAI, I'm the remote volunteer coordinator. You can email me at louie.helm@intelligence.org
↑ comment by XiXiDu · 2010-12-03T09:22:30.357Z · LW(p) · GW(p)
Good post, though I thought that it is a little too focused on money.
The problem on Less Wrong is that there exists an unidentified subgroup who believes that 1.) the best you can do is support the SIAI 2.) most people can best support the SIAI by donating money. This view might not be the general consensus here, yet the most influential people certainly believe so.
What is necessary is a paper or article sequence that outlines a decision procedure and exemplifies rational choice by dissolving the question of the best (most effective) possible action(s) one can take to benefit humanity and possibly help save the world. This hasn't been done. Supposedly you should be able to conclude an answer here by reading the sequences. That might be the case, but it isn't very effective, as the question is at best treated as a marginal issue. How to save the world is not an explicit conclusion of the current sequences.
Replies from: Louie, wedrifid, Louie↑ comment by Louie · 2010-12-04T03:04:29.651Z · LW(p) · GW(p)
who believes that 1.) the best you can do is support the SIAI 2.) most people can best support the SIAI by donating money.
A less collapsed summary of the view you describe is:
1) Saving lives is good
2) X-risk reduction is a surprisingly high-leverage way to save lives
3) Using money gives you more options for how to contribute to a cause, not less
So I think it's a decidedly uncharitable framing to imply that our analysis reduces everyone's options down to the singular option of donating. An equally valid interpretation is that everyone now has unlimited options for how to save the world. You can be a computer programmer or an online poker player or a circus performer or anything else you love doing. Then you can turn the thing you love doing the most or what you're best at (usually the same thing) into a vehicle for saving the world.
Characterizing all the millions of different ways to earn money for saving the world as just "donating" is like characterizing all books as "just paper" or all software as "just bits". The means of transmission (paper, bits, donating) isn't the important part for any of these. What we care about in practice is the content of those transmission mechanisms: the functioning of the software, the content of the book, or the career / economic activity that allows you to save the world.
I know you don't argue explicitly against this, so I apologize for laying all this out in response to your comment. I hope you don't mind me expounding on this here to try and develop a more helpful framing.
↑ comment by wedrifid · 2010-12-04T04:02:49.250Z · LW(p) · GW(p)
What is necessary is a paper or article sequence that outlines a decision procedure and exemplifies rational choice by dissolving the question of the best (most effective) possible action(s) one can take to benefit humanity and possibly help save the world.
As a point of detail that isn't the kind of question you dissolve, just one you answer! :)
↑ comment by Louie · 2010-12-04T03:26:13.853Z · LW(p) · GW(p)
there exists an unidentified subgroup
Does this phrase actually add clarifying detail to your premise?
How are we unidentified? It seems to me like the majority of posters on Less Wrong who strongly advocate a view along the lines of what you're describing post under our real names. What more could we do to identify ourselves?
This phrase explicitly accuses the people you disagree with (or pretend to disagree with?? I can never tell with you) of being sinister and shadowy. It's probably not warranted in the case of clearly identified people who share their views openly and honestly.
Replies from: XiXiDu, wedrifid↑ comment by XiXiDu · 2010-12-04T09:43:19.174Z · LW(p) · GW(p)
Since I am not sure who shares that opinion here, and therefore how many do, but I know that some do, I referred to them as an unidentified subgroup. That label was solely reflective of my current state of knowledge and not supposed to be judgmental.
I often use a translator which outputs many different English words for a German concept. I suppose that might be one of the reasons why what I write sometimes appears weird or inept.
Replies from: Emile, Louie↑ comment by Emile · 2010-12-04T11:01:36.950Z · LW(p) · GW(p)
As a very unrelated side note, I usually read your username as Chinese, where "xi du" is "to smoke/take drugs" so "xi xi du" would be something like "to casually try drugs" (the verb is doubled to reduce emphasis). I have no idea if that's how you meant it.
Replies from: XiXiDu, TobyBartels, wedrifid↑ comment by XiXiDu · 2010-12-04T12:08:08.043Z · LW(p) · GW(p)
I came up with that nickname at the age of 16 (in the year 2000). It was supposed to be a random sequence of letters that is pronounceable in German. A search gave no results, hence I naively suspected it to be unique. Only much later did I learn that many sequences of letters humans are able to pronounce also bear a meaning in some language. Last year I learnt that xixi means piss in Portuguese. Some native English speakers also asked me if it is supposed to mean sexy dude. But I can assure you that I never intended my nickname to signal a sexy dude who takes a piss and casually tries drugs. I was rather annoyed that many nicknames were already taken when I tried to register with various services. I also wanted to be uniquely identifiable. It pretty much worked, as almost all of the 46,100 results of a Google search for xixidu are related to myself.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2010-12-04T13:00:26.009Z · LW(p) · GW(p)
How do you pronounce your nickname?
I'd vaguely assumed the name was Chinese, with some presupposition that you were, too.
Replies from: XiXiDu↑ comment by XiXiDu · 2010-12-04T14:09:10.367Z · LW(p) · GW(p)
Ksicksiduh - but I prefer to go by my real name (Alexander Kruel) when it comes to vocal communication.
I've never been Chinese. It wasn't my intention at all to sound Asian. I was looking at an instant-messenger avatar of a rubber duck when that particular sequence occurred to me.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2010-12-04T14:24:07.306Z · LW(p) · GW(p)
Thanks-- I pronounce people's names in my head when I'm reading.
"Xi" is a letter combination that shows up in English transliterations of Chinese. That, plus your saying that English isn't your first language, was what gave me the false impression.
↑ comment by TobyBartels · 2010-12-04T20:37:53.821Z · LW(p) · GW(p)
I've always pronounced your nickname in my head as if it were a Pinyin transliteration of Chinese (much like the English words "she she do"), even though I had no idea what it might mean. Making every other letter uppercase also gives the impression of Chinese (where there can be disagreement between transliterations for words made of several characters, such as "pinyin" vs "pin-yin" vs "pin yin", to take an example from my comment), even though nobody actually transliterates Chinese quite like that.
But now I'll do German instead.
↑ comment by wedrifid · 2010-12-04T04:06:48.846Z · LW(p) · GW(p)
How are we unidentified? It seems to me like the majority of posters on Less Wrong who strongly advocate a view along the lines of what you're describing post under our real names. What more could we do to identify ourselves?
Now I'm feeling username envy. I'm Cameron Taylor, from Melbourne! I can't think of a better world-saving option than the one in question, even if my advocacy is of the form 'least bad'.
(I don't know what the 'unidentified subgroup' idea was supposed to be. It makes no sense.)
Replies from: Louie↑ comment by Louie · 2010-12-04T07:26:49.180Z · LW(p) · GW(p)
Hey Cameron!
You're from Melbourne?? I'm American but I've been traveling in Australia the past few months. I'm in Byron Bay now. Do you know Patrick Robotham? I met him when I first got back here and stopped in Melbourne. He organizes the Less Wrong Meet-up at Don Tojo in Melbourne near the University. You guys actually have a surprisingly good number of LW rationalists there. Perhaps the most anywhere outside of the California Bay Area, New York, or London.
Replies from: wedrifid↑ comment by wedrifid · 2010-12-04T07:56:53.004Z · LW(p) · GW(p)
I haven't made it to one of the meet ups yet. I must at some stage. I didn't realize they were so well attended!
Replies from: Louie↑ comment by Louie · 2010-12-04T10:20:24.464Z · LW(p) · GW(p)
They're brand new. Patrick only started organizing them after meeting with me and realizing how many other LWers there were in Melbourne. I think there have only been one or two of them so far, but from what I heard from Patrick there is already a critical mass of attendees.
comment by Upset_Nerd · 2010-12-02T09:04:28.034Z · LW(p) · GW(p)
Considering my recent personal experience (which I mentioned here) with removing a huge hidden negative motivation from my life I'd say that the absolutely most critical thing is to find out why you want to save the world.
If you find out that it's actually because you feel some kind of SASS threat if you don't try to save the world, I'd strongly suggest trying to directly remove that feeling anyway. The risk here is of course that after you've done so, you might find out that you never actually wanted to save the world to begin with. However, I've personally experienced the shift in identity from feeling like I should be a good person to feeling like I am a good person, and the much-increased motivation to actually do good that it brought with it. So I suspect that the few people who'll realize they don't actually want to save the world will be more than compensated for by the much-increased effectiveness of the people who go from feeling like they should be world-savers to feeling like they are world-savers.
Replies from: Louie↑ comment by Louie · 2010-12-02T23:55:07.508Z · LW(p) · GW(p)
Excellent point (I think). What's a SASS threat?
Replies from: Upset_Nerd↑ comment by Upset_Nerd · 2010-12-05T13:39:19.512Z · LW(p) · GW(p)
Sorry for being late with my answer.
SASS is PJ's terminology; it stands for Significance, Affiliation, Stability, and Stimulation. The exact categories aren't that critical. The important idea is that they represent the terminal values all humans seem to have hard-wired into them, so to speak.
So what I meant is that it's important to know why you're motivated into doing action X. If it is because you've learned that you'll gain SASS by doing X then everything is fine. That's operating under what PJ calls "positive motivation" and you'll feel as if you really want to do it and you can pursue X without feeling stressed out, by naturally selecting the best course of action, among other things.
If you're operating under a SASS threat on the other hand, which you do if you've learned that you'll lose SASS if you don't do X, then your mental state will be completely different. This is what he calls "negative motivation", and there you'll feel like you should and ought to do X without really feeling like you genuinely want to. It's usually accompanied by doing only as much of X as necessary to remove the immediate feeling of threat, and then mostly feeling bad about not doing more even though you feel like you "could" or "should" be doing more.
comment by patrissimo · 2010-12-02T00:29:42.373Z · LW(p) · GW(p)
Love almost all of this. I worry that (3) is making the common rationalist mistake of basing a strategy on the type of person you wish you were rather than the type you are. (Striding toward Unhappiness, we might call it).
So, you wish that your passion for a cause were more strongly correlated with the utilitarian benefit of that cause, and game the instinct to work on what feels good with small gifts while putting most of your effort towards what you think is optimal. But if the result is working on something you aren't as passionate and excited about, you may work less effectively, burn out on helping the world, or just be miserable. Your taste for a cause is what it is, not what you want it to be. It matters whether you feel good about what you do.
(4) compensates for this to some degree - you will tend to find reasons to value & love whatever you do, so to some extent you can pick a cause first and fall in love with it later. But this doesn't always work, and can result in demotivated team members who demoralize others. A passionate & excited team is a high-performing team.
comment by PeerInfinity · 2010-12-06T18:28:33.038Z · LW(p) · GW(p)
A random thought:
If you donate less than 10% of your income to a cause you believe in, or you spend less than one hour per week learning how to be more effective at helping a cause you believe in, or you spend less than half an hour per week socializing with other people who support the cause... then you are less instrumentally rational than the average christian.
edit: shokwave points out that the above claim is missing a critical inferential step: "if one of your goals is to be charitable"
edit: Nick_Tarleton points out that the average christian only donates 2.9% of their income to the church. And they don't go to church every sunday either. Also, being charitable ≠ doing good.
explanation:
The average christian donates about 10% of their income to the church. This is known as "tithing". The average christian spends about 1 hour per week listening to a pastor talk about how to become a better christian, and be more effective at helping the cause of christianity. This is known as "going to church", or "listening to a sermon". And going to church usually involves socializing with the other members of the church, for an amount of time that I'm estimating at half an hour.
And that's not counting the time they spend reading advice from other supporters of the cause (i.e. reading the bible), or meditating on how to improve their own lives, or the lives of others, or other ways to support the cause, or hacking their mind to feel happy and motivated despite the problems they're having in life (i.e. praying), or the other ways that they socialize with, and try to help, or get help from other people who support the cause (i.e. being friends with other christians).
The point I'm trying to make is that the christians are investing a lot of resources into their vaguely defined mission, and it would be sad if people who care about other, actually worthwhile causes are less instrumentally rational than the christians.
edit: oops, there's already a good LW article on this topic.
Replies from: Nick_Tarleton, shokwave, zntneo, TheOtherDave↑ comment by Nick_Tarleton · 2010-12-06T19:43:32.376Z · LW(p) · GW(p)
First off, strongly agreed that community matters and is worth investing in.
...you are less instrumentally rational than the average christian.
You may be less something, but rational targeting of effort (both doing something besides converting people, and being strategic at whatever you're doing) utterly swamps quantity of effort here. Being charitable ≠ doing good.
The average christian donates about 10% of their income to the church.
Source, or are you just assuming people do what they're supposed to? This (first search result) says the mean is 2.9%. (I would also bet that most Christians don't know what they nominally should give.) (ETA: I read your comment after you deleted the paragraph acknowledging this.)
the christians are investing a lot of resources into their mission of converting the whole world to christianity
I feel obligated to point out (outgroup homogeneity bias, etc.) that far from all Christians see this as their goal.
Replies from: shokwave, PeerInfinity↑ comment by shokwave · 2010-12-07T06:26:11.933Z · LW(p) · GW(p)
After some math, 2.9% still feels like more than most people donate to their non-religious causes. 2.9% of the average annual expenditure is more than 1400 dollars! I am willing to accept that Christians are doing more for their cause than I am for mine. Mine is more effective, but unless I can say that Christianity is a net negative (I can't), when you multiply effort by effectiveness I still come out below Christians.
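The arithmetic behind shokwave's figure can be checked in a few lines. Note that the ~$49,000 average annual US consumer expenditure used here is an assumed ballpark for the period (it does not appear in the thread):

```python
# Sanity-check "2.9% of the average annual expenditure is more than 1400 dollars".
# The $49,000 average annual expenditure is an assumed circa-2010 ballpark figure.
avg_annual_expenditure = 49_000
donation_rate = 0.029  # mean Christian giving rate cited upthread

annual_donation = avg_annual_expenditure * donation_rate
print(f"${annual_donation:,.0f} per year")
```

With these assumptions the result is a bit over $1,400 per year, consistent with the comment's claim.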
↑ comment by PeerInfinity · 2010-12-06T20:16:19.602Z · LW(p) · GW(p)
good points, thanks. I made some more edits.
I added a note mentioning that the mean is 2.9%, and that comment "Being charitable ≠ doing good."
I replaced "their mission of converting the whole world to christianity" with "their vaguely defined mission"
↑ comment by shokwave · 2010-12-06T19:22:33.984Z · LW(p) · GW(p)
People are going to balk at your use of "instrumentally rational". I would suggest explicating the chain of inference:
If you donate less than 10% ... then you are less charitable than the average christian; and if one of your goals is to be charitable, then you are less instrumentally rational than them too.
Replies from: PeerInfinity↑ comment by PeerInfinity · 2010-12-06T19:34:59.759Z · LW(p) · GW(p)
you're right. thanks. I updated the comment to include your change.
↑ comment by zntneo · 2011-05-12T01:22:35.567Z · LW(p) · GW(p)
It seems you are assuming that donating to a church = donating to a good cause, which I am not sure is always, or even most of the time, right.
Replies from: PeerInfinity↑ comment by PeerInfinity · 2011-05-21T04:52:05.817Z · LW(p) · GW(p)
sorry, I should have stated explicitly that I'm NOT assuming that "donating to a church = donating to a good cause".
What I am assuming is that the christians think that "donating to a church = donating to a good cause"
↑ comment by TheOtherDave · 2010-12-06T19:20:25.454Z · LW(p) · GW(p)
(blink)
So, I think you just said that the average Christian does X, but doesn't do X, and therefore I should do X. I can't quite figure out if there's a typo in there somewhere, or whether I'm just misunderstanding radically.
In any case, I agree with you that contributing resources to causes I support and training myself to understand them better and support them more effectively, and socializing with other supporters are all good things to spend some resources on.
Incidentally, most of the Christians I know who do this in their capacities as Christians are not actually devoting those efforts to converting the world to Christianity, but rather to things like aiding the needy. Then again, the Christians I know well enough to know how they practice their religion are a pretty self-selecting bunch, and generalizing from them probably isn't safe.
Replies from: PeerInfinity↑ comment by PeerInfinity · 2010-12-06T19:37:12.895Z · LW(p) · GW(p)
So, I think you just said that the average Christian does X, but doesn't do X, and therefore I should do X. I can't quite figure out if there's a typo in there somewhere, or whether I'm just misunderstanding radically.
You're right, thanks, the previous wording was confusing. I removed the paragraph that said "I suspect that the average christian actually gives significantly less than 10% of their income to the church, and doesn't go to church every sunday, but I haven't actually looked up the statistics yet." The point of that paragraph was that I'm admitting that I'm probably overestimating the contributions of the average christian.
comment by ewbrownv · 2010-12-02T19:07:31.387Z · LW(p) · GW(p)
Can anyone offer a single example of a major, longstanding problem that has been solved by this kind of approach?
Replies from: Perplexed, ewbrownv, Vaniver↑ comment by Perplexed · 2010-12-02T19:25:40.400Z · LW(p) · GW(p)
Solved by what kind of approach? Organized charity?
How about the polio vaccine? March of Dimes.
Admittedly, solved problems are rare. Quite a few charities at least alleviate problems. Even though it is faith-based, I think the Salvation Army does some real good. Red Cross. Big Brothers. DWB. League of Women Voters.
↑ comment by ewbrownv · 2010-12-02T21:09:16.870Z · LW(p) · GW(p)
The original article holds up charitable organizations as a means to make the world a better place. But all the examples I can think of where the human condition improved significantly were due to new technology (birth control, antibiotics), sweeping cultural changes (religious tolerance), or increasing wealth (sanitation, literacy). Charities, on the other hand, typically focus on handouts and lobbying, which may benefit individual aid recipients and rent seekers but rarely seem to do anything about the underlying problem.
So my question is, what is the evidence that such organizations can actually deal with large issues like hunger, disease, poverty, oppression, genocide, and so on? And if there is no track record of success, why do we continue to pin our hopes on them?
Replies from: BlippoBold↑ comment by BlippoBold · 2012-02-15T16:37:08.121Z · LW(p) · GW(p)
This is the question I'd love to see answered.
I appreciate the original article's analysis if you've already decided that giving resources (money, work, whatever) to non-profits is a desirable and rational use of those resources. Maybe it is, but I'd love to see someone really tackle that issue.
I once heard Rush Limbaugh say something like "In 200 years, capitalism has saved more lives than thousands of years of charity." I generally dislike the man, but I found it hard to disagree with him there (actually I assume he was probably paraphrasing someone else).
It seems to me that the wealth created through market economies has massively improved living standards unlike anything else. The technological, medical, social, and education advances that contribute to improved health and welfare are greatly accelerated by competition and increased wealth.
Maybe there are areas where charity is more effective than markets, but I'd like to see someone make the argument. Even well-run and well-intentioned charities run the risk of creating dependency and inhibiting local markets.
Could it be that the best use of your time and money is to create as much wealth as possible and keep that wealth circulating through the market (investing and spending)? (And perhaps contributing to lobbying and advocacy efforts that work to spread open markets.)
If anyone knows of a good discussion of this question, please let me know.
↑ comment by Vaniver · 2010-12-02T19:19:53.111Z · LW(p) · GW(p)
So, the Anti-Corn Law League destroyed grain tariffs in Britain and permanently altered the public perception of tariffs in that country compared to the rest of the world (more of the British public correctly see tariffs as a way to screw over customers than as a way to protect domestic jobs).
Abolition groups also seem like they should be mentioned, here.
Those are just the two off the top of my head, but I'm not sure they fit "this kind" of approach. The first one suggests a "do the math" approach to helping people, but also a strong deontologist "this isn't fair!", and the second one seems mostly along the same lines. I don't think SIAI and such are that comparable to Garrison, but perhaps they are.
I guess my questions in response are: can you be more specific about "this kind of approach", and what are your standards for a "major" problem?
comment by PeerInfinity · 2010-12-02T18:26:33.047Z · LW(p) · GW(p)
Another obvious suggestion:
- If there isn't already a wiki for the cause that you are interested in helping, then consider starting one.
Most people reading this are probably well aware of the awesome power of wikis. LW's own wiki is awesome, and LW would be a whole lot less awesome without its wiki.
What we need is a wiki that lists all the people and groups who are working towards saving the world, what projects they are working on, and what resources they need in order to complete these projects. And each user of the wiki could create a page for themselves, listing what specific causes they're interested in, what skills and resources they have that they're willing to contribute to the cause, and what things they could use someone else's help with. The wiki could also have useful advice like this LW post, on how to be more effective at world-saving.
I already made a few attempts to set up something like this, but these involved ridiculously complicated systems that probably wouldn't have worked as well as I hoped. It would probably be a much better idea to start with just a simple wiki, where users can contribute the most important information. We can add more advanced features later, if it looks like the features will be worth the added complexity.
Maybe the wiki will end up saying "Just donate to SIAI. Unless you're qualified to work for SIAI, there really isn't much else you can do to help save the world." But even in this case, I think it would be really helpful to at least have an explanation why there is no point trying to help in any other way. And even then, we could still use the wiki for projects to generate cash.
I find it really disturbing that the cause of saving the world doesn't have its own wiki. And none of the individual groups working towards saving the world have their own wiki. SIAI doesn't have a wiki. Lifeboat doesn't have a wiki. FHI doesn't have a wiki. H+ doesn't have a wiki. GiveWell doesn't have a wiki. Seriously, how did the cause of saving the world manage to violate The Wiki Rule?
Several years ago, Eliezer started the SL4 Wiki, and that was awesome, but then somehow after a few months everyone lost interest in it, and it died. Then I tried to revive it by importing all of its content to MediaWiki and renaming it the transhumanist wiki. But no one other than me made any significant effort to edit or add content to the wiki. And even I haven't done much with the wiki in the past few months.
A few weeks ago, H+ contacted me, expressing interest in making the transhumanist wiki an official part of the humanityplus website, but I haven't heard any more about that since then.
Oh, and there's also the Accelerating Future People Database. This is a database of people who are working towards saving the world. This is a critical component of the system that I was describing, but we also need a list of projects, and a list of ways for volunteers to help.
Does anyone here think that a wiki like this would be a good idea? Does anyone here have any interest in helping to create such a wiki? If I created a wiki like this on my own, would anyone have a use for it? Is there some other reason I'm not aware of, why creating a wiki like this would be a very bad idea?
Replies from: gwern, XiXiDu, Roko↑ comment by gwern · 2010-12-02T18:51:07.200Z · LW(p) · GW(p)
Does anyone here think that a wiki like this would be a good idea? Does anyone here have any interest in helping to create such a wiki? If I created a wiki like this on my own, would anyone have a use for it? Is there some other reason I'm not aware of, why creating a wiki like this would be a very bad idea?
Some people, when faced with a problem, say, I know - I'll start a wiki! Now they have 2 problems.
I said something similar yesterday, and I have a short essay, Wikipedia And Other Wikis about why forking off WP is a bad idea (which is a related bad idea).
tl;dr: network effects are a bitch
↑ comment by XiXiDu · 2010-12-02T19:23:43.203Z · LW(p) · GW(p)
Maybe the wiki will end up saying "Just donate to SIAI. Unless you're qualified to work for SIAI, there really isn't much else you can do to help save the world."
If this is the answer then the SIAI should simply conclude this in a paper. Or EY should write a new sequence that concludes that supporting the SIAI is the rational choice if you want to save the world. I believe a wiki would just add to the confusion. A wiki is good as a work of reference or a collaborative focal point for people working on a certain project. But when it comes to answering a certain question, a wiki might lead people astray.
I'm still puzzled by the fact that saving the world is not much dealt with on Less Wrong. What would be a better way to exemplify rational choice than working out what to do if you want to save the world? On Less Wrong, rationality is an abstract concept that is seldom used to tackle real-life decisions.
↑ comment by Roko · 2010-12-02T20:20:01.116Z · LW(p) · GW(p)
LW's own wiki is awesome
Can we quantify that? What has it achieved?
Replies from: PeerInfinity↑ comment by PeerInfinity · 2010-12-02T20:24:03.685Z · LW(p) · GW(p)
The LW wiki has made it approximately one order of magnitude easier to find the best content from LW.
You could try to quantify that by:
- the time it takes to find a specific thing you're looking for
- the probability of giving up before finding it
- the probability that you wouldn't even have bothered looking if the information wasn't organized in a wiki.
- maybe more
comment by VNKKET · 2010-12-01T23:40:36.835Z · LW(p) · GW(p)
Thank you for this post! One thing:
- Look into matching donations - If you’re gonna give money to charity anyway, you should see if you can get your employer to match your gift. Thousands of employers will match donations to qualified non-profits. When you get free money -- you should take it.
If GiveWell's cost-benefit calculations are remotely right, you should downplay matching donations even more than just making this item second-last. I fear that matching donations are so easy to think about that they will distract people from picking good charities.
Replies from: Airedale, Louie, Roko↑ comment by Airedale · 2010-12-02T23:24:53.164Z · LW(p) · GW(p)
I think you and Louie may be talking about two different kinds of matching donations. The GiveWell post is about an employer matching donations only to a specific charity. Some employers will hold this sort of pledge drive, particularly in the wake of an especially harmful natural disaster.
However, many employers will match donations, up to a certain level, to any qualified (e.g., 501(c)(3)) charity; I believe one can find such employers by searching the database linked by Louie.
Replies from: VNKKET↑ comment by Louie · 2010-12-03T00:40:04.424Z · LW(p) · GW(p)
I think if people are already here, it's more than safe to mention matching donation programs. It could actually really help motivate people. I know it helped me a lot in the past.
I once donated $3k (the limit of my previous employer's matching program) to local service charities in Austin, TX. The only reason I started investigating charitable giving in the first place was because I found the info about the matching program buried in the packet of info I got from HR when I was hired (which I got around to looking through 6 months after starting). My goal at the time was barely altruistic. It was some mix of "Cool, I can get $3,000 in extra money! I just need to find something else besides myself that I care about." and "Wow, I work for a government defense contractor. I know what they will spend that $3,000 on if I don't find something better!".
I don't think Less Wrong or Give Well existed at the time. My search for a good cause probably ended prematurely, but it still marked the beginning of a search for something outside of myself that I cared about.
Also, even though searching through information about giving to charity and strongly considering giving did almost nothing for me, actually giving that $6,000 changed everything about how I saw myself.
Replies from: VNKKET↑ comment by VNKKET · 2010-12-12T07:08:08.297Z · LW(p) · GW(p)
Oh, oops, we were talking about different things. I think you're right to mention matching donations (especially after hearing your anecdote), but I wonder if there's room for a warning like, "It's more important to pick the right charity than to get someone to match your donation. (Do both if you can, of course.)"
comment by [deleted] · 2010-12-02T00:22:03.236Z · LW(p) · GW(p)
Don’t confuse what “feels good” with what actually helps the most
This. I can't overstate how often I find myself going with what feels good instead of doing what actually helps the most. It's a horribly addictive habit.
comment by Roko · 2010-12-02T20:22:07.506Z · LW(p) · GW(p)
Does this count as an entry to the $100 efficient charity challenge?
comment by utilitymonster · 2010-12-02T05:31:20.929Z · LW(p) · GW(p)
Enjoyed most of this, some worries about how far you're getting with point 8 (on giving now rather than later).
Give now (rather than later) - I’ve seen fascinating arguments that it might be possible to do more good by investing your money in the stock market for a long period of time and then giving all the proceeds to charity later. It’s an interesting strategy but it has a number of limitations. To name just two: 1) Not contributing to charity each year prevents you from taking advantage of the best tax planning strategy available to you. That tax-break is free money. You should take free money.
If you are worried about this you could start a donor advised fund for yourself.
2) Non-profit organizations can have endowments and those endowments can invest in securities just like individuals. So if long term-investment in the stock market were really a superior strategy, the charity you’re intending to give your money to could do the exact same thing. They could tuck all your annual contributions away in a big fat, tax-free fund to earn market returns until they were ready to unleash a massive bundle of money just like you would have. If they aren’t doing this already, it’s probably because the problem they’re trying to solve is compounding faster than the stock market compounds interest.
These assumptions about the motivations of people running non-profits seem too rosy. Most organizations seem to have a heavy bias toward the near. Maybe the best don't, but I'd like to see more evidence.
Diseases spread, poverty is passed down, existential risk increases.
There is a very relevant point here, but, unfortunately, we aren't given enough evidence to decide whether this outweighs the reasons to wait.
Do we want x-risk explicitly mentioned without explanation if this is for the contest?
Replies from: Louie↑ comment by Louie · 2010-12-02T07:16:10.149Z · LW(p) · GW(p)
I wrote much more about this point but decided to cut it down substantially, since it was already disproportionately large compared to its value to my overall rhetorical goals.
But here are some other things I wrote that didn't make it into the final draft of #8:
"I do agree that this helps you donate more dollars that you can credibly say came from you. But does it reliably increase total impact? It seems unlikely. For instance, imagine donating to a highly rated GiveWell charity that is vaccinating people against a communicable disease in Africa. The vaccines will be cheaper in the future, and if you invest well, your money should be worth more in the future too. More money, cheaper vaccines -- impact properly increased, right? But preventing the spread of that disease earlier with less money could easily have prevented more total occurrences of that disease. Most problems like disease, lack of education, poverty, environmental damage, or existential risk compound quickly while you sit on the sidelines. Does the particular disease or other problem you want to combat really spread slowly enough that you can overtake it with the power of compounding interest? You should do the calculation yourself, but most of the problems I’m aware of become harder to solve faster than that. And this is definitely a bad strategy if the charity you’re supporting is actually working on long-term solutions to the problems they’re combating and not just producing a series of (noble but ultimately endless) band-aid outcomes. Solving the problem is entirely different from managing outcomes indefinitely, and that can drastically shift the balance in favor of giving less sooner rather than more later."
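The "do the calculation yourself" suggestion above can be sketched as a toy model. All the numbers here are illustrative assumptions (cost per vaccination, market return, problem growth rate, horizon), not empirical claims; the point is only to show how the comparison works, not what the right answer is for any real charity.

```python
# Toy comparison: "give now" vs. "invest, then give later".
# Assumed (hypothetical) parameters:
#   $1,000 today, $10 per case prevented, 7%/yr market return,
#   vaccine costs falling 2%/yr, the untreated problem growing 10%/yr,
#   over a 20-year horizon.

def give_now(dollars, cost_per_case, problem_growth, years):
    # Cases prevented today also avert the downstream cases those
    # infections would have seeded over the remaining horizon,
    # so early prevention compounds at the problem's growth rate.
    prevented_now = dollars / cost_per_case
    return prevented_now * (1 + problem_growth) ** years

def give_later(dollars, cost_per_case, market_return, cost_decline, years):
    # Money compounds at the market rate while the cost per case falls.
    future_dollars = dollars * (1 + market_return) ** years
    future_cost = cost_per_case * (1 - cost_decline) ** years
    return future_dollars / future_cost

now = give_now(1000, 10, 0.10, 20)
later = give_later(1000, 10, 0.07, 0.02, 20)
print(f"give now:   ~{now:,.0f} cases averted")
print(f"give later: ~{later:,.0f} cases averted")
```

Under these made-up numbers giving now comes out ahead, because the problem's 10% growth outruns the combined effect of 7% returns and cheaper future interventions; swap in a slower-growing problem and the conclusion flips. The model's whole value is in forcing you to state those growth rates explicitly.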
I also wrote a lot of poorly phrased notes (that I wasn't entirely happy with) to the effect that if you still thought this was a great idea... so much so that you actually planned to do it, you should definitely not execute it silently without communicating your plan to the non-profit you're expecting to support. My guess is that most heads of non-profits would "shriek with horror" (at least on the inside) at the thought of you really doing something like this and would patiently counsel you, in the calmest tones they could manage, that you should not do it and should instead give sooner. I don't personally run any non-profits. This is all only partially informed speculation on my part. I could have gotten something wrong here, but all the ways of analyzing it seem to keep pointing in the same direction, even if none of the arguments are rigorous or completely specified with numbers from real situations.
I was also going to point out that lots of the largest and even several smaller scale non-profits have endowments like the kind I mention. I agree with you that too many non-profits are geared towards the near-term without any sense of perspective for what they could accomplish globally if they weren't fixated that way. But I guess I'm assuming that you threw out any non-profits that short-sighted back in step 2.
comment by someperson · 2012-08-02T23:02:23.154Z · LW(p) · GW(p)
I'm uncomfortable with patching my moral intuition with math. Wouldn't that imply that we should be willing to use violence to shut down promising AGI labs who don't take friendliness concerns into consideration?
If your moral system leads you to do things that make your moral intuition queasy, you should question your moral system.
Biting bullets is an overly simple solution to moral dilemmas. You find yourself making monsters without much effort.
Replies from: Mass_Driver↑ comment by Mass_Driver · 2012-08-02T23:47:45.784Z · LW(p) · GW(p)
If your moral system leads you to do things that make your moral intuition queasy, you should question your moral system.
Mmm, depends whether you are using "question" as a euphemism for "reject." Certainly, you should re-examine your explicit reasoning about ethics if the conclusions you reach conflict with many of your moral intuitions. However, you should also re-examine your moral intuitions when they fail to agree with your explicit reasoning about ethics. Otherwise, there would be very little point in conducting ethical analysis -- if your analysis can't ever validly prompt you to discard or ignore a moral intuition, then you may as well stop searching your conscience and just do whatever 'feels right' at any given moment. Sometimes your intuitions give way, and sometimes your formal reasoning gives way -- that's how you reach reflective equilibrium.
Biting bullets is an overly simple solution to moral dilemmas. You find yourself making monsters without much effort.
Ah, but is "don't make monsters" your most important moral objective? Suppose you had to become a monster in the eyes of your friends in order to save a village full of innocent children. Is it obvious that it would be wrong to become a monster in this sense?
comment by TheRev · 2011-01-10T13:06:42.588Z · LW(p) · GW(p)
Though I won't be curing AIDS, designing cheaper solar panels, or searching for the Higgs boson, seeing as I haven't chosen a career in the sciences, I am preparing for law school, which should put me in a career that fairly well optimizes my income while giving me a chance to use some of the rational argument skills on this site. Also, I live in Kansas, which, if I prove good enough at law, could provide good opportunities to be on the front line against religious ignorance and bigotry here in the States. It would be a dream of mine to be in court against Fred Phelps and others like him, or to argue a case dealing with creationism being taught in schools. If not, there are sure to be some very interesting cases dealing with bioethics, cryonics, AI, or genetic engineering fought in American courts in the coming decades. Or, without going too much into politics, since this isn't really the place for that, I could just do some civil liberties work, since I think most of us can agree that rationality and police states don't tend to work well together.
Replies from: gwern↑ comment by gwern · 2011-01-10T16:40:29.231Z · LW(p) · GW(p)
I am preparing for law school which should put me in a career that fairly well optimizes my income, while giving me a chance to use some of the rational argument skills on this site.
How sure are you of said optimization?
- http://www.theatlantic.com/business/archive/2011/01/you-know-the-legal-job-market-must-be-bad/68852/
- http://www.economist.com/node/17461573?story_id=17461573
- http://lawyerist.com/law-school-admissions-bubble/
- http://www.nytimes.com/2011/01/09/business/09law.html
- http://www.nytimes.com/2010/08/05/business/global/05legal.html
- http://www.calicocat.com/2004/08/law-school-big-lie.html
- http://abovethelaw.com/2010/01/ivy-league-law-school-graduate-begs-for-work-on-craigslist/
EDIT: an 11% drop in applications in 2011: http://online.wsj.com/article/SB10001424052748704396504576204692878631986.html
Replies from: TheRev↑ comment by TheRev · 2011-01-11T01:55:11.662Z · LW(p) · GW(p)
Honestly, I'm not that sure. I knew that law graduates have had trouble finding jobs, but with the state of the economy the way it is, graduates across the board are struggling, not just those from law school. I'll be graduating this spring with degrees in political science and history. So I can try to find a job now, when the market for college graduates in general is similarly bad, and likely end up working a low-paying hourly office job like customer support, or I can do some graduate work, like law school or a master's or PhD program in one of my fields. Though there is a glut of graduates and a paucity of jobs for master's and PhD holders in my fields as well. Eventually the economic situation will sort itself out and jobs will return, and historically, law has been fairly lucrative. Hopefully this will happen in the next three years, but if I have to wait a few more years after graduation to start making big money, that's acceptable if it increases the long-term odds that I'll have a well-paying job.