So what are our Mid-Term goals?
post by Fergus_Mackinnon · 2011-06-29T20:44:56.471Z · LW · GW · Legacy · 20 comments
I know that Rationalism is fairly spread out as an ideology, but, thankfully, very few rationalists seem to subscribe to the popular belief that allowing someone you can't see to suffer by your inaction is somehow different from walking on while some kid you don't know bleeds out on the pavement. So, if most of us are consequentialist altruists, precisely what should we be doing? SIAI are working on the Silver-Bullet project, for example, but what about the rest of us? Giles is attempting to form an altruist community, and I'm sure there are a number of other scattered projects that members are working on independently. (I searched for any previous discussions, but didn't find any. If there are some I missed, please send me the links.)
However, a lot of the community's mid-term plans seem to be riding on the success of the SIAI project, and although I am not qualified to judge its probability... not having other plans in the event that it fails, when most of us don't have any skills that would contribute to the Friendly AI project anyway, seems overly hopeful. There are of course several short-term projects, the Rationalist Boot-Camps for example, but they don't currently seem to be one of the main focuses.
I suppose what I'm trying to ask, without stepping on too many SIAI researchers' toes, is: what should non-researchers who want to help be doing in case it doesn't work, and why?
20 comments
comment by Nick_Tarleton · 2011-06-29T21:51:55.036Z · LW(p) · GW(p)
very few rationalists seem to subscribe to the popular belief that allowing someone you can't see to suffer by your inaction is somehow different from walking on while some kid you don't know bleeds out on the pavement.
I believe that the latter is evidence, and the former isn't, of being unusually callous and potentially worthy of suspicion. (Nitpicking, I know, but if you're going to be skeptical of naive ethics and try to be a consequentialist, it's important to notice valid implicit judgments like this.)
Replies from: Giles
↑ comment by Giles · 2011-07-01T00:26:52.787Z · LW(p) · GW(p)
I'd put it slightly differently - if you want to be an effective consequentialist then you don't want to act like a sociopath. Whether your ultimate motives are completely selfish or completely altruistic, you can benefit from reciprocal altruism and the support of friends and community. As an extreme example, if you let yourself become completely isolated from your community you may suffer mental health issues that would compromise your ability to act rationally.
So unless you're really in a hurry, walking past the kid on the pavement probably isn't worth the status hit.
comment by Alexei · 2011-06-30T00:28:18.180Z · LW(p) · GW(p)
SIAI's second priority is to raise the sanity waterline. A lot of people put their efforts towards that. For example: creating LW meetups in new cities, writing articles, and simply being more rational people.
Replies from: Fergus_Mackinnon
↑ comment by Fergus_Mackinnon · 2011-06-30T15:21:02.999Z · LW(p) · GW(p)
Yes, that's something I'm not sure about. There are plenty of good reasons to pursue the goal, and, as far as I've seen, plenty of people willing to support it on this site... but there isn't any obvious coordination. If enough of us could gain positions at universities, spread out to maximize coverage, or focused to 'take over' an institution, then we would likely see far more success. (Unfortunately, none of our current ad-hoc 'leadership' has that much appeal outside the group, so playing politics doesn't look like it would work out well for us.) I'm probably going to try to work towards an Economics doctorate if I can get good enough grades once I'm at university, for example. Once established, I'd be able to start writing to the media, try to get a column in an influential tabloid, write a popular-science explanation of economics, and so on, with some authority, and try to recruit/support rationalists.
Replies from: Giles, Alexei
↑ comment by Giles · 2011-07-01T00:36:35.585Z · LW(p) · GW(p)
Does the Less Wrong movement suffer from not having enough "people people"? I would guess that getting the ability to influence people (as opposed to basic social skills) is something that takes a lot of work.
Replies from: Fergus_Mackinnon
↑ comment by Fergus_Mackinnon · 2011-07-01T08:29:26.374Z · LW(p) · GW(p)
I think so. The people most given to the introspection that leads to an interest in this philosophy seem to be those somehow marginalized by society, and they typically aren't very socially adept. Which is a pity, given how supportive a background in rationalist literature could be for a psychology student.
↑ comment by Alexei · 2011-06-30T17:33:04.077Z · LW(p) · GW(p)
You are right, for the most part there isn't an organized movement. But everyone is doing something. And I'm sure you can find someone in the LW community who has about the same ideas/approach as you, so you can team up.
Replies from: Fergus_Mackinnon
↑ comment by Fergus_Mackinnon · 2011-06-30T20:09:04.070Z · LW(p) · GW(p)
I'm trying to do so, but the chances of them being nearby and of university age seem pretty low. Has there ever been a LessWrong user census? Ages, locations, etc.
Replies from: Alexei
↑ comment by Alexei · 2011-06-30T22:19:00.543Z · LW(p) · GW(p)
You are right, finding a person nearby is very unlikely. You (or they) will have to be willing to move.
This one is the only one I know about. It might be a good idea to redo it, since by now the numbers are completely different.
Replies from: Fergus_Mackinnon
↑ comment by Fergus_Mackinnon · 2011-06-30T23:15:30.007Z · LW(p) · GW(p)
Maybe a less statistical one would be better; while interesting, it doesn't easily meet my needs. I'll post one in the morning unless someone else wants to do it, although if privacy is an issue for some people we'll likely need some off-site element.
comment by Normal_Anomaly · 2011-06-29T21:16:38.898Z · LW(p) · GW(p)
One useful suggestion is to earn a lot of money and donate it to a good cause, VillageReach and GiveWell being two of the best. GiveWell is a charity evaluation site that finds especially cost-effective charities, and gives all donations above what it needs to the best ones. VillageReach is GiveWell's most highly recommended group; it provides basic healthcare to people in the third world. And of course, explaining the idea of efficient charity to other people so they do this too is a good force multiplier.
comment by mstevens · 2011-06-30T12:27:54.413Z · LW(p) · GW(p)
Your link "Giles is attempting to form a altruist community" seems to be broken.
Replies from: Fergus_Mackinnon
↑ comment by Fergus_Mackinnon · 2011-06-30T15:04:25.219Z · LW(p) · GW(p)
My apologies. Here's the main thread, but there's a sequence on his plan I can't find listed anywhere but his profile.
http://lesswrong.com/lw/5kc/altruist_support_the_plan/
Replies from: jsalvatier
↑ comment by jsalvatier · 2011-06-30T15:12:45.280Z · LW(p) · GW(p)
You can still edit your post; there's a button near the bottom of the post.
Replies from: Fergus_Mackinnon
↑ comment by Fergus_Mackinnon · 2011-06-30T15:15:42.493Z · LW(p) · GW(p)
Done. I'm still learning how the site's software works, so thanks for pointing it out.
comment by timtyler · 2011-06-29T21:34:34.876Z · LW(p) · GW(p)
I know that Rationalism is fairly spread out as an ideology, but, thankfully, very few rationalists seem to subscribe to the popular belief that allowing someone you can't see to suffer by your inaction is somehow different from walking on while some kid you don't know bleeds out on the pavement.
That's mostly the utilitarians, as I understand it. I know that not very many philosophers are utilitarians - less than 25% - but it would be interesting to know how many self-declared rationalists are.
comment by Giles · 2011-07-01T00:53:33.090Z · LW(p) · GW(p)
However, a lot of the community's mid-term plans seem to be riding on the success of the SIAI project, and although I am not qualified to judge its probability...
This seems like something where there's a high value-of-information. As a Bayesian, you can come up with a (subjective) probability of more or less anything, even if it's just your original arbitrary prior probability. The feeling of "not being qualified to judge" corresponds (I think) to expecting that further information (which is currently available to experts) would be highly valuable when it comes to making decisions.
So do you feel that further information on this topic wouldn't be valuable (e.g. no useful skills to contribute), or do you feel that the cost is too high (e.g. you've read everything that's immediately available and it still doesn't make any sense)?
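A minimal sketch of the value-of-information idea, with an entirely made-up two-action, two-hypothesis decision problem (none of the names or numbers refer to anything real): the value of information is the gap between the best you can do now and the best you could do after learning the answer.

```python
def expected_value(action, p_h, payoff):
    """Expected payoff of an action given P(hypothesis is true) = p_h."""
    return p_h * payoff[action][True] + (1 - p_h) * payoff[action][False]

# Hypothetical payoffs: what each action is worth if the hypothesis is true / false.
payoff = {
    "act_on_it": {True: 10.0, False: -2.0},
    "ignore_it": {True:  0.0, False:  0.0},
}

p_h = 0.3  # your current (possibly rough) subjective probability

# Best you can do now, deciding under uncertainty:
value_now = max(expected_value(a, p_h, payoff) for a in payoff)

# Expected value if you could learn the truth before choosing
# (expected value of perfect information):
value_informed = (p_h * max(payoff[a][True] for a in payoff)
                  + (1 - p_h) * max(payoff[a][False] for a in payoff))

print(f"now: {value_now:.2f}, informed: {value_informed:.2f}, "
      f"value of information: {value_informed - value_now:.2f}")
```

If that gap is large, digging into the question is worth the effort; if it's near zero, further information wouldn't change your decision anyway.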
Replies from: asr
↑ comment by asr · 2011-07-01T03:53:31.179Z · LW(p) · GW(p)
As a Bayesian, you can come up with a (subjective) probability of more or less anything, even if it's just your original arbitrary prior probability.
This is a misleading claim. Finding coherent probabilities on a set of logically linked claims is NP-complete, and therefore believed to be intractable. Just because you aspire to have probabilistic beliefs doesn't mean you do, or even that you actually can, in a computationally realistic sense.
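A minimal sketch of why, with made-up claims and credences: checking coherence amounts to asking whether any probability distribution over possible worlds reproduces your stated credences, and the number of worlds grows exponentially with the number of atomic claims.

```python
from itertools import product
from scipy.optimize import linprog

# Hypothetical atomic claims plus a logically linked compound claim.
atoms = ["A", "B"]
claims = {
    "A":       lambda w: w["A"],
    "B":       lambda w: w["B"],
    "A and B": lambda w: w["A"] and w["B"],
}
credences = {"A": 0.7, "B": 0.7, "A and B": 0.2}  # stated (possibly incoherent) beliefs

# One variable per possible world: the probability assigned to that world.
worlds = [dict(zip(atoms, bits)) for bits in product([False, True], repeat=len(atoms))]

# Coherence = some non-negative assignment sums to 1 and reproduces every credence.
A_eq = [[1.0 if claims[name](w) else 0.0 for w in worlds] for name in credences]
b_eq = [credences[name] for name in credences]
A_eq.append([1.0] * len(worlds))  # total probability is 1
b_eq.append(1.0)

result = linprog(c=[0.0] * len(worlds), A_eq=A_eq, b_eq=b_eq,
                 bounds=[(0.0, 1.0)] * len(worlds))
print("coherent" if result.success else "incoherent")  # prints "incoherent" here
```

With two atoms this is trivial, but the number of possible worlds doubles with every atomic claim, which is where the intractability of the general problem (essentially probabilistic satisfiability) comes from.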
Replies from: Giles
↑ comment by Giles · 2011-07-01T15:00:23.311Z · LW(p) · GW(p)
This is true. I'm not sure that the probabilities you assign need to be consistent in order to be useful though - you can update them at the point when you discover an inconsistency.
Also, there's surely something analogous to value-of-information even when you don't have probabilistic beliefs as such?