People who want to save the world

post by Giles · 2011-05-15T00:44:18.347Z · LW · GW · Legacy · 247 comments

atucker wants to save the world.
ciphergoth wants to save the world.
Dorikka wants to save the world.
Eliezer_Yudkowsky wants to save the world.
I want to save the world.
Kaj_Sotala wants to save the world.
lincolnquirk wants to save the world.
Louie wants to save the world.
paulfchristiano wants to save the world.
Psy-Kosh wants to save the world.

Clearly the list I've given is incomplete. I imagine most members of the Singularity Institute belong here; otherwise their motives are pretty baffling. But equally clearly, the list will not include everyone.

What's my point? My point is that these people should be cooperating. But we can't cooperate unless we know who we are. If you feel your name belongs on this list then add a top-level comment to this thread, and feel free to add any information about what this means to you personally or what plans you have. Or it's enough just to say, "I want to save the world".

This time, no-one's signing up for anything. I'm just doing this to let you know that you're not alone. But maybe some of us can find somewhere to talk that's a little quieter.

247 comments

Comments sorted by top scores.

comment by Alicorn · 2011-05-14T05:59:26.980Z · LW(p) · GW(p)

I want the world to be saved.

Replies from: Giles, wallowinmaya, Dorikka, atucker
comment by Giles · 2011-05-14T23:48:34.270Z · LW(p) · GW(p)

I agree Alicorn's phrasing is better. My own position would literally be: "I want to act so as to maximize the degree to which the world is saved". In practice this is more likely to be "helping other people to save the world", but that's a strategy not a goal.

I'm indifferent to personal glory etc.

I want to maximize something rather like a utility function, so I want my degree of ambition to naturally scale with the opportunities available. If I only have the opportunity to do a very little good, I want to do a very little good. If I have the opportunity to do a lot (even very indirectly), I want to do a lot.

From my point of view, I'm always at the site of the action (or at least, at the site of my own decisions, which is all I can directly control).

Finally, I don't think I'm a consequentialist. What I'm describing is my volition, not my ethical system. I haven't quite decided my metaethics - I need to do some more thinking on that, and maybe wait for more of lukeprog's sequence.

comment by David Althaus (wallowinmaya) · 2011-05-14T14:24:52.820Z · LW(p) · GW(p)

Wow, maybe I'm stupid, but why did this comment get so much karma? I'm really just curious...

Replies from: Perplexed
comment by Perplexed · 2011-05-14T14:33:42.558Z · LW(p) · GW(p)

Alicorn, a deontologist, wishes that a certain consequence (the salvation of the world) obtain, whether she is involved in producing that consequence or not.

Giles, presumably a consequentialist, phrases his own wish so as to egoistically place himself at the site of the action.

The juxtaposition carries a certain irony.

Replies from: Normal_Anomaly, rhollerith_dot_com, Alicorn, adamisom, Peterdjones
comment by Normal_Anomaly · 2011-05-15T01:38:59.593Z · LW(p) · GW(p)

Before seeing this subthread, I interpreted it almost exactly the opposite way. I thought of "I want the world to be saved" as just that, but "I want to save the world" as meaning "I want the world to be saved, and I am willing to work toward this goal myself." Sort of along the lines of this exchange from Terry Pratchett's The Wee Free Men:

‘Ah. Something bad is happening.’

Tiffany looked worried.

‘Can I stop it?’

‘And now I’m slightly impressed,’ said Miss Tick. ‘You said, “Can I stop it?”, not “Can anyone stop it?” or “Can we stop it?” That’s good. You accept responsibility. That’s a good start.’

When I say that I want to save the world, that's what I try to mean.

Replies from: Perplexed
comment by Perplexed · 2011-05-15T22:46:13.957Z · LW(p) · GW(p)

'Ah. Something bad is happening.'

Tiffany looked worried.

‘Can I stop it?’

‘And now I’m slightly impressed,’ said Miss Tick. ‘You said, “Can I stop it?”, not “Can anyone stop it?” or “Can we stop it?” That’s good. You accept responsibility. That’s a good start.’

I personally think it is a horrible start. That is the kind of start that leads to young men with boxcutters boarding airplanes, with the Crusades as one intermediate step in the causal chain. It is the kind of start that leads to brave little fellows in kilts bashing everyone around them with clubs just to demonstrate their manhood.

The kind of start I would prefer Tiffany to make would begin with a different question: "Oh, what is happening? And how do we know it is bad?".

I would prefer that the Ravenclaws figure out what it is that needs to be done, before the Gryffindors and Hufflepuffs start chanting "I want to do something!" and begin to look around for a Slytherin to suggest something for them to do.

Don't take this personally. I don't think that you or anyone else reading this blog is a potential terrorist. But I came of age in the sixties and knew quite a few people who were involved in radical politics. And quite a few more people in the military. The slogan back then was "By whatever means necessary." And it still amazes me how many horrible things got done just because people were unwilling to show lack of commitment to the cause. Because when you commit to action in the abstract, and believe that the end justifies the means, it becomes a contest to find the means that most conclusively demonstrates one's allegiance to the end.

So "I want the world to be saved, and I am willing to work toward this goal myself." is not something I like to hear. Nor is "I, as an individual, accept responsibility for the fate of the world." I would much rather hear, "Here is what is wrong and here is how we can fix it. Won't you help me convince enough other people of this?"

comment by RHollerith (rhollerith_dot_com) · 2011-05-14T15:31:58.173Z · LW(p) · GW(p)

"Phrases his own wish so as to egoistically place himself at the site of the action," is an apt summary of my problem with the phrase, "I want to save the world".

Replies from: Eliezer_Yudkowsky, wallowinmaya
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-05-14T23:54:57.223Z · LW(p) · GW(p)

How's that philosophy working out for you in terms of producing world-saving actions?

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2011-05-15T00:20:33.666Z · LW(p) · GW(p)

Heh. OK, good point. Would it help if I said that shortly after publishing the grandparent, I considered appending words to the effect of: "This is more of a gut reaction than a conclusion informed by my experiences with social reality, and I am very willing to change my mind"? In other words, if I really got to know more of the people who define themselves as "world savers," there's a good chance I'd change my mind.

But would it really hurt your plans to use the phrase "improve the world" rather than "save the world"? If the world needs saving (and I definitely believe it does need saving from irresponsible AGI researchers), then aren't people unlikely to overlook the fact that improving the world entails saving the world?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-05-15T05:42:11.574Z · LW(p) · GW(p)

Well, like I said. How's that careful avoidance of any phrasing that potentially smacks of egotism, working out for you in terms of producing world-saving actions?

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2011-05-15T06:50:24.983Z · LW(p) · GW(p)

Well, like I said. How's that careful avoidance of any phrasing that potentially smacks of egotism, working out for you in terms of producing world-saving actions?

You seem to believe that it is good to encourage a lot of actions. That is true if the effects of the actions are limited to increasing human rationality. Well, even that is not true, because if you increase the rationality of a destructive patent lawyer or politician (note that I do not want to get into a discussion of whether patent lawyers or politicians are harmful on average: I just needed to grab some likely suspects to keep my prose from getting too abstract), you simply enable him to be more effective at undeservedly harming people -- and I humbly suggest that for the purposes of this discussion, "harm" can be defined as "decrease the rationality of". But in general I will grant that what I just said is probably just a quibble and that increasing the sanity waterline is a good thing.

In general, though, I am sceptical that "producing world-saving actions" is what we should be aiming for. Maybe I am biased by the fact that I am a cautious person, but I think that if only we could make everyone a lot more cautious (about the right things, namely, about effects on the global situation, not effects on one's personal situation) we'd be in much better shape than we actually are.

In great-great-grandparent (GGGP) I talk of egotism, but now I am talking of caution. The reason that that is not changing the subject is that an egotist is significantly more likely to cause harm through lack of caution than a non-egotist is. Egotists tend to have higher self-esteem and status, and both arguments from evolutionary psychology and observation of people lead me to believe that higher self-esteem and status make people less cautious. (Nor is it the case that low-self-esteem types are necessarily ineffectual.)

Note also that in GGGP I wasn't asking you to eschew incautious people; I was merely asking you to avoid using language that actively repels cautious people because it might be nice to keep some around.

Also, I do not think teaching incautious people rationality skills is an effective response to human lack of caution. Some of them (particularly those with the best control over their motivational architecture) will be made more cautious that way, but some of them will simply be made more effective in pursuing their incautious ends.

I almost did not publish this because the probability that it will sway you in any significant way is so low. In fact it might be wise for you to consider this as simply a notification and a brief description of a longer conversation it might be worthwhile to have with you some day about my worries that SIAI is paying insufficient attention to a large class of potential contributors. SIAI understands altruists well because SIAI leaders are altruists. And they seem to understand egoists well. "Egoist": someone whose values and terminal goals are largely selfish -- Hopefully Anonymous and Roko 2008 being salient examples. (I say "Roko 2008" instead of "Roko" because he might become or have become much less aligned with the egoists.) But it's not all just altruists and egoists. I'm talking about motivations here: which natural human positive reinforcer (fancy word for desire) motivates the person's x-risk or philanthropic work.

Replies from: Eliezer_Yudkowsky, Rain
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-05-15T09:49:16.013Z · LW(p) · GW(p)

In general, though, I am sceptical that "producing world-saving actions" is what we should be aiming for. Maybe I am biased by the fact that I am a cautious person, but I think that if only we could make everyone a lot more cautious

Aaaand not to put too fine a point on it, but how much research is that caution getting done, exactly? Philanthropic donations produced by this philosophy? Anything?

comment by Rain · 2011-05-15T14:26:00.448Z · LW(p) · GW(p)

I think that if only we could make everyone a lot more cautious (about the right things, namely, about effects on the global situation, not effects on one's personal situation) we'd be in much better shape than we actually are.

I think the precautionary principle is useless. It's easy to see why when reading books such as We Wish to Inform You That Tomorrow We Will Be Killed with Our Families, which describes the 1994 Rwandan genocide. My motto is, "The only way out is through."

comment by David Althaus (wallowinmaya) · 2011-05-14T16:36:18.170Z · LW(p) · GW(p)

AHAA! I got it, at least I hope so. For me, "I want to save the world" and "I want the world to be saved" meant exactly the same thing; i.e., I didn't realize that the sentence "I, person P, want to save the world" meant that P had to be involved in this whole save-the-world business. Now "I want to save the world" evokes rather egoistic and self-aggrandizing characters in my mind. Strange world...

comment by Alicorn · 2011-05-14T19:16:55.643Z · LW(p) · GW(p)

It may be worth noting - again - that my non-moral reasons for action (prudential considerations) work more or less consequentialistically.

comment by adamisom · 2013-01-24T17:40:32.259Z · LW(p) · GW(p)

Hey guys, how about we debate who's being egoistic about saving the world and who isn't? That sounds like a really good way to use LessWrong and knowledge of world-saving.

Replies from: Capla
comment by Capla · 2014-11-19T23:26:39.353Z · LW(p) · GW(p)

We do seem to love accusing people of being altruistic only for signaling.

comment by Peterdjones · 2011-05-14T15:14:39.948Z · LW(p) · GW(p)

I don't think there is any force to either claim. Deontologists are generally concerned with rules not salvation, and consequentialists are generally not ego(t)ists.

Replies from: Perplexed, timtyler
comment by Perplexed · 2011-05-14T16:29:12.525Z · LW(p) · GW(p)

I'm not sure to what claims you are referring. If you mean, for example, the claim that Alicorn is a deontologist, then I should point out that she has publicly confessed. If you mean my implicit claim that Giles is male, then I confess to jumping to that conclusion without sufficient evidence.

Replies from: Peterdjones
comment by Peterdjones · 2011-05-14T16:39:48.920Z · LW(p) · GW(p)

I think I misread your comment.

comment by timtyler · 2011-05-14T16:39:41.553Z · LW(p) · GW(p)

consequentialists are generally not ego(t)ists.

Consequentialists are sometimes egoists.

Replies from: Perplexed
comment by Perplexed · 2011-05-14T18:04:32.479Z · LW(p) · GW(p)

And egoists are almost always consequentialists.

comment by Dorikka · 2011-05-14T06:00:15.870Z · LW(p) · GW(p)

Props for precision.

comment by atucker · 2011-05-14T13:14:59.795Z · LW(p) · GW(p)

Does this mean that you don't want to be involved in doing it? And if so, why?

Or is it just you want it to happen, which may or may not involve you?

Replies from: Alicorn
comment by Alicorn · 2011-05-14T19:18:11.506Z · LW(p) · GW(p)

I don't actively want to be involved in doing it. I would be quite happy to be among the masses of the saved by someone else's hand. I'm willing to help when ways to do that present themselves, since ignoring ways to make things I want to happen happen would be pretty dumb.

Replies from: None
comment by [deleted] · 2011-05-17T20:50:01.371Z · LW(p) · GW(p)

"I'm willing to help when ways to do that present themselves"

And if they don't, will you sit back and wait for them, or will you look for them?

(Not passing judgment, just trying to tease out more details of your position.)

Replies from: Alicorn
comment by Alicorn · 2011-05-17T21:10:34.554Z · LW(p) · GW(p)

I worked for Singinst for a while. I'm not really dedicating my life to diligently ferreting out more things to do, but I do put myself in the way of such information should it come to light (e.g. I hang out here, I'm on the Singinst mailing list).

comment by Vladimir_Nesov · 2011-05-14T11:41:35.015Z · LW(p) · GW(p)

Where can I exchange units of applause for units of world-saving?

Replies from: Rain, atucker, wedrifid, wedrifid
comment by Rain · 2011-05-14T23:02:06.189Z · LW(p) · GW(p)

For every non-duplicate comment replying to this one praising me for my right action, I will donate $10 to SIAI, up to a cap of $1010, with the count ending on 1 June 2011. Also accepting private messages.

Edit: The cap was met on 30 May. Donation of $1010 made.
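
A minimal sketch of the pledge arithmetic, assuming only the terms Rain states above ($10 per qualifying comment, a $1010 cap); the helper name is hypothetical, not from the thread:

```python
# Illustrative only: Rain's stated pledge terms, not code from the thread.
PER_COMMENT = 10  # dollars pledged per non-duplicate praise comment
CAP = 1010        # maximum total pledge in dollars

def pledge_total(num_comments: int) -> int:
    """Dollars owed for a given count of qualifying comments."""
    return min(num_comments * PER_COMMENT, CAP)

assert pledge_total(34) == 340    # matches the $340 running total Rain reports below
assert pledge_total(101) == 1010  # 101 comments are enough to hit the cap
assert pledge_total(150) == 1010  # further comments add nothing to the pledge
```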

Replies from: Normal_Anomaly, Rain, MixedNuts, Vladimir_Nesov, Psy-Kosh, Rain, Cyan, None, Kaj_Sotala, Kutta, Larks, cousin_it, Oscar_Cunningham, ArisKatsaris, MichaelHoward, Yvain, EchoingHorror, None, rhollerith_dot_com, LucasSloan, lsparrish, CarlShulman, Gedusa, Giles, FAWS, XiXiDu, Nick_Tarleton, Zetetic, Emile, ata, Automaton, Dorikka, Mercurial, Eliezer_Yudkowsky, arundelo, Normal_Anomaly, Giles, Barry_Cotter, Alicorn, wallowinmaya, XFrequentist, realitygrill, Peter_de_Blanc, Benquo, Armok_GoB, Armok_GoB, cousin_it, jaimeastorga2000, Bongo, purpleposeidon, Eneasz, atucker, Normal_Anomaly, Zack_M_Davis, gscshoyru, Cunya, Ori93, Tyrrell_McAllister, Miller, MixedNuts, Nisan, jasonmcdowell, loqi, EStokes, Nick_Roy, MinibearRex, Will_Newsome, Synzael, endoself, Karl, AlexMennen, curiousepic, TimFreeman, wedrifid, Nick_Tarleton
comment by Normal_Anomaly · 2011-05-15T01:45:33.648Z · LW(p) · GW(p)

This comment inspired me to make a donation to Village Reach. Your right action just got $350 worth of preventative medical care for kids, plus this praising comment.

Replies from: Armok_GoB
comment by Armok_GoB · 2011-05-15T13:56:43.574Z · LW(p) · GW(p)

... Why did you not donate it to the SIAI instead?!?

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2011-05-15T15:24:10.913Z · LW(p) · GW(p)

It's complicated. Just take my word for it that I wanted to but wasn't able to.

Replies from: Armok_GoB
comment by Armok_GoB · 2011-05-15T16:54:18.253Z · LW(p) · GW(p)

Oh. Ok no problem in that case.

comment by Rain · 2011-05-31T00:37:35.823Z · LW(p) · GW(p)

I just made the donation of $1010. Thanks to all those who commented!

comment by MixedNuts · 2011-05-15T01:31:37.674Z · LW(p) · GW(p)

I will extol thee, my fellow LessWronger, O SIAI donor, and I will bless thy name until June 1. Every day I will bless thee; and I will praise thy name until June 1. Great is Rain, and greatly to be praised; and eir greatness is searchable and indexed by Google.

comment by Vladimir_Nesov · 2011-05-15T00:52:14.736Z · LW(p) · GW(p)

Your action is particularly right in not requiring that every user limit the amount of praise to one comment.

comment by Psy-Kosh · 2011-05-21T17:06:58.017Z · LW(p) · GW(p)

I do a virtual Rain dance to honor this right action.

Further, I compound this by donating an additional $30 myself to SIAI right now.

comment by Rain · 2011-05-15T13:47:08.854Z · LW(p) · GW(p)

I'll pat myself on the back for coming up with this idea, which has promised $340 to SIAI as of me submitting this comment.

Replies from: steven0461, Rain
comment by steven0461 · 2011-05-15T19:45:53.028Z · LW(p) · GW(p)

This way of doing things is pretty cool, because now not only do you get to feel good for taking a right action, others get to feel good for getting you to do it, and you get to feel good for getting others to get you to do it.

comment by Rain · 2011-05-19T12:51:13.325Z · LW(p) · GW(p)

The total is now $740.

comment by Cyan · 2011-05-18T01:19:20.428Z · LW(p) · GW(p)

l33t pr41z Ph0R R41N. j00 r0X0R!

comment by [deleted] · 2011-05-17T20:54:07.648Z · LW(p) · GW(p)

Public commitment is a great way to improve one's chances of right action. And the "praise me" part of the set-up lets you potentially get even more warm fuzzies than the donation would alone! Nice job of community-usage and self-manipulation to get something productive done. Seriously.

comment by Kaj_Sotala · 2011-05-16T09:41:09.297Z · LW(p) · GW(p)

I momentarily stopped to think about a way to make this praise-comment clever. When I couldn't, on the spot, come up with anything clever enough, I considered waiting until I would. But then I realized that that might make me forget to comment entirely! So let me now praise you for your wonderful deed, which provides SIAI money, acts of creativity to us, and great well-being in the form of positive emotions and group bonding to everyone! Huzzah!

comment by Kutta · 2011-05-15T20:58:47.053Z · LW(p) · GW(p)

All my praise are belong to you.

comment by Larks · 2011-05-15T00:12:18.612Z · LW(p) · GW(p)

Congratulations on doing a thing closer to the best thing than many other relevant alternatives!

comment by cousin_it · 2011-05-15T11:33:03.027Z · LW(p) · GW(p)

I hereby praise ya. Make it rain for the singularity!

comment by Oscar_Cunningham · 2011-05-15T07:40:55.404Z · LW(p) · GW(p)

Excellent! You've made my day better and done something good at the same time!

ETA: To be clear, the word "Excellent!" is praise.

comment by ArisKatsaris · 2011-05-15T00:31:18.751Z · LW(p) · GW(p)

Kudos for a right action!

comment by MichaelHoward · 2011-05-15T00:30:40.439Z · LW(p) · GW(p)

For your act of righteousness, this comment praises you.

comment by Scott Alexander (Yvain) · 2011-05-15T00:18:35.507Z · LW(p) · GW(p)

This is an excellent action! Commendations and praise be to you!

comment by EchoingHorror · 2011-05-15T04:07:04.556Z · LW(p) · GW(p)

Your action, praise, do I.

  • Rationalist!Yoda

comment by [deleted] · 2011-05-27T10:39:37.141Z · LW(p) · GW(p)

I praise you for having the wisdom of using a long enough deadline. When I first read your comment, it felt like you were exploiting me, as if you were forcing me to share my limited praise resources. But because I had enough time, I got over myself, realized that this is not a zero-sum game, that this is not an attack on my status and that what you are doing is clever and good.

Well done, I praise you for your right action.

comment by RHollerith (rhollerith_dot_com) · 2011-05-19T21:36:08.528Z · LW(p) · GW(p)

I commend anyone who donates to SIAI unless the donor acquired the assets by stealing, defrauding or otherwise imposing undeserved harm on another -- and based on his writings here, the latter seems very unlikely in Rain's case.

comment by LucasSloan · 2011-05-19T20:10:15.801Z · LW(p) · GW(p)

And unto the ten thousandth generation, they sing Rain's praises for he saves 80 of them for each donation. Thank you very much for doing this.

Replies from: MixedNuts
comment by MixedNuts · 2011-05-20T20:22:45.541Z · LW(p) · GW(p)

I can't find your source for that number. I'm interested.

Replies from: LucasSloan
comment by LucasSloan · 2011-05-22T08:43:11.599Z · LW(p) · GW(p)

Here

Anna Salamon calculates that a dollar donated to SI saves on average 8 human lives.
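
(Presumably this is the arithmetic behind LucasSloan's figure above: at 8 lives per dollar, each $10 praise comment corresponds to roughly 80 lives.)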

comment by lsparrish · 2011-05-18T03:35:38.215Z · LW(p) · GW(p)

I praise you for your right action. Also, here is a random string of integers to prove the non-duplicate nature of my comment: 5224818730

Replies from: Alicorn
comment by Alicorn · 2011-05-18T03:36:06.281Z · LW(p) · GW(p)

How did you generate those integers? Are they really random?!

Replies from: lsparrish
comment by lsparrish · 2011-05-18T05:06:46.618Z · LW(p) · GW(p)

Here is the link I used.
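
For what it's worth, a hedged sketch of one way to produce such a 10-digit "proof of uniqueness" string with Python's standard library (the digits above actually came from the external service lsparrish linked):

```python
import secrets

# Draw ten decimal digits using a cryptographically strong source.
random_digits = "".join(secrets.choice("0123456789") for _ in range(10))
print(random_digits)  # e.g. "5224818730"; each run differs
```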

comment by CarlShulman · 2011-05-15T19:09:10.978Z · LW(p) · GW(p)

I praise you for this specific right action, and for the virtuous character and skills that it signals (honestly, based on other available info).

comment by Gedusa · 2011-05-15T18:42:39.257Z · LW(p) · GW(p)

Your right action is most excellent!

comment by Giles · 2011-05-15T18:33:48.038Z · LW(p) · GW(p)

I praise this act of taking a snarky comment literally and turning it into something wonderful. If this idea takes hold, we'll either see less snarkiness or more wonder.

Replies from: wedrifid
comment by wedrifid · 2011-05-15T19:18:53.079Z · LW(p) · GW(p)

I praise this act of taking a snarky comment literally and turning it into something wonderful. If this idea takes hold, we'll either see less snarkiness or more wonder.

By way of dissociation from bitter, somewhat self-righteous counter-snarkiness, I removed my praise comment that was the sibling of the parent and replaced it with praise for Vladimir. Hopefully this will lead to more pragmatic insight.

comment by FAWS · 2011-05-15T10:53:55.843Z · LW(p) · GW(p)

Praised be this commitment of action by Rain.

comment by XiXiDu · 2011-05-15T09:28:00.293Z · LW(p) · GW(p)

I praise you for acting less wrong.

comment by Nick_Tarleton · 2011-05-15T08:55:23.103Z · LW(p) · GW(p)

Thanks and compliments for your right action.

comment by Zetetic · 2011-05-15T07:27:06.410Z · LW(p) · GW(p)

You've my sincerest praise for this right and good action.

comment by Emile · 2011-05-15T06:11:21.020Z · LW(p) · GW(p)

Such an action is worthy of the praise it received!

comment by ata · 2011-05-15T05:24:31.905Z · LW(p) · GW(p)

Praise and blessings be upon thy name!

comment by Automaton · 2011-05-15T03:07:31.613Z · LW(p) · GW(p)

I praise you for acting rightly.

comment by Dorikka · 2011-05-15T02:46:18.865Z · LW(p) · GW(p)

Huzzah!

comment by Mercurial · 2011-05-15T00:15:27.057Z · LW(p) · GW(p)

Praise for right action! Thanks for doing this!

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-05-14T23:55:57.033Z · LW(p) · GW(p)

I praise this right action.

comment by arundelo · 2011-05-14T23:51:51.499Z · LW(p) · GW(p)

I hereby praise you for your right action. My username is arundelo.

Replies from: arundelo
comment by arundelo · 2011-05-14T23:55:43.186Z · LW(p) · GW(p)

Yes, a somewhat munchkinish way of fulfilling the non-duplicate requirement. Thanks for the free $10!

comment by Normal_Anomaly · 2011-05-14T23:33:44.058Z · LW(p) · GW(p)

Upvoted and replied. Kudos to you for the right action!

comment by Giles · 2011-05-14T23:31:12.221Z · LW(p) · GW(p)

My own donations to SIAI are currently limited by the peer pressure not to donate, rather than my actual available funds. As such, replying to your comment gives me an excellent way to donate by stealth. Praise for your weird brilliance!

comment by Barry_Cotter · 2011-05-14T23:22:54.225Z · LW(p) · GW(p)

Praise for right action. Attempting to do good should have positive EV, let's encourage that.

comment by Alicorn · 2011-05-14T23:05:26.621Z · LW(p) · GW(p)

*hugs* for donations to reduce x-risk!

comment by David Althaus (wallowinmaya) · 2011-05-19T21:56:01.027Z · LW(p) · GW(p)

You are awesome and your action is praiseworthy!

comment by XFrequentist · 2011-05-16T01:48:37.628Z · LW(p) · GW(p)

Props for your righteous action!

clenched fist salutes

comment by realitygrill · 2011-05-15T20:38:06.074Z · LW(p) · GW(p)

Oh Rain, I praise thou so that your status may soar (temporarily) for your right action!

comment by Peter_de_Blanc · 2011-05-15T15:59:50.926Z · LW(p) · GW(p)

I praise you for your right action.

comment by Benquo · 2011-05-15T14:05:23.933Z · LW(p) · GW(p)

This is good and right of you. I approve.

comment by Armok_GoB · 2011-05-15T14:01:11.417Z · LW(p) · GW(p)

Hugs Rain in a way signifying the praiseworthiness of this very right action! ^_^

comment by Armok_GoB · 2011-05-15T13:55:45.280Z · LW(p) · GW(p)

I hereby declare your action to be praised by me.

comment by cousin_it · 2011-05-15T11:36:31.892Z · LW(p) · GW(p)

Aaaand, you also get a 2X Praise Bonus! Thanks to Nesov for the suggestion.

comment by Bongo · 2011-05-15T01:03:29.369Z · LW(p) · GW(p)

*praises Rain*

comment by purpleposeidon · 2011-05-20T09:26:42.028Z · LW(p) · GW(p)

Our multitude of voices exalting Rain's donation rebound off the faster-approaching towers of the Singularity!

comment by Eneasz · 2011-05-18T22:03:51.014Z · LW(p) · GW(p)

Praise be unto Rain and the right action he is to undertake!

Replies from: MartinB
comment by MartinB · 2011-05-19T13:27:54.428Z · LW(p) · GW(p)

Praise + action

comment by atucker · 2011-05-18T18:05:47.670Z · LW(p) · GW(p)

Wow. That's really awesome for you to do.

Praise for Rain, and Rain's right action!

comment by Normal_Anomaly · 2011-05-18T01:14:31.033Z · LW(p) · GW(p)

Yet more praises rain on Rain.

comment by Zack_M_Davis · 2011-05-17T18:39:27.251Z · LW(p) · GW(p)

I too praise this right action.

comment by gscshoyru · 2011-05-17T18:04:11.855Z · LW(p) · GW(p)

Sweet. A free (for me) way to donate money. Thank you very much for providing this opportunity (i.e. I praise you for your right action.)

comment by Cunya · 2011-05-17T05:56:11.698Z · LW(p) · GW(p)

Cunya praises your right action!

comment by Ori93 · 2011-05-17T05:10:04.593Z · LW(p) · GW(p)

Thanks for your right action! I sincerely praise you.

comment by Tyrrell_McAllister · 2011-05-16T14:31:24.246Z · LW(p) · GW(p)

For your right action, I praise you.

comment by Miller · 2011-05-16T11:22:03.105Z · LW(p) · GW(p)

I could probably come up with some contrarian rationalization not to praise you, but I'll just not do that. Praise to you for making this minute more useful to the world than my last minute.

comment by MixedNuts · 2011-05-16T11:13:08.927Z · LW(p) · GW(p)

Usually, donating conditionally would be less right than unconditionally and asking for praise later. Yet in this context, knock-on effects make it righter. Major props.

comment by Nisan · 2011-05-16T03:49:04.224Z · LW(p) · GW(p)

I praise you for this right action.

comment by jasonmcdowell · 2011-05-16T00:24:50.583Z · LW(p) · GW(p)

I praise you for your right action. Not only does your action have recursive beauty, but it also, like a socio-volitional whirlpool, a decision-theoretic attractor, guides me by example.

Edit: Ah, so that's what you meant by duplicate.

Replies from: Larks
comment by Larks · 2011-05-16T18:43:41.889Z · LW(p) · GW(p)

Dupe

comment by loqi · 2011-05-15T23:19:32.364Z · LW(p) · GW(p)

I hereby extend my praise for:

  • Your right action.
  • Its contextual awesomeness.
  • Setting up a utility gradient that basically forces me to reply to your comment, itself a novel experience.

comment by EStokes · 2011-05-15T21:29:32.768Z · LW(p) · GW(p)

Thanks for doing such a great thing! :D

comment by Nick_Roy · 2011-05-15T20:20:53.626Z · LW(p) · GW(p)

I praise you for your right action, Rain. I honestly do.

comment by MinibearRex · 2011-05-15T19:30:49.364Z · LW(p) · GW(p)

The Knights Who Say Ni salute your noble undertaking, provided that you first build a working cello out of toothpicks.

comment by Will_Newsome · 2011-05-15T04:47:09.983Z · LW(p) · GW(p)

Praise be to you for your right action! May you be blessed by the gods.

comment by Synzael · 2011-05-23T17:01:50.058Z · LW(p) · GW(p)

Thank you ^_^ I really appreciate you supporting a path towards an FAI singularity.

comment by endoself · 2011-05-21T17:08:55.914Z · LW(p) · GW(p)

I praise your right action.

comment by Karl · 2011-05-18T01:56:22.549Z · LW(p) · GW(p)

Congratulations on raising the expected utility of the future!

comment by AlexMennen · 2011-05-17T19:49:02.331Z · LW(p) · GW(p)

Good for you. Allowing other people to force you to do what you should be doing anyway is a great way to increase utility!

comment by curiousepic · 2011-05-16T18:14:09.506Z · LW(p) · GW(p)

I certainly hope you mean non-duplicate per-user, since I'm not going to read through every one of the comments to ensure that my response is non-duplicate. In any case, I sing your praise on high.

comment by TimFreeman · 2011-05-15T19:14:33.888Z · LW(p) · GW(p)

I praise your right action, and accept the minor karma hit.

Hmm, I wish I knew how to avoid this post polluting the "Recent Comments" list.

Replies from: steven0461, Rain
comment by steven0461 · 2011-05-15T19:25:11.729Z · LW(p) · GW(p)

Hmm, I wish I knew how to avoid this post polluting the "Recent Comments" list.

Rain accepts PMs.

comment by Rain · 2011-05-15T19:18:04.555Z · LW(p) · GW(p)

It seems most people want their praise to be public, in which case avoiding the recent comments list would be counterproductive.

comment by wedrifid · 2011-05-15T11:40:42.192Z · LW(p) · GW(p)

Praise for your right action.

comment by Nick_Tarleton · 2011-05-15T08:54:59.871Z · LW(p) · GW(p)

Thank you for your right action.

comment by atucker · 2011-05-14T16:30:44.785Z · LW(p) · GW(p)

Where can I exchange snide remarks for constructive criticism?

In all seriousness, I don't think Giles is trying to get much applause here, so much as make it easier for people to coordinate their efforts.

I think (correct me if I'm wrong) that he knows that he doesn't know the specific steps to take in order to accomplish his goals. Which is why he wants to talk to these people.

I think that he has done a pretty bad job of PR, and should have more concrete ideas and plans before he continues posting on the subject. Furthermore, he's continuing to use the heavily loaded phrase "save the world" in ways which probably discredit it, and this site.

That being said, I think that this comment is almost entirely destructive, and makes no progress towards anything other than continuing to tear Giles down. Which the current karma system is already doing.

Replies from: Giles, Giles
comment by Giles · 2011-05-15T02:13:52.707Z · LW(p) · GW(p)

If I'm going to be torn down, I appreciate information as to why. A snide remark is a lot more useful for this than a plain downvote.

comment by Giles · 2011-05-15T02:01:27.582Z · LW(p) · GW(p)

Where can I exchange snide remarks for constructive criticism?

Right here. Following Rain's example:

Reply to this comment with snide remarks, about a linked comment/post on a topic which a LW reader could be expected to have some familiarity with.

I will attempt, within my ability and within reason, to turn each snide remark into a constructive criticism. Up to a limit of 101 comments. I won't respond if someone else does a satisfactory job first.

Duplicates are allowed but will yield duplicate responses. There is no per-user limit but please play nice and don't hog them all for yourself.

(EDIT: time limit - end of 2011)

Replies from: Giles
comment by Giles · 2011-05-15T02:02:23.557Z · LW(p) · GW(p)

I'll kick it off with Vladimir_Nesov's example:

Where can I exchange units of applause for units of world-saving?

Replies from: Giles
comment by Giles · 2011-05-15T02:08:13.539Z · LW(p) · GW(p)

I appreciate Giles's stated motivations but feel he is not pursuing the optimal approach to achieving them. Specifically, he has been a little too hasty recently when posting to LW; some of his posts appear to be "empty" or even "trollish", and if anything this could be seen to damage his cause rather than advance it. If he has these goals but is unsure of the best approach to achieving them, perhaps he would be better off personally corresponding with those who have similar goals instead of engaging in disruptive or irritating activity such as this.

comment by wedrifid · 2011-05-14T12:23:03.649Z · LW(p) · GW(p)

Brilliant.

comment by wedrifid · 2011-05-15T19:11:51.688Z · LW(p) · GW(p)

I praise you for your wry incisiveness.

comment by Dorikka · 2011-05-14T05:53:32.079Z · LW(p) · GW(p)

I'm planning to save the world by accumulating a large amount of money and donating it to the most effective charity that I can find.

Two reasons why I currently think this path is best for me:

1) I think that my mind is much better suited to accumulating money than directly working on really hard problems. Decision theory just makes my head hurt.

2) If I change my mind about which charity I consider effective, being a donor allows me to immediately act on my updated beliefs without wasting my past learning. Ex: If I became an FAI researcher and then (after I had spent years learning how to be an effective FAI researcher) decided that life-extension technologies were more effective, I would have to study a bunch of new stuff. If I'm donating, I just send the money to a different place. Curious note: The influence of this factor on my final decision is inversely related to my confidence level in my current judgement.

Edit: I may be wrong about #2; the instrumental utility granted from such may be smaller than I am estimating it to be. However, I think that I have enough of a comparative advantage in making money that even if #2 grants me only a small amount of utility, my decision is likely to remain the same.

I wanted to state this before people began to argue about the merits of #2 (if they did so), because it tends to be irritating when you argue against a proposition and you find out after the fact that the person who initially believed the proposition to be true assigned less importance to its truth than you thought.

Replies from: Eliezer_Yudkowsky, wallowinmaya
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-05-14T23:57:46.899Z · LW(p) · GW(p)

I would very strongly advise that you donate something while you're trying to accumulate money. Otherwise I would bet against a generic person in your situation ever following through (Outside View).

Replies from: Dorikka, Giles
comment by Dorikka · 2011-05-15T02:31:56.641Z · LW(p) · GW(p)

Your statement makes intuitive sense, but do you have any data that you think would be a more persuasive argument?

comment by Giles · 2011-05-15T01:24:45.475Z · LW(p) · GW(p)

I hadn't considered this one as an argument against the "entrepreneur now, donate later" strategy. It works from the inside view too - I don't want to expose myself to influences that might make me strongly modify my utility function in the direction of selfishness, and surrounding myself with go-getting business types might do just that.

Speaking of which, I still owe you money. I have personal issues which currently prevent me from making a significant SIAI donation, but I'm trying to strategize my way around them.

Replies from: beriukay
comment by beriukay · 2011-05-15T08:18:37.058Z · LW(p) · GW(p)

Maybe we could set up a donation matching system that, while not as amazing as the ones by the big donors, could add up to something interesting and fruitful. The logistics seem a bit difficult to set up, but I know that I would be willing to match funds with someone in a similar position as myself.

Replies from: Giles
comment by Giles · 2011-05-15T15:32:13.953Z · LW(p) · GW(p)

I'm interested. What is it that makes people want to pair up, rather than just individually giving as much as they can? If there's a pool of potential donors who are limited by "akrasia" then yes, that's a totally awesome idea.

I'll see if anyone at SIAI is interested and maybe discuss with you how it could be implemented.

Replies from: beriukay
comment by beriukay · 2011-05-16T11:22:07.696Z · LW(p) · GW(p)

I think you're right, that akrasia would be one of the biggest reasons. There's also the possibility that there aren't enough applause lights for giving, and that thus giving to the SIAI just doesn't feel as good as it should. And since LW doesn't press my superstimulus buttons the way a video game does, it hurts more to pay for 20 hours of entertainment here than it does to pay for, say, Portal 2, which didn't even provide 20 hours of fun (but oh what fun...).

I've tried to set up donation matching with friends before. Most are just not interested. The one that was willing has recently decided to buy a house and get married, so he can't play any more. But for a while, I was a part of a superorganism that had twice the donating power as just me alone, and that felt pretty cool.

I'll start thinking of how it could be implemented, just in case the SIAI is interested.

Replies from: Rain
comment by Rain · 2011-05-16T13:48:59.288Z · LW(p) · GW(p)

There's also the possibility that there aren't enough applause lights for giving, and that thus giving to the SIAI just doesn't feel as good as it should.

Until I explicitly asked for it, this was certainly true for me. The Red Cross thanks me and provides gifts or status boosts more than 12 times, in person, on each individual visit to donate blood, sometimes doing so in a public forum. SIAI doesn't even send an automated email any more.

Replies from: Alicorn
comment by Alicorn · 2011-05-16T20:00:33.980Z · LW(p) · GW(p)

I find it annoying when the Red Cross calls me, even when it's just with thanks, but part of why I've given blood in the past is that there's a plaque on the wall in my grandma's house of a newspaper clipping in which my grandfather is praised for exceeding the (I think) 10-gallon mark of blood donation.

Replies from: Clippy
comment by Clippy · 2011-05-16T20:12:14.052Z · LW(p) · GW(p)

Human blood has very low iron content by weight -- it is measured in micrograms per deciliter.

Replies from: CuSithBell
comment by CuSithBell · 2011-05-16T20:19:16.719Z · LW(p) · GW(p)

I was also disappointed when I learned that the process of extracting the iron is nontrivial.

comment by David Althaus (wallowinmaya) · 2011-05-14T10:43:22.538Z · LW(p) · GW(p)

What do you think is the best strategy to earn money?

Replies from: Dorikka
comment by Dorikka · 2011-05-14T17:37:17.124Z · LW(p) · GW(p)

I think that it's opening a business, though I don't yet know in what industry nor in which country such would be most profitable.

Replies from: None
comment by [deleted] · 2011-05-15T21:41:24.576Z · LW(p) · GW(p)

My strategy as explained in this LW comment has accumulated 351k USD in almost 7 years; I'm almost 28 years old. It may not be optimal, and it's definitely not universally applicable, but I suspect that it would work for many people. Its virtues are that it's not risky, and (most importantly!) it's devoid of magic tricks. It just requires hard work (but not that hard) over many years (but not that many).

I've been thinking about writing a top-level post (which would be my first) along these lines.

Replies from: curiousepic, Dorikka, Rain
comment by curiousepic · 2011-05-16T18:18:56.467Z · LW(p) · GW(p)

Out of curiosity, what percentage of that amount have you donated? I would encourage you to write this post.

comment by Dorikka · 2011-05-16T02:57:23.948Z · LW(p) · GW(p)

This is my default strategy (I'm getting a degree in Chemical Engineering) if I can't get a better one to come to fruition.

If you have any additional insights beyond those in your linked comment, a top-level post might be useful.

comment by Rain · 2011-05-16T18:23:19.009Z · LW(p) · GW(p)

A blog I follow with a similar life strategy is Get Rich Slowly.

comment by Kevin · 2011-05-14T06:04:01.182Z · LW(p) · GW(p)

I want to have saved the world.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-05-14T23:59:27.017Z · LW(p) · GW(p)

If I have a choice between actions, and one of them is more likely to save the world than the other, I will take the one that is more likely to save the world.

Even I don't live up to that every time, not even close, but it sure sounds a lot scarier than "wanting to save the world", doesn't it?

Replies from: MixedNuts, Giles, Dorikka, NancyLebovitz
comment by MixedNuts · 2011-05-15T11:08:06.088Z · LW(p) · GW(p)

Much less scary.

"Save the world" is a very high-level goal, and I don't know how to do it. Your procedure is straightforward. I just need to invoke it when I recognize there's a choice. Resisting temptation is much easier (not just simpler, easier) than deciding whether you're being tempted.

Also, that's not actually your goal. You don't rob banks.

comment by Giles · 2011-05-15T05:45:17.958Z · LW(p) · GW(p)

Dunno - to me they sound almost equivalent (except that you have no other motivations at all, and I'm not sure I can honestly say that about myself).

In any case, I'm not sure what sounds scary. It's all the people who don't seem to want to improve the world in any way at all that scare me.

comment by Dorikka · 2011-05-15T02:41:57.437Z · LW(p) · GW(p)

If I have a choice between actions, and one of them is more likely to save the world than the other, I will take the one that is more likely to save the world.

For some actions, the impact they have on the probability that the world will be saved is small enough to be overwhelmed by the amount of immediate fun they generate, so an action that generates lots of immediate fun may be more desirable than one which increases the chance that the world will be saved by a really-super-small amount. Are you saying that for EV_Eliezer, there is no increase small enough in the chance that the world will be saved such that a huge amount of immediate fun is of greater terminal utility?

Replies from: Rain
comment by Rain · 2011-05-15T14:09:00.395Z · LW(p) · GW(p)

Are you saying that for EV_Eliezer, there is no increase small enough in the chance that the world will be saved such that a huge amount of immediate fun is of greater terminal utility?

I think you quoted the wrong part to answer your question.

Even I don't live up to that every time, not even close

It appears he takes many actions which he thinks are less likely to save the world than the known alternative.

comment by NancyLebovitz · 2011-05-15T00:26:08.179Z · LW(p) · GW(p)

Approximately what proportion of your actions (or time spent, if that's easier to compute) have a clear chance of contributing to saving the world?

comment by Mitchell_Porter · 2011-05-15T03:03:11.913Z · LW(p) · GW(p)

Yesterday I went on vacation from LW, but today I thought I'd see how this post was going, since it had the potential to produce something new... Alas, in about 12 hours, it has sunk from -1 to -6, as the mob decides it is about nothing but "applause lights" and votes it down. This is a failure of imagination and it's about to become a lost opportunity. It is not every day that someone shows up wanting to organize the world-savers, and in this case, I see definite potential. Or is it really the case that all those altruists have no need for support? End of lecture, back to vacation.

Replies from: Giles
comment by Giles · 2011-05-15T05:41:25.238Z · LW(p) · GW(p)

I'm glad I have your support, but from my point of view none of this really works as an excuse. I'm trying to win here, and this post was clearly not a win (although it generated some interesting discussion, so maybe not quite so clearly).

There are things I want to change about LW culture too, but I know that I won't achieve that by whingeing. If LW culture is to change, then my own attitude really has to change first.

The lost opportunity may not be as great as you think. I'm committed to this, and I'm not going to stop trying to organize and support rational altruists just because of a few failed attempts.

comment by ata · 2011-05-14T06:17:24.246Z · LW(p) · GW(p)

Yes, I strongly prefer that earth-originating humane life survive and thrive and spread throughout the universe and make it much more fun and awesome to the fullest extent of what the laws of physics will allow, and I intend to use my life for this purpose.

(Though I'm curious, what kind of cooperation are you talking about, beyond what's already facilitated by entities like SIAI, LW, FHI, and the Existential Risk Reduction Career Network?)

comment by wedrifid · 2011-05-14T06:58:03.392Z · LW(p) · GW(p)

I am nauseated by the very thought of being included in your list, despite my own practical plans in that direction. What is it with empty applause-generating exhortations these days? Ick. Double ick.

PS: Being put on a list of people with Dorikka's line of thought would not be psychologically distressing to me in the least. It is not nearly so creepy sounding.

Replies from: paulfchristiano, Perplexed, rhollerith_dot_com
comment by paulfchristiano · 2011-05-14T18:02:52.112Z · LW(p) · GW(p)

Being "creepy sounding" seems like a very bad reason to be opposed to something. Cryonics is creepy sounding. The mission of the SIAI is creepy sounding (for exactly the same reason as this post, I would say). I don't even see how this differs from aversion to anything strange, which seems horribly destructive in the aggregate.

There may be plenty of other reasons to downvote or criticize this post (empty applause being the main one), but I don't see any legitimate cause for psychological distress. Of course, you may fear that a reader will draw incorrect conclusions about your motivations/beliefs. I don't see why that in particular would nauseate, though--just prompt (actionable) concern.

Replies from: wedrifid
comment by wedrifid · 2011-05-14T18:37:30.897Z · LW(p) · GW(p)

Being "creepy sounding" seems like a very bad reason to be opposed to something.

Making a public declaration is a social act, as is making your own identity be visibly attached to something. When considering the consequences of such actions the 'creepy' vibe or 'ick' aversion provides critically important information about the effects that can be expected.

Replies from: paulfchristiano
comment by paulfchristiano · 2011-05-14T19:05:36.276Z · LW(p) · GW(p)

This seems perfectly fair (and if anything is a particular concern for me, given how easily my online/offline identities are connected). But my response would be more along the lines of "I am concerned that this statement feels extreme and arrogant even if technically accurate; I really don't want my identity so publicly associated with this position. Could you either remove my name from the list, or clarify my position inline?" Alternatively, "My gut reaction to this is that it feels creepy, and while I wouldn't use this gut reaction to support a normative judgment, I am concerned that others might."

Neither of these sounds much like your position as you've expressed it.

comment by Perplexed · 2011-05-14T18:12:35.407Z · LW(p) · GW(p)

Well, gee. Look at all the applause wedrifid has garnered.

Applause lights still work around here, especially if you know your audience.

Replies from: wedrifid
comment by wedrifid · 2011-05-14T18:26:33.450Z · LW(p) · GW(p)

Applause lights still work around here

When was that ever in doubt?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-05-14T19:38:43.443Z · LW(p) · GW(p)

Disapproval, not surprise.

comment by RHollerith (rhollerith_dot_com) · 2011-05-14T07:47:28.315Z · LW(p) · GW(p)

Being put . . . would be psychologically distressing to me in the least.

I think you meant "would not be".

Replies from: wedrifid
comment by wedrifid · 2011-05-14T07:53:38.753Z · LW(p) · GW(p)

I think you meant "would not be".

Fixed.

comment by PhilGoetz · 2011-05-15T04:28:29.675Z · LW(p) · GW(p)

Looks like if you want to save the world, you've gotta accept that you're going to lose some karma.

Replies from: childofbaud
comment by childofbaud · 2011-05-15T18:25:38.380Z · LW(p) · GW(p)

Looks like if you want to save the world, you've gotta accept that you're going to lose some karma.

Seems like the stakes have lessened somewhat. Socrates lost his life doing similar things.

comment by RHollerith (rhollerith_dot_com) · 2011-05-14T15:49:21.392Z · LW(p) · GW(p)

A call to action should come with a definite goal IMHO. This call to action comes with not much more than a collection of vague motherhood statements.

comment by EchoingHorror · 2011-05-14T08:24:51.302Z · LW(p) · GW(p)

I want the world to not need to be saved, but will settle for it being saved. The reality of existential risk is such an inconvenience. I want to help, but probably won't have, recognize, and successfully act on the opportunity to do so.

The scenarios I can imagine where a list like this would be useful are farfetched.

comment by [deleted] · 2011-05-14T13:40:17.273Z · LW(p) · GW(p)

I have a strong preference for the world staying around.

comment by David Althaus (wallowinmaya) · 2011-05-14T11:13:27.773Z · LW(p) · GW(p)

Well, I think most of us want to save the world, or at least to help save it. The BIG problem is finding an efficient strategy to do so. We should make concrete proposals, not merely profess our altruism. ...and so as not to be too hypocritical, here are my naive proposals:

  1. If your IQ is enormous -> FAI research
  2. If you have money-making skills -> donate millions to SIAI, FHI, or other charities
  3. If your IQ is really high -> do some research (maybe SENS, computer science, nanotech, etc.)
  4. If you're not that clever or you suffer from akrasia -> get a useful but not too challenging job, like becoming a biology teacher and fighting creationism, or become a good journalist or lawyer, etc.
  5. Get involved in online discussions and make the world a little more rational, and criticize vague posts or comments full of applause lights, like this one!

Replies from: Pavitra, CuSithBell, tenshiko
comment by Pavitra · 2011-05-14T15:29:14.914Z · LW(p) · GW(p)

I would slightly modify step 1 as follows: if you think there's a chance you might be useful to SIAI, send them a letter. If they don't accept you, continue to step 2.

Replies from: Giles, wallowinmaya
comment by Giles · 2011-05-14T23:57:40.493Z · LW(p) · GW(p)

This isn't exactly what I did. Instead I'm signing up as a volunteer. But in either case the SIAI is the closest thing I know of to a group of rational do-gooders who are actually cooperating. So I want to try and get involved.

comment by David Althaus (wallowinmaya) · 2011-05-14T17:30:21.777Z · LW(p) · GW(p)

I agree. IMO, doing FAI research means first contacting and consulting the smartest guys in the field, which is presumably the Yudkowsky gang.

comment by CuSithBell · 2011-05-14T15:18:53.997Z · LW(p) · GW(p)

If your IQ is enormous -> FAI-research

Good post, but this gave me pause. Does LW / do you really think that IQ is the relevant factor here?

Replies from: nazgulnarsil, wallowinmaya
comment by nazgulnarsil · 2011-05-15T04:39:30.013Z · LW(p) · GW(p)

Is everyone at SIAI in the triple nines cut off? (IQ in the 99.9 percentile)

Replies from: EchoingHorror
comment by EchoingHorror · 2011-05-15T06:32:20.698Z · LW(p) · GW(p)

For these eleven ... maybe. Much more likely than 10^-33 for eleven average or random people. My guess is yes, but they may just be good at presenting their credentials.

Replies from: CuSithBell
comment by CuSithBell · 2011-05-15T15:07:24.382Z · LW(p) · GW(p)

Sure, it's a higher chance, but I'd still say it's pretty improbable - my understanding is that IQ isn't that great a measure.

comment by David Althaus (wallowinmaya) · 2011-05-14T16:15:07.152Z · LW(p) · GW(p)

To be clear, by IQ I mean intelligence, or abstract, analytical reasoning. But what else should you need? Maybe self-confidence?

Replies from: Kaj_Sotala, CuSithBell
comment by Kaj_Sotala · 2011-05-14T18:39:36.946Z · LW(p) · GW(p)

But what else should you need? Maybe self-confidence?

Motivation, energy and persistence. The best smarts in the world don't help much if actually studying all the requisite subjects feels like too much work.

Many people with high IQs are at a disadvantage, since they get used to all schoolwork being easy and not requiring any effort. When things start to actually get hard, they give up. This is one of the main reasons why I concluded that it isn't worth it for me to try to get into machine learning or other high-mathy fields, after beating my head against a rock wall for a couple of years.

Replies from: wallowinmaya
comment by David Althaus (wallowinmaya) · 2011-05-14T19:45:47.602Z · LW(p) · GW(p)

Motivation, energy and persistence. The best smarts in the world don't help much if actually studying all the requisite subjects feels like too much work.

Right. I am a poster child for akrasia and laziness. But, wow, I never thought this was a problem for you. Your output is impressive. (At least to me; I've published one sentence on my blog in a year....) Enough flattery. What kind of research do you focus on instead? You've mentioned cognitive science somewhere, at least I think so. In which fields do you think people with akrasia problems and an IQ of around 120 can have the most impact on reducing existential risks? Hopefully not "make money and donate"; I have some emotional, maybe irrational, concerns about capitalism. Sorry if this comment is too personal; the LessWrong culture seems to punish this kind of comment, but I would value your advice!

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2011-05-14T21:00:39.424Z · LW(p) · GW(p)

I don't mind. :) I think there should be more of this kind of discussion.

In which fields do you think people with akrasia-problems and IQ of around 130 can have the most impact for reducing existential risks?

I don't think there is a general answer to this. There are many forms of akrasia. They have several different causes and also several different effects. Where you can have the most impact depends on what your akrasia allows you to do, and which parts of your akrasia you can beat. It also depends on what your natural talents are otherwise, what you're intrinsically motivated to do, and what you can motivate yourself to do even though you have no intrinsic interest in it.

For instance, you were surprised to hear about my issues because you've seen me write a lot. The thing is, I find writing-related akrasia relatively easy to beat. However, when it comes to learning new math, my akrasia gets a lot worse. Overcoming it usually requires that I see interesting applications for which I can use the math at once. I'm also not intrinsically curious about most math: the best math folks are the ones who get a lot of practice at it because they keep playing with fun math problems all the time. I certainly play with math problems every now and then as well, but nowhere to the degree that some people do. I still haven't read most of the decision theory discussion here.

My advice would be to figure out where your comparative advantage is. Look at the things you're good at and which come easily to you. Then try to figure out whether there's something x-risk-related that could benefit from those skills.

Personally, I finally figured out that my comparative advantage is probably in writing and the social sciences. I just finished a BA in cognitive science, and I'm now taking a three-month sabbatical to concentrate on a) getting practice in writing and b) improving my mental health by various means, particularly meditation. My current long-term goal is to hone my writing to the point where I can make a living with it and become an influential enough writer/public figure to significantly raise support for x-risk work.

I currently think this is the best way to go for me: but for somebody else, the best way to go might be something different entirely.

Replies from: wallowinmaya
comment by David Althaus (wallowinmaya) · 2011-05-14T21:53:37.194Z · LW(p) · GW(p)

Thanks for the advice! I really appreciate it.

comment by CuSithBell · 2011-05-14T16:20:33.839Z · LW(p) · GW(p)

That I would agree with (plus maybe some sort of "intellectual creativity", if that's not already included, though I guess it should be). Generally, though, I see IQ used to refer to the thing measured by IQ tests rather than to intelligence.

Replies from: wallowinmaya
comment by David Althaus (wallowinmaya) · 2011-05-14T17:20:04.427Z · LW(p) · GW(p)

Ah, sorry. For me, IQ is just an abbreviation for intelligence. (In its broadest meaning; I can't define it. But you know, Einstein, Russell, Bostrom, Yudkowsky etc. have something in common, which I would say is intelligence.) But you're right, in reality IQ means something different; I guess I should change my use of the word...

Replies from: CuSithBell
comment by CuSithBell · 2011-05-14T17:37:56.966Z · LW(p) · GW(p)

Well, cheers then! Confusion: solved.

comment by tenshiko · 2011-05-14T15:16:17.513Z · LW(p) · GW(p)

Here, my dear Giles, have a written downvote in the form of supporting this comment. This. Is. Applause Lights.

Replies from: Giles
comment by Giles · 2011-05-15T20:47:49.403Z · LW(p) · GW(p)

I'm willing to accept and update on criticism that this post was trollish or otherwise inappropriate. But I'm not sure I agree with the applause light criticism in particular.

If I understood it correctly, Eliezer described an applause light as a statement that is vacuous because its negation is obviously unacceptable. But there have been people here who stated that they don't want to save the world (not just that they disagree with how it's phrased or presented), and they didn't get demolished for it.

comment by katydee · 2011-05-14T20:59:44.270Z · LW(p) · GW(p)

This seems more appropriate for the Discussion section than for the main page.

Replies from: Giles
comment by Giles · 2011-05-14T23:52:19.526Z · LW(p) · GW(p)

OK, you're right. Technical note: I appear to have the ability to move it to the discussion area, but do you know what will happen if I do? I don't want to end up with a duplicate post, or to accidentally lose all my hard-earned negative karma by having all the downvotes scaled back to 1 point each.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-05-15T00:47:23.482Z · LW(p) · GW(p)

Moved it for you (also wanted to see how/whether that works when I do it). Your negative Karma looks intact. :-)

Replies from: Giles
comment by Giles · 2011-05-15T01:09:23.497Z · LW(p) · GW(p)

Yep, that seems to have worked as intended. Thanks.

comment by novalis · 2011-05-14T05:08:25.707Z · LW(p) · GW(p)

I want to save the world.

comment by Alex_Altair · 2011-05-14T05:02:21.619Z · LW(p) · GW(p)

I want to save the world, specifically by helping to fulfil SIAI's mission.

Replies from: Giles
comment by Giles · 2011-05-15T01:02:17.328Z · LW(p) · GW(p)

Awesome. I've signed up to SIAI as a volunteer, as they seem to be an example of what I'm interested in - a community of genuine rational altruists. I hope it'll work out well.

comment by TimFreeman · 2011-05-15T19:59:35.512Z · LW(p) · GW(p)

I want the world to be saved, and I am willing to take action to make that happen, so long as the actions I take don't make me feel like a victim. I tend to feel like a victim if I take an action that reduces my standard of living, if I contribute to a lost cause, or in a few other scenarios that don't seem relevant here.

I presently feel that SIAI is holding itself back by apparently believing that solving the FAI problem is blocked on any or all of the following:

  • Newcomb's problem
  • Dealing with people who have non-instrumental concerns about what is done with simulations of them, beyond saying "don't care about that"
  • Caring what happens to causally disconnected areas of space-time that resemble the here-and-now
  • Caring about ethical systems that have unbounded utility, beyond saying "don't make an ethical system with unbounded utility"
  • Probably a few other pieces of obscure philosophy I can't recall right now or don't know about yet

I have not yet posted coherent arguments against these things. I plan to spend some time on that for a while, since the people here claim to be responsive to good arguments. I don't really expect changing SIAI's position on enough of these issues to be politically possible, so I expect to fail and then focus my efforts elsewhere. Hmm, I suppose I should try to find and link up with any non-SIAI people on Giles' list above at that point.

I suppose the general lesson to learn from this is that in at least one case, lack of agreement on a general approach is blocking cooperation.

Replies from: Wei_Dai, Document
comment by Wei Dai (Wei_Dai) · 2011-05-16T03:02:26.577Z · LW(p) · GW(p)

In the past I've made the opposite argument to SIAI, which seemed to be well received, that there were more philosophical problems that need to be solved for FAI than they may have realized. Obviously it would be great news if that turns out not to be the case, so I would be really interested to hear your arguments.

comment by Document · 2011-05-16T04:16:01.381Z · LW(p) · GW(p)

I presently feel that SIAI is blocking itself by apparently believing that solving the FAI problem is blocked on any or all of the following:

  • Newcomb's problem

I thought SIAI consensus was that Newcomb's problem was solved, and not a block at all?

  • Dealing with people who have non-instrumental concerns about what is done with simulations of them, beyond saying "don't care about that"

It's not so much that they feel they have to deal with those people as that they are those people.

(Haven't read further yet.)

comment by Oscar_Cunningham · 2011-05-14T19:49:37.395Z · LW(p) · GW(p)

"I am concerned that this statement feels extreme and arrogant even if technically accurate; I really don't want my identity so publicly associated with this position." Could you remove my name from the list please?

ETA: Thanks!

comment by Emile · 2011-05-14T14:20:32.810Z · LW(p) · GW(p)

I want mankind to be saved, and reach the stars.

comment by Benquo · 2011-05-14T11:10:15.992Z · LW(p) · GW(p)

I want the world to be saved. If that means I have to do something about it, then I have to do something about it.

Replies from: Giles
comment by Giles · 2011-05-15T00:03:16.788Z · LW(p) · GW(p)

There's probably something you can do to make the world a little more saved, or saved with a slightly higher probability. Is the expected payoff really too small to be worth looking into, compared with your other motivations?

Replies from: Benquo
comment by Benquo · 2011-05-15T14:02:12.514Z · LW(p) · GW(p)

It's much too important not to look into. But I think I need to become better and more powerful (which I am working on!) before I can really be of service.

comment by endoself · 2011-05-14T04:49:02.597Z · LW(p) · GW(p)

I want to save the world.

comment by Carwajalca · 2011-05-18T14:15:14.158Z · LW(p) · GW(p)

I want to increase the probability of world survival. This I intend to do by choosing a career which has some impact on existential risk and by donating money to SIAI. I also believe that promoting cryonics decreases existential risk indirectly - if you expect to be around 1000 years from now, that tends to give a longer-term view on matters.

comment by Rain · 2011-05-14T22:48:05.466Z · LW(p) · GW(p)

Keeping up with the amount of analysis and meta-analysis done here is quite exhausting.

comment by dvasya · 2011-05-14T18:29:48.510Z · LW(p) · GW(p)

I, too, want to save the world.

comment by Document · 2011-05-14T05:12:59.908Z · LW(p) · GW(p)

I want ~~to save the world~~ ~~the world to be saved~~ to improve the world.

comment by JoshuaFox · 2012-03-15T13:54:42.056Z · LW(p) · GW(p)

The Lifeboat Foundation has built a list of people, some high-status, who have said that they want the world saved. They have done nothing else, but this list is a good thing to have.

comment by Grognor · 2012-01-15T04:25:33.797Z · LW(p) · GW(p)

I want to live forever.

And I can't do that if the world ends, now can I?

comment by MatthewBaker · 2011-08-05T17:39:13.309Z · LW(p) · GW(p)

I want to help save the world just as much as I want the world to be saved, but either would be amazing from my perspective.

comment by RHollerith (rhollerith_dot_com) · 2011-05-16T05:23:41.675Z · LW(p) · GW(p)

I want the world (i.e., civilization) to survive. I would choose a lower standard of living for myself and a lower probability of personal survival to increase the probability of global survival.

Except for rather minor exertions (such as devoting a small fraction of my time and energy over a couple of years to making sure that my rather strange set of values had at least one advocate in the singularitarian conversation -- something I stopped doing around April 2009), I have not actually done anything for my civilization, because I am so ridiculously disabled by chronic illness that with p=.95 I must allocate almost all of my resources to solving that bitch of a problem before I can be of any significant use to myself or the world.

Replies from: Carwajalca
comment by Carwajalca · 2011-05-18T14:09:05.328Z · LW(p) · GW(p)

Hope you get well soon!

comment by Nick_Roy · 2011-05-15T20:32:59.108Z · LW(p) · GW(p)

I want to participate in saving the world in an important way.

comment by childofbaud · 2011-05-15T18:58:43.146Z · LW(p) · GW(p)

But equally clearly, the list [of people who want to save the world] will not include everyone.

What are you basing this claim on?

Replies from: wedrifid
comment by wedrifid · 2011-05-15T19:04:47.698Z · LW(p) · GW(p)

What are you basing this claim on?

Obviousness? Exposure to at least one person who has declared their disinclination to save the world?

Replies from: childofbaud
comment by childofbaud · 2011-05-15T19:31:13.924Z · LW(p) · GW(p)

Obviousness? Exposure to at least one person who has declared their disinclination to save the world?

Point taken. The list likely won't include everyone. :-)

I interpreted the original statement as "the list won't include a significant majority", because of the context it was given in. Perhaps Giles can chip in and say whether I was mistaken.

Replies from: Giles
comment by Giles · 2011-05-15T19:58:45.914Z · LW(p) · GW(p)

I meant "the list won't include a significant majority". (Possibly weak) evidence for this is the underfunding of organizations which actually appear to be trying to save the world (specifically GiveWell's charities and the SIAI).

I say possibly weak because this funding gap comes about as a result of people's behaviour, not their stated preferences. So it could be seen as a failure of rationality rather than of motivation. As mentioned on this site before, people lack a window on the back of the neck that would let you read their volition, so it's difficult to distinguish between the two cases from the outside.

Also note the apparent lack of a thriving support community for people with these ambitions.

Replies from: childofbaud, Carwajalca
comment by childofbaud · 2011-05-15T22:32:05.065Z · LW(p) · GW(p)

A Google search for "save the world" yields 11,000,000 results. A search for "harm the world" yields 242,000. Also, the top results for the latter are framed as cautionary tales, rather than normative instructions, or communities for how to accomplish the malignant goal.

Saving the world is a very commonly expressed sentiment, which is why compiling a list of people who want to save the world seems a little redundant to me. A list of people who have saved the world might be a tad more useful.

As far as I know, only an infinitesimal fraction of the world's population consciously sets out to be evil or to harm the world. It's more a case of the road to hell being paved with good intentions. I'm pretty sure there have been many studies about this, though I'd have to dig for them again. Perhaps someone else can post them.

Neither the stated desire nor the action implies donating to charities. Even you have admitted to this in the past.

I thought your claim might be based on the replies to your HELP! I want to do good thread. In that case, I thought I should point out that no equivalent "HELP! I want to do bad" or "HELP! I want to be completely benign" threads were ever created.

One could easily verify your claim by making such posts and counting the replies. If one wanted to be really accurate about it, one could also go through the post history of the respondents, to be sure they're not just being contentious but truly ill-intentioned.

Extending the survey to the population at large would be similarly trivial. One could tell people on the street about a one-question survey, and if they decide to participate, alternate between: "Do you want to save/improve the world?" and "Do you want to harm the world?"

(This might be a fun exercise for the Toronto LW group, now that I think about it. Both to find the answer out for ourselves, and to get people thinking about the subject. Because thinking often precedes action. Or at least it should...)

Replies from: Rain, MixedNuts
comment by Rain · 2011-05-15T22:38:40.708Z · LW(p) · GW(p)

A list about people who have saved the world might be a tad more useful.

Stanislav Petrov, for one.

comment by MixedNuts · 2011-05-15T22:41:40.699Z · LW(p) · GW(p)

Only Disney villains want to harm the world. The alternative to "wanting to save the world" is "using world quality as a free variable when optimizing for other purposes" (that is, not caring). There's no reason for a "HELP! I want to do something unrelated to saving the world" thread.

Replies from: childofbaud
comment by childofbaud · 2011-05-15T22:53:18.129Z · LW(p) · GW(p)

A Google search for "using world quality as a free variable when optimizing for other purposes" yields... 0 results.

Though a search for "I don't care about the world" yields a respectable 58,600,000. If -cup is added to the query (to exclude World Cup results), the count drops by 10,000,000 or so.

In somewhat related news, I'm starting to doubt my own heuristic.

Replies from: MixedNuts
comment by MixedNuts · 2011-05-15T22:59:17.835Z · LW(p) · GW(p)

Searching for "i want * more than anything in the world" -"to save the world" yields 17,700,000 results.

comment by Carwajalca · 2011-05-18T14:04:24.798Z · LW(p) · GW(p)

(Possibly weak) evidence for this is the underfunding of organizations which actually appear to be trying to save the world (specifically GiveWell's charities and the SIAI).

I'd say the reason for the underfunding is more that the organizations are relatively unknown, not that most people wouldn't prefer the world to be saved. E.g. when walking to the university I meet Greenpeace and Amnesty representatives recruiting new members almost daily, but no one representing SIAI or GiveWell. What are the latter two doing to make themselves more known to the public?

comment by scientism · 2011-05-15T17:28:12.038Z · LW(p) · GW(p)

I want to save the world.

comment by EchoingHorror · 2011-05-15T07:18:54.829Z · LW(p) · GW(p)

"Save the world" is a subset of "improve the world" where saving is improving by a lot in a way that the world really needs it. "Improving the world" can mean settling for a smaller improvement, but probably doesn't mean "improving in every way so it will include saving the world". If people stop wanting to "save the world" because they weighted their desire to improve it in lesser ways anywhere near their desire to save it, to sound less egotistical, to avoid the applause light, or to dissociate from people who think they're saving the world by raising awareness or making a list of people who say they want to save the world, or whatever, and the world doesn't get saved because of it, I will be sad.

comment by shokwave · 2011-05-14T14:45:12.885Z · LW(p) · GW(p)

It's enough just to say, "I want to save the world".

I want to save the world.

comment by atucker · 2011-05-14T12:32:24.565Z · LW(p) · GW(p)

But maybe some of us can find somewhere to talk that's a little quieter.

I guess we could have an IRC meetup or something? To talk about what specifically we're doing, and what we can help each other with.

Replies from: Giles
comment by Giles · 2011-05-15T00:10:29.640Z · LW(p) · GW(p)

OK - I'll be hanging around on #rationaltruism on freenode. As soon as I find out when I'm not going to be busy, I'll suggest a time for a meetup there.

comment by timtyler · 2011-05-14T08:12:13.882Z · LW(p) · GW(p)

I think this sort of thing is quite common:

Rescuing things is widely regarded as being good - and the whole world acts as a superstimulus.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2011-05-14T11:45:42.282Z · LW(p) · GW(p)

Comparing a disliked belief to a religious one has all the universal applicability of repeating what they say in a high-pitched tone of voice.

Replies from: Giles, timtyler
comment by Giles · 2011-05-15T00:58:51.754Z · LW(p) · GW(p)

I think ciphergoth is right in that argument-by-reference-class should be avoided if possible.

I think that timtyler is onto something with the superstimulus thing - there are mundane, reductionist reasons why I might have ended up with the motivations that I do. I had pictured it more as "the result of a peculiar mix of social conditioning and rationalist memes". In evolutionary terms it definitely feels like a "mistake", which is why I wouldn't expect all that many people to be motivated the same way I am (maybe 0.1% of people, and I'm not even sure what to do with those people if they're hostile to rationalist ideas).

But even if I knew the exact cause of my motivations, I wouldn't want to change them.

Replies from: timtyler
comment by timtyler · 2011-05-15T06:47:43.940Z · LW(p) · GW(p)

In evolutionary terms it definitely feels like a "mistake", which is why I wouldn't expect all that many people to be motivated the same way I am

In terms of DNA genes, yes. However, the SAVE THE WORLD meme gets quite a good deal out of it. Budding world-savers often proselytise - resulting in more brains hijacked by the meme. It seems to be a case of meme-evolution outstripping the defenses of the natural memetic immune system.

comment by timtyler · 2011-05-14T12:24:47.710Z · LW(p) · GW(p)

I think religions have by far the most extensive set of prior claims relating to trying to save large numbers of people - or the world. Comparisons seem inevitable.

In the past, most with such beliefs have been delusional - suffering from hubris - and have subsequently been proclaimed false messiahs. This raises the issue of how best to avoid that fate.

comment by ata · 2011-05-14T06:05:52.060Z · LW(p) · GW(p)

I second Alicorn's wording.

comment by Giles · 2011-05-14T04:38:34.097Z · LW(p) · GW(p)

Post any "meta" (i.e. anything that's not "I want to save the world") under here to keep things tidy. Thanks.

Replies from: nhamann, Giles
comment by nhamann · 2011-05-14T06:46:32.419Z · LW(p) · GW(p)

"Save the world" has icky connotations for me. I also suspect that it's too vague for there to be much benefit to people announcing that they would like to do so. Better to discuss concrete problems, and then ask who is interested/concerned with those problems and who would like to try to work on them.

Replies from: Giles
comment by Giles · 2011-05-15T18:26:59.081Z · LW(p) · GW(p)

I hate to say it, but the icky connotations are sort of the point. I'm interested in people who want to save the world enough to overcome the icky factor.

I realise that "Lonely Dissent" is essentially a troll's manifesto, and I apologise. But I'm publicly committing to stop writing trollish LW posts.

comment by Giles · 2011-05-14T04:54:42.173Z · LW(p) · GW(p)

I'll start with a quick clarification:

  • Yes, "saving the world" is deliberately vague. It will mean different things to different people.
  • Saving the world isn't a yes/no thing. Some good outcomes can be better than others. Think of it as a rough utility function.
  • This doesn't imply total altruism; you can want to save the world within the constraints that the rest of your life will allow.
  • To help save the world, you need to be rational. Mainly because it's a really, really hard problem.
Replies from: None
comment by [deleted] · 2011-05-14T10:48:46.921Z · LW(p) · GW(p)

To help save the world, you need to be rational. Mainly because it's a really, really hard problem.

Being irrational doesn't prevent one from stumbling upon some technique necessary for world-saving. It just doesn't concentrate the likelihood of finding it in that direction. See for instance the irrationalist list, or Buckminster Fuller.

comment by AlphaOmega · 2011-05-15T03:18:23.002Z · LW(p) · GW(p)

Well, I just want to rule the world. Wanting to abstractly "save the world" seems rather absurd, particularly when it's not clear that the world needs saving. I suspect that the "I want to save the world" impulse is really the "I want to rule the world" impulse in disguise, and I prefer to be up front about my motives...

Replies from: Giles
comment by Giles · 2011-05-15T05:49:30.840Z · LW(p) · GW(p)

I'm being upfront about my motives. By committing to them publicly I add social pressure to keep me on my desired track.

As to what my unconscious motives might be, well I love my unconscious mind dearly but there are times when it can just go screw itself.

comment by lukstafi · 2011-05-14T08:07:17.368Z · LW(p) · GW(p)

Everyone wants to save something, don't you think?

(ETA: I've realized that my comment isn't helpful.)

comment by DanielLC · 2011-05-14T07:18:53.895Z · LW(p) · GW(p)

As someone who accepts both the doomsday argument and EDT (as opposed to TDT), I don't think the world can be saved.

I want to improve the world.

Replies from: Giles, endoself
comment by Giles · 2011-05-15T00:31:34.746Z · LW(p) · GW(p)

I'm not sure of the predictive value of the doomsday argument but my own thought experiments seem to give a fairly high probability that we're all ultimately doomed (and long before thermodynamics wins out).

So I'm with you: if the world can't be "saved", then I want to achieve some tradeoff between prolonging our existence as much as possible and improving the condition of the world in the remaining time.

Replies from: DanielLC
comment by DanielLC · 2011-05-15T03:24:10.209Z · LW(p) · GW(p)

I am sure of the predictive value of the doomsday argument, but I'm not sure of the predictive value of virtually anything else. Exactly how sure can you be that your thought experiments aren't biased? The galaxy can support about 10^40 people. If there's even a one-in-ten-billion chance that the prediction of doom is wrong, that's an expected 10^30 people. And that's not even getting into the fact that the laws of thermodynamics might be wrong.
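For concreteness, the expected-value arithmetic behind that figure runs as follows (a minimal check; the 10^40 galactic-capacity number is the commenter's own estimate):

$$\mathbb{E}[\text{future people}] = \Pr(\text{doom prediction wrong}) \times N_{\text{capacity}} = 10^{-10} \times 10^{40} = 10^{30}.$$

Even a vanishingly small chance of survival leaves an astronomically large expected payoff, which is the point of the comment.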

comment by endoself · 2011-05-14T16:15:44.877Z · LW(p) · GW(p)

EDT

What about the smoking lesion problem?

Replies from: DanielLC
comment by DanielLC · 2011-05-14T19:37:29.029Z · LW(p) · GW(p)

I suggest arguing about the smoking lesion problem on the article about that problem, or discussing EDT on an article about it.

Replies from: endoself
comment by endoself · 2011-05-14T21:51:32.090Z · LW(p) · GW(p)

Okay; if you reply to a post about the smoking lesion problem or if you know of a post defending EDT then I will discuss it with you there.

comment by Laoch · 2011-05-14T17:19:33.279Z · LW(p) · GW(p)

Saved from what exactly?