Something to Protect

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-01-30T17:52:49.000Z · LW · GW · Legacy · 81 comments

In the gestalt of (ahem) Japanese fiction, one finds this oft-repeated motif:  Power comes from having something to protect.

I'm not just talking about superheroes that power up when a friend is threatened, the way it works in Western fiction.  In the Japanese version it runs deeper than that.

In the X saga it's explicitly stated that each of the good guys draws their power from having someone—one person—whom they want to protect.  Who?  That question is part of X's plot—the "most precious person" isn't always who we think.  But if that person is killed, or hurt in the wrong way, the protector loses their power—not so much from magical backlash, as from simple despair.  This isn't something that happens once per week per good guy, the way it would work in a Western comic.  It's equivalent to being Killed Off For Real—taken off the game board.

The way it works in Western superhero comics is that the good guy gets bitten by a radioactive spider; and then he needs something to do with his powers, to keep him busy, so he decides to fight crime.  And then Western superheroes are always whining about how much time their superhero duties take up, and how they'd rather be ordinary mortals so they could go fishing or something.

Similarly, in Western real life, unhappy people are told that they need a "purpose in life", so they should pick out an altruistic cause that goes well with their personality, like picking out nice living-room drapes, and this will brighten up their days by adding some color, like nice living-room drapes.  You should be careful not to pick something too expensive, though.

In Western comics, the magic comes first, then the purpose:  Acquire amazing powers, decide to protect the innocent.  In Japanese fiction, often, it works the other way around.

Of course I'm not saying all this to generalize from fictional evidence. But I want to convey a concept whose deceptively close Western analogue is not what I mean.

I have touched before on the idea that a rationalist must have something they value more than "rationality":  The Art must have a purpose other than itself, or it collapses into infinite recursion.  But do not mistake me, and think I am advocating that rationalists should pick out a nice altruistic cause, by way of having something to do, because rationality isn't all that important by itself.  No.  I am asking:  Where do rationalists come from?  How do we acquire our powers? 

It is written in the Twelve Virtues of Rationality:

How can you improve your conception of rationality?  Not by saying to yourself, "It is my duty to be rational."  By this you only enshrine your mistaken conception.  Perhaps your conception of rationality is that it is rational to believe the words of the Great Teacher, and the Great Teacher says, "The sky is green," and you look up at the sky and see blue.  If you think:  "It may look like the sky is blue, but rationality is to believe the words of the Great Teacher," you lose a chance to discover your mistake.

Historically speaking, the way humanity finally left the trap of authority and began paying attention to, y'know, the actual sky, was that beliefs based on experiment turned out to be much more useful than beliefs based on authority.  Curiosity has been around since the dawn of humanity, but the problem is that spinning campfire tales works just as well for satisfying curiosity.

Historically speaking, science won because it displayed greater raw strength in the form of technology, not because science sounded more reasonable.  To this very day, magic and scripture still sound more reasonable to untrained ears than science.  That is why there is continuous social tension between the belief systems.  If science not only worked better than magic, but also sounded more intuitively reasonable, it would have won entirely by now.

Now there are those who say:  "How dare you suggest that anything should be valued more than Truth?  Must not a rationalist love Truth more than mere usefulness?"

Forget for a moment what would have happened historically to someone like that—that people in pretty much that frame of mind defended the Bible because they loved Truth more than mere accuracy.  Propositional morality is a glorious thing, but it has too many degrees of freedom.

No, the real point is that a rationalist's love affair with the Truth is, well, just more complicated as an emotional relationship.

One doesn't become an adept rationalist without caring about the truth, both as a purely moral desideratum and as something that's fun to have.  I doubt there are many master composers who hate music.

But part of what I like about rationality is the discipline imposed by requiring beliefs to yield predictions, which ends up taking us much closer to the truth than if we sat in the living room obsessing about Truth all day.  I like the complexity of simultaneously having to love True-seeming ideas, and also being ready to drop them out the window at a moment's notice.  I even like the glorious aesthetic purity of declaring that I value mere usefulness above aesthetics.  That is almost a contradiction, but not quite; and that has an aesthetic quality as well, a delicious humor.

And of course, no matter how much you profess your love of mere usefulness, you should never actually end up deliberately believing a useful false statement.

So don't oversimplify the relationship between loving truth and loving usefulness.  It's not one or the other.  It's complicated, which is not necessarily a defect in the moral aesthetics of single events.

But morality and aesthetics alone, believing that one ought to be "rational" or that certain ways of thinking are "beautiful", will not lead you to the center of the Way.  It wouldn't have gotten humanity out of the authority-hole.

In Circular Altruism, I discussed this dilemma:  Which of these options would you prefer:

  1. Save 400 lives, with certainty
  2. Save 500 lives, 90% probability; save no lives, 10% probability.

You may be tempted to grandstand, saying, "How dare you gamble with people's lives?"  Even if you, yourself, are one of the 500—but you don't know which one—you may still be tempted to rely on the comforting feeling of certainty, because our own lives are often worth less to us than a good intuition.

But if your precious daughter is one of the 500, and you don't know which one, then, perhaps, you may feel more impelled to shut up and multiply—to notice that you have an 80% chance of saving her in the first case, and a 90% chance of saving her in the second.

And yes, everyone in that crowd is someone's son or daughter.  Which, in turn, suggests that we should pick the second option as altruists, as well as concerned parents.

My point is not to suggest that one person's life is more valuable than the lives of 499 other people.  What I am trying to say is that more than your own life has to be at stake, before a person becomes desperate enough to resort to math.
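The 80% and 90% figures follow from a short calculation.  Here is a minimal sketch in Python, assuming the rescued subset is drawn uniformly at random from the 500 (the `p_survival` helper is illustrative, not from the post):

```python
def p_survival(saved, total, p_rescue_works):
    """Chance that one particular person among `total` survives:
    the rescue must succeed, and that person must be among the
    `saved` individuals chosen uniformly from the `total`."""
    return p_rescue_works * (saved / total)

# Option 1: certainly save 400 of the 500.
option1 = p_survival(400, 500, 1.0)   # 0.8

# Option 2: 90% chance of saving all 500.
option2 = p_survival(500, 500, 0.9)   # 0.9

print(option1, option2)  # 0.8 0.9
```

So for any single person in the crowd—your daughter included—option 2 gives the better odds, which is the "shut up and multiply" point.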

What if you believe that it is "rational" to choose the certainty of option 1?  Lots of people think that "rationality" is about choosing only methods that are certain to work, and rejecting all uncertainty.  But, hopefully, you care more about your daughter's life than about "rationality".

Will pride in your own virtue as a rationalist save you?  Not if you believe that it is virtuous to choose certainty.  You will only be able to learn something about rationality if your daughter's life matters more to you than your pride as a rationalist.

You may even learn something about rationality from the experience, if you are already far enough grown in your Art to say, "I must have had the wrong conception of rationality," and not, "Look at how rationality gave me the wrong answer!"

(The essential difficulty in becoming a master rationalist is that you need quite a bit of rationality to bootstrap the learning process.)

Is your belief that you ought to be rational, more important than your life?  Because, as I've previously observed, risking your life isn't comparatively all that scary.  Being the lone voice of dissent in the crowd and having everyone look at you funny is much scarier than a mere threat to your life, according to the revealed preferences of teenagers who drink at parties and then drive home.  It will take something terribly important to make you willing to leave the pack.  A threat to your life won't be enough.

Is your will to rationality stronger than your pride?  Can it be, if your will to rationality stems from your pride in your self-image as a rationalist?  It's helpful—very helpful—to have a self-image which says that you are the sort of person who confronts harsh truth.  It's helpful to have too much self-respect to knowingly lie to yourself or refuse to face evidence.  But there may come a time when you have to admit that you've been doing rationality all wrong.  Then your pride, your self-image as a rationalist, may make that too hard to face.

If you've prided yourself on believing what the Great Teacher says—even when it seems harsh, even when you'd rather not—that may make it all the more bitter a pill to swallow, to admit that the Great Teacher is a fraud, and all your noble self-sacrifice was for naught.

Where do you get the will to keep moving forward?

When I look back at my own personal journey toward rationality—not just humanity's historical journey—well, I grew up believing very strongly that I ought to be rational.  This made me an above-average Traditional Rationalist a la Feynman and Heinlein, and nothing more.  It did not drive me to go beyond the teachings I had received.  I only began to grow further as a rationalist once I had something terribly important that I needed to do.  Something more important than my pride as a rationalist, never mind my life.

Only when you become more wedded to success than to any of your beloved techniques of rationality, do you begin to appreciate these words of Miyamoto Musashi:

"You can win with a long weapon, and yet you can also win with a short weapon.  In short, the Way of the Ichi school is the spirit of winning, whatever the weapon and whatever its size."
        —Miyamoto Musashi, The Book of Five Rings

Don't mistake this for a specific teaching of rationality.  It describes how you learn the Way, beginning with a desperate need to succeed.  No one masters the Way until more than their life is at stake.  More than their comfort, more even than their pride.

You can't just pick out a Cause like that because you feel you need a hobby.  Go looking for a "good cause", and your mind will just fill in a standard cliche.  Learn how to multiply, and perhaps you will recognize a drastically important cause when you see one.

But if you have a cause like that, it is right and proper to wield your rationality in its service.

To strictly subordinate the aesthetics of rationality to a higher cause, is part of the aesthetic of rationality.  You should pay attention to that aesthetic:  You will never master rationality well enough to win with any weapon, if you do not appreciate the beauty for its own sake.

81 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Unknown · 2008-01-30T19:58:44.000Z · LW(p) · GW(p)

What was it? AI?

comment by manuelg · 2008-01-30T20:04:00.000Z · LW(p) · GW(p)

I get an uncomfortable feeling, Eliezer, that this work is to ultimately lead to a mechanism to attract:

  • people of libertarian bent

  • people interested in practically unbounded longevity of consistent, continual consciousness

and also lead to a mechanism to tar people disinclined to those two goals; tar them with the label "sentimentally irrational".

Rationality to me is simply a tool. I would have absolutely no confidence in it without the ongoing experiences of applying it iteratively, successfully to specific goals.

And of course, no matter how much you profess your love of mere usefulness, you should never actually end up deliberately believing a useful false statement.

I haven't yet needed to "deliberately believe a useful false statement" (to my knowledge), but I wouldn't be particularly disturbed if I tried to, and found it repeatedly successful. Another tool for my tool belt.

Right now I am having some success with modeling the world over the conditions I care about with:

  • scientific laws (including information theory)

  • mathematics

  • groups of causality graphs, for the same phenomena, in competition

  • specific causality graphs

  • naive Bayesian

  • straightforward use of Bayes' theorem

  • frequentist probability and statistics

  • discrete probability

  • logic

(causality graphs considered can include relations defined by simulation, and all other tools listed. Whatever it is, shove it into a causality graph. I haven't found it useful to restrict the use of anything in a causality graph, particularly if they are forced to compete over the ability to be consistent with past data and predict future results.)

(The list above is somewhat ordered over more applicable to specific situations, to less applicable to specific situations. I attach the lowest confidence to any specific causality graph, more confidence with the graphs in aggregate in competition. I attach more confidence in frequentist analysis over good data, over Bayesian, but Bayesian is applicable in more circumstances.)
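Of the tools in the list above, the "straightforward use of Bayes' theorem" is the easiest to make concrete.  A minimal sketch, with illustrative numbers invented for a plant-floor setting (nothing here is from the comment itself):

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """P(H | E) by Bayes' theorem: weigh the prior by the likelihoods."""
    joint_h = prior * p_evidence_given_h
    joint_not_h = (1 - prior) * p_evidence_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Hypothetical defect alarm: 1% base rate of defects,
# 90% true-positive rate, 5% false-positive rate.
p_defect_given_alarm = posterior(0.01, 0.9, 0.05)
print(p_defect_given_alarm)  # ≈ 0.154
```

Even with a 90% hit rate, the low base rate keeps the posterior well under one in five—the standard reason frequentist counts over good data and Bayesian updating have to be kept in competition, as the comment suggests.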

I have to deal with finite resource allocation in a manufacturing plant.  Where else to use these tools?  Possibly in the opportunity from celebrating the differences in all the people working in the plant.

I am often confused by your writing, because I don't see where you have "skin in the game". Where are you exercising your tools of rationality?

Is it all just to make the world slightly more hospitable to libertarians interested in life extension? (No negative judgment if that is the case.)

(Sorry to beg your indulgence of a long post)

comment by HalFinney · 2008-01-30T20:05:30.000Z · LW(p) · GW(p)

Science succeeded, and continues to succeed, because it is useful; and similarly for rationalism.  But one of the critiques of rationalism and of the overcoming-bias program is that it is sometimes counterproductive.  The unbiased tend to be unhappy and/or insane.  If someone's goals are to be happy and successful in life, he does best not to be fully rational.  Irrationality is the most useful policy if these are your goals.

Your argument suggests that this is true only because this is setting the goalposts too low. For someone who merely seeks happiness, yes, irrationality is in order. But if someone's goals are much higher - if lives are at stake, perhaps even the lives of all humanity, then irrationality no longer works best. In that case, he must follow a path of strict rationality as closely as possible, because the stakes are so high.

However it could be argued that this is not always the case, that high stakes may nevertheless require a degree of irrationality. Rationality is useful for getting at the truth; but irrationality may be useful in persuading and motivating others to help. Successful leaders are notoriously irrational, and if your project is big enough, leadership will be a necessary ingredient for success.

Perhaps a solution is to split one's efforts into two pieces: a rational part, which ruthlessly seeks the truth regardless of inhibitions and discomfort; and an irrational part, which takes the core results from rational analysis, dresses them up in attractive lies, and sells them enthusiastically to the larger world. In fact I would suggest that many successful enterprises have been built on a partnership with this structure: the creative genius who works behind the scenes, and the leader who is the public face of the endeavor and who excels at presentation. You might consider a similar arrangement for your own project.

comment by Michael_G. · 2008-01-30T20:11:41.000Z · LW(p) · GW(p)

I rarely post, only read in hopes of learning. Today, I comment: I appreciate the beauty of this post.

Thank you, Eliezer.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-01-30T20:27:28.000Z · LW(p) · GW(p)

I am often confused by your writing, because I don't see where you have "skin in the game". Where are you exercising your tools of rationality?

If I'd gone ahead and said that within the post, it would've transformed a piece on rationality into overt propaganda, destroying its internal aesthetics.  Read my website.

comment by Caledonian2 · 2008-01-30T23:34:30.000Z · LW(p) · GW(p)

I have touched before on the idea that a rationalist must have something they value more than "rationality"

What a terrible idea... then whenever rationality comes in conflict with that thing, rationality will be discarded.

We already see lots and lots of this behavior. It's the human norm, in fact: use rationality as a tool as long as it doesn't threaten X, then discard it when it becomes incompatible with X.

comment by Anon10 · 2008-01-30T23:42:47.000Z · LW(p) · GW(p)

Perhaps I am one of the "sentimentally irrational," but I would pick the 400 certain lives saved if it were a one-time choice, and the 500 @ 90% if it were an iterated choice I had to make over, and over again. In the long run, probabilities would take hold, and many more people would be saved. But for a single instance of an event never to be repeated? I'd save the 400 for certain.

Your 80% and 90% figures don't really add up either.  You don't describe how many people in total will die, regardless of your decision.  If the max death number possible from this catastrophe is 500, then your point is valid.  But what if it were 100 million, or even better, all of humanity?  Now, the difference in chance of saving your loved one via either strategy is vanishingly small, and you are left with a 90% chance or a 100% chance of saving humanity as a whole.  It's exactly the same as the situation you describe above, but it seems the moral math reverses itself.  You need to more fully specify your hypothetical situations if you wish to make a convincing point.

comment by JulianMorrison · 2008-01-31T01:29:40.000Z · LW(p) · GW(p)

Caledonian, I think you're misreading him. He's not saying: the cause is the one thing you never think rationally about. He's saying: the cause is good (rationally good) and to protect/preserve it you have to pull yourself into conformance with the real world, because that's where the action is. To achieve that you have to hold up what you (perhaps mistakenly) think of as "reason" against the real world, and be prepared to re-evaluate if it doesn't work. What your re-evaluation seeks is better techniques of reason - not to throw reason away.

comment by Caledonian2 · 2008-01-31T01:59:21.000Z · LW(p) · GW(p)

Caledonian, I think you're misreading him. He's not saying: the cause is the one thing you never think rationally about. He's saying: the cause is good (rationally good) and to protect/preserve it you have to pull yourself into conformance with the real world, because that's where the action is.

I think you're misreading him, substituting a reasonable argument for the rather bizarre things he says.

Rationality by its nature cannot be only a means towards an end.

comment by JulianMorrison · 2008-01-31T02:09:13.000Z · LW(p) · GW(p)

"Rationality by its nature cannot be only a means towards an end."

Rationality is conformance to reality. You can conform to reality for a cause. (You're saying, you can't mold reality to your cause - I agree, but that's not what he was meaning.) He was meaning that people have thought themselves rational when applying formal, skillful, pedigreed academic techniques that DON'T WORK, such as Jesuit style casuistry. So you have to hold the technique up against reality. You won't do that if you put the technique first by saying "I serve reason", because that morphs in your mind into "I serve Jesuit casuistry" or whatever. It blithely assumes your all-too-human technology of achieving reason works - and it might not.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-01-31T02:14:00.000Z · LW(p) · GW(p)

Julian, Caledonian is a well-known troll on OB. We've decided against censorship for now, but you might not want to waste too much time. I generally don't respond to Caledonian unless I see someone else agreeing with him.

comment by Wendy_Collings · 2008-01-31T02:26:15.000Z · LW(p) · GW(p)

I totally agree with "Anon", and others who made similar points in the Circular Altruism post. Context matters! Is it a one-time choice, or an iterated choice? Is there an upper limit to the number of deaths, or no limit? Are the 500 the number of people on the sinking ship/last people on planet earth, or possible victims from a much larger pool? You can only do the math and make a rational decision when you have ALL the numbers from the relevant context.

The first steps of rationality lie not in separating problems from their context, but in determining what context is relevant.

comment by Caledonian2 · 2008-01-31T03:26:04.000Z · LW(p) · GW(p)

You won't do that if you put the technique first by saying "I serve reason", because that morphs in your mind into "I serve Jesuit casuistry" or whatever.

I agree with this point, but that's exactly what Eliezer stated he was in favor of: serving something else and merely using rationality as a means toward that end while it's convenient to do so.

It doesn't do any good to avoid making an implicit error by explicitly making that error instead. Certainly we need to compare our thinking to a fundamental basis, but the goal we're seeking can't be that basis. Rationality is about always checking our thinking against reality directly, and using that to evaluate not only our methods of reaching our goals but the nature of our goals themselves.

If you adopt rationality merely because you want to use it to attain your ends, what happens if you discover that your ends aren't compatible with it? (And if that's really what you're doing, how did you know to adopt rationality in the first place? Just keep trying random stuff until you happen to stumble into the correct meta-strategy by chance? I think rationality has to be the starting point, not something picked up along the way.)

comment by Nominull3 · 2008-01-31T03:30:55.000Z · LW(p) · GW(p)

I don't have anything desperately important to me, and you say I'm not allowed to just pick something. Given this, what am I supposed to do, to become more rational? Am I just doomed? I really desperately want to believe true things and not false things, but you say that's not good enough.

Replies from: Kenny, Mycroft65536, 3p1cd3m0n
comment by Kenny · 2013-02-01T00:22:17.980Z · LW(p) · GW(p)

You're not doomed; you may just not be terribly motivated.

comment by Mycroft65536 · 2013-08-16T06:02:58.648Z · LW(p) · GW(p)

Explore the world. Meet people, read books, find blogs like this one. Hopefully something will inspire you.

comment by 3p1cd3m0n · 2015-01-15T01:53:00.024Z · LW(p) · GW(p)

Decreasing existential risk isn't incredibly important to you? Could you explain why?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-01-31T03:39:59.000Z · LW(p) · GW(p)

Good question, Nominull. Unfortunately I lack the ability to answer your question from personal experience. Mine just fell into my lap.

But is believing true things what you most desperately want, in all the world?

comment by TGGP4 · 2008-01-31T05:24:41.000Z · LW(p) · GW(p)

Caledonian, I gather Eliezer put "rationality" in quotes because people may believe they are committed to rationality when in fact they are not.  If they have a goal which is contingent on rationality, that will keep them from straying from the path.

comment by Caledonian2 · 2008-01-31T05:54:05.000Z · LW(p) · GW(p)

What he said immediately after the part you mention was: "The Art must have a purpose other than itself, or it collapses into infinite recursion"

He wasn't talking about pseudo-rationality. When he talks about "The Art", he's talking about rationality.

And he's wrong: truth points to itself.

comment by Unknown · 2008-01-31T06:23:24.000Z · LW(p) · GW(p)

Anon: do you suggest that others follow your policy as well? Then when many people have individually made isolated choices like that, far fewer lives will have been saved. And in the whole history of the world, choices like that must have been made many times. Why does it matter whether it is you who are repeating the choice or other people?

The question about whether the 500 are that last people in the world is adding other utilities into the issue, such as preserving the human race, and so on. In that case you have a different comparison; naturally, you may have to consider other factors besides the utility of the lives. But as long as you consider only the lives, Eliezer is right.

comment by mitchell_porter2 · 2008-01-31T06:48:46.000Z · LW(p) · GW(p)

Caledonian: "I think rationality has to be the starting point."

Can you expand on this? A rationalistic moral relativist might say that actions require goals, ultimate goals are arbitrary, and so rationality cannot be the starting point there. In the real world, by the time one is able to entertain ideas like 'choosing to be more rational', you're already going to have goals, preferences, ideas about how you should live your life. So it could be countered that 'rationality' never has to supply everything; its purpose will largely be to critique existing purposes, order them by significance, or evaluate new possibilities. Say something more about what you think the role of rationality should be in developing a morality, and about the particular powers it has to fulfil that role.

comment by GreedyAlgorithm · 2008-01-31T08:08:17.000Z · LW(p) · GW(p)

Anon, Wendy:

Certainly finding out all of the facts that you can is good. But rationality has to work no matter how many facts you have. If the only thing you know is that you have two options:

  1. Save 400 lives, with certainty
  2. Save 500 lives, 90% probability; save no lives, 10% probability.

then you should take option 2.  Yes, more information might change your choice.  Obviously.  And not interesting.  The point is that given this information, rationality picks choice 2.

comment by Unknown · 2008-01-31T11:31:09.000Z · LW(p) · GW(p)
  1. Save 400 lives, with certainty
  2. Save 500 lives, 90% probability; save no lives, 10% probability.

i.e.

  1. Save 4 lives, with certainty
  2. Save 5 billion lives, 0.00000009% probability; save no lives, 99.99999991% probability.

Any takers for #2? I seem to remember Ben Jones saying he would choose #1 in a case similar to the second case.
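For reference, the expected numbers of lives saved in the original and scaled-down dilemmas can be checked exactly (a sketch; `Fraction` is used only to keep the tiny probability free of floating-point noise):

```python
from fractions import Fraction

# Original dilemma: 400 lives certain vs. 500 lives at 90%.
ev_original_1 = 400
ev_original_2 = Fraction(9, 10) * 500              # expected 450

# Scaled dilemma: 4 lives certain vs. 5 billion at 0.00000009% (= 9e-10).
ev_scaled_1 = 4
ev_scaled_2 = Fraction(9, 10**10) * 5_000_000_000  # expected 9/2 = 4.5

print(ev_original_2, ev_scaled_2)  # 450 9/2
```

The scaling preserves the expected-value ratio exactly (450/400 = 4.5/4), which is why consistency forces the same answer in both cases—even though intuition rebels far harder against the second.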

Formerly, I think I would have chosen #2 in the first case and #1 in the second. But Eliezer has converted me. Now I choose #2 in both cases. But would he do that himself? Consider:

"Perhaps I am one of the 'sentimentally irrational,' but I would pick the 400 certain lives saved if it were a one-time choice, and the 500 @ 90% if it were an iterated choice I had to make over, and over again. In the long run, probabilities would take hold, and many more people would be saved. But for a single instance of an event never to be repeated? I'd save the 400 for certain." (Anon, above)

"If the probabilities of various scenarios considered did not exactly cancel out, the AI's action in the case of Pascal's Mugging would be overwhelmingly dominated by whatever tiny differentials existed in the various tiny probabilities under which 3^^^^3 units of expected utility were actually at stake.

You or I would probably wave off the whole matter with a laugh, planning according to the dominant mainline probability: Pascal's Mugger is just a philosopher out for a fast buck.

But a silicon chip does not look over the code fed to it, assess it for reasonableness, and correct it if not. An AI is not given its code like a human servant given instructions. An AI is its code. What if a philosopher tries Pascal's Mugging on the AI for a joke, and the tiny probabilities of 3^^^^3 lives being at stake, override everything else in the AI's calculations? What is the mere Earth at stake, compared to a tiny probability of 3^^^^3 lives?

How do I know to be worried by this line of reasoning? How do I know to rationalize reasons a Bayesian shouldn't work that way?" (Eliezer Yudkowsky, Pascal's Mugging)

Who sees the similarity? Eliezer no doubt thinks that Anon is biased toward certainty, but so is he: he simply has less of the bias.

So I hereby retract my argument against voting, Pascal's Mugging, and Pascal's Wager. In the particular Mugging we discussed, there may have been anthropic reasons to make it proportionally improbable. But without such reasons, it should be accepted.

Replies from: Polymeron
comment by Polymeron · 2011-05-04T10:45:55.948Z · LW(p) · GW(p)

It's not a matter of bias toward certainty; accepting Pascal's Mugger's terms can be conclusively demonstrated to be a losing strategy. Remember, the purpose is to win. That would imply that "rationality" that complies with the Mugger is not rational after all, which means rethinking the whole thing.

Having said that, I haven't been able to formulate a response to Pascal's Mugging myself, so I might be wrong-

...Except that in the process of writing this right now, I think I might have! I need to think this a little further.

comment by Alexandros_Marinos · 2008-01-31T13:46:19.000Z · LW(p) · GW(p)

"It takes visceral panic, channeled through cold calculation, to cut away all the distractions." - this just made it to my quotes file.

If I understand Eliezer's point correctly in terms of the map/territory analogy, what he says is that having somewhere to go and actually needing to put your map to use will motivate you to make that map as accurate as possible, if you care about your destination more than you 'believe in' the current iteration of your map and/or the techniques used to derive it.

comment by Caledonian2 · 2008-01-31T14:54:30.000Z · LW(p) · GW(p)

A rationalistic moral relativist might say that actions require goals, ultimate goals are arbitrary, and so rationality cannot be the starting point there.

Lots of things act without having any sort of goals. Does fire have a goal of reducing high-energy compounds into oxidized components and free energy? No, but it does it anyway.

You can limit 'action' to intentional events only, I suppose.

However, how does declaring that goals are arbitrary rule out assertions about necessary starting points?

So it could be countered that 'rationality' never has to supply everything; its purpose will largely be to critique existing purposes, order them by significance, or evaluate new possibilities.

If the goals already developed are incompatible with each other, rationality isn't going to help much. If they're incompatible with rationality, it really isn't going to help. But no helping is possible.

Say something more about what you think the role of rationality should be in developing a morality, and about the particular powers it has to fulfil that role.

Rationality is required to form a coherent model (however incomplete or imperfect) of the world. To take an action with the intention of bringing about a specific result requires a coherent model. Ergo...

An incoherent actor can't be said to have any goals at all.

comment by Zubon · 2008-01-31T16:27:33.000Z · LW(p) · GW(p)

Formerly, I think I would have chosen #2 in the first case and #1 in the second. But Eliezer has converted me. Now I choose #2 in both cases. But would he do that himself?

Isn't that implicitly what he does for a living? Eliezer could become a firefighter or emergency medical technician, or work for clean drinking water in rural Africa, with a near-certainty of preventing several deaths in the next year. Meanwhile, there is a very small chance of someone creating a non-Friendly AI in the next year. We can argue about the probabilities (of the problem arising, of successfully preventing it), but Eliezer has already chosen the existential threat.

comment by Z._M._Davis · 2008-01-31T18:09:02.000Z · LW(p) · GW(p)

"So I hereby retract my argument against voting, Pascal's Mugging, and Pascal's Wager. In the particular Mugging we discussed, there may have been anthropic reasons to make it proportionally improbable. But without such reasons, it should be accepted."

I'm certainly glad you think so, Unknown, because I was just contacted by the Dark Lords of the Matrix. It turns out that we are living in a simulation. I have no idea what the physics of the world outside are like, but they're claiming that unless you personally send $100 to SIAI right now, they're going to put one dust speck in the eye of each of BusyBeaver(BusyBeaver(BusyBeaver(3^^^^^^^^^^^^^^^^^^^3))!!)! people.

Get out your checkbook, quickly, before it's too late!

comment by Anon10 · 2008-01-31T18:49:55.000Z · LW(p) · GW(p)

(same anon from above who asked about the context of the 400/500 problem being an issue)

In response to GreedyAlgorithm who said:

Certainly finding out all of the facts that you can is good. But rationality has to work no matter how many facts you have. If the only thing you know is that you have two options:

  1. Save 400 lives, with certainty
  2. Save 500 lives, 90% probability; save no lives, 10% probability.

then you should take option 2. Yes, more information might change your choice. Obviously. And not interesting. The point is that given this information, rationality picks choice 2.
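GreedyAlgorithm's claim is a bare expected-value comparison; a minimal sketch, assuming utility is simply linear in the number of lives saved:

```python
# Expected lives saved under each option, assuming utility is
# linear in lives saved (the assumption the whole thread debates).
p_success = 0.9

option_1 = 400 * 1.0                               # certainty: 400 expected lives
option_2 = 500 * p_success + 0 * (1 - p_success)   # gamble: 450 expected lives

assert option_2 > option_1  # given only this information, pick option 2
```

On the linear-utility assumption the gamble wins by 50 expected lives; every objection downthread is an objection to that assumption, not to the arithmetic.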

While I agree with your constrained view of the problem and its analysis, you are trying to have your cake and eat it too. In such a freed-from-context view, this is (to use your own words) "not interesting". It's like asserting that "4.5 is greater than 4" and that since we wish to pick the greater number, the rationalist picks 4.5. True as far as it goes, but trivial and of no consequence.

Eliezer brought in the idea of something more valuable than your own life, say that of your child. By stepping outside the cold, hard calculus of mere arithmetic comparisons he made a good point (we are still discussing it), but he opened the door for me to do the same. I see your child, and raise you "all of humanity".

Either we are discussing a tautological, uninteresting, degenerate case which reduces down to "4.5 is greater than 4, so to be rational you should always pick 4.5" (which, I agree with, but is rather pointless) or we are discussing the more interesting question of the intersection between morality and rationality. In that case, I assert bringing "extra" conditions into the problem matters very much.

If "rationality has to work no matter how many facts you have" [Greedy's words] (which I agree with) then you must grant me that it should provide consistent results. To make the problem "interesting" Eliezer brought in the "extra" personal stake of a loved family member, and came to his rationalist conclusion, pointing out why you'd want to "take a chance" given that you don't know if your daughter is in the certain group or might be saved as one of the "chance" group. I merely followed his example. His daughter may still be in the certain group or not (same situation) but I've just added everyone else's daughter into the pot. I don't see how these are fundamentally different cases, so rationality should produce the same answer, no?

comment by Unknown · 2008-01-31T19:12:17.000Z · LW(p) · GW(p)

Z. M. Davis, given the existence of that many people, and given that threat, the probability that I personally would be the one threatened must be multiplied by one over the number of people, since it could have been anyone else. So the expected disutility from your mugging is one dust speck multiplied by the probability that the Matrix scenario is actually true. This probability is very low, and even if it were unity, the disutility of one dust speck isn't going to get me to pay $100.
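Unknown's anthropic discount can be sketched with stand-in numbers; every value below is hypothetical, chosen only to show the cancellation, not taken from the thread:

```python
# Hypothetical stand-ins; none of these values come from the thread.
n_people = 1e30            # the mugger's enormous number of potential victims
p_matrix = 1e-20           # prior that the Matrix story is true
p_its_me = 1.0 / n_people  # anthropic factor: chance *I* am the one targeted

# The enormous threat cancels against the enormous population, leaving
# roughly one dust speck's worth of disutility times p_matrix.
expected_specks = n_people * p_matrix * p_its_me
```

Because the speck count and the 1/n anthropic factor cancel, the residual expected disutility is on the order of p_matrix, nowhere near $100.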

So again, I said "without such reasons, it should be accepted." But I have the reasons in this case.

Zubon, of course, Eliezer might well take my second gamble. I was only speaking in preparation for what lay ahead: Anon doesn't want to gamble with human lives if there's a small chance of failure. Eliezer may be willing to do this, but once the chance of success becomes extremely small (much smaller than 1 in a billion), as in the Mugging case, he ignores the expected utility, thus falling into exactly the same sort of irrationality as Anon. In relation to this, it is significant that he admitted that he would reject the Mugging even if he had no reason to think that the expected utility of rejecting it was greater than the expected utility of accepting it.

Or in other words: expected utility must equal utility times probability, no matter how small the probability is.

comment by Z._M._Davis · 2008-01-31T19:56:50.000Z · LW(p) · GW(p)

It's probably just that I'm stupid, but I don't understand the anthropic solution to Pascal's Mugging. Why does it matter that other people could have been asked? What if it were stipulated that the mugger threatens everyone?

Maybe I should actually study Kolmogorov complexity before trying to grapple with such matters.

comment by rs · 2008-01-31T21:49:33.000Z · LW(p) · GW(p)

Regarding the dilemma posed in Circular Altruism, what should we do? When forced to "Shut up and multiply", we have forgone our intuitions and picked the choice based upon our mathematics. However, we are not just overcoming our own intuitions, but also the intuitions of everyone who does not simply "Shut up and multiply". We are held accountable not by those who know the math, but by those who have intuitions like ourselves.

If we save everyone, we are heroes. If we do not, we are held accountable not for the math, but for the very intuition we had to overcome multiplied by as many people who are now trying to hold us accountable.

So at the very least, we should add ourselves as the 501st person who might die in the second case. The price we pay for the burden of rationality!

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-01-31T21:53:48.000Z · LW(p) · GW(p)

RS, if that really bothers you, you haven't found your something to protect yet.

comment by rs · 2008-01-31T22:01:40.000Z · LW(p) · GW(p)

So, is your point that we need a cause against which to evaluate the success of our mathematics? That perhaps this sort of feedback that, presumably, you encounter on a daily basis, is something that does not come through rationality itself, but through the very real feedback of what you have chosen to protect?

I guess my previous post was a reflection that I am just a budding rationalist, and also that my skills have not been sharpened against the proper stone.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-01-31T22:46:41.000Z · LW(p) · GW(p)

So, is your point that we need a cause against which to evaluate the success of our mathematics? That perhaps this sort of feedback that, presumably, you encounter on a daily basis

I'm not going to get feedback on my final success or failure for, oh, probably at least another 10 years.

My point, rather, was that your post illustrated very clearly why rationality comes from having something to protect - you thought of doing something rational, but worried about the other people whose intuitions differed from yours, and what they might think of you. So that worry is a force binding you to the old way of thinking.

But if the thing you were protecting was far more important than what anyone thought of you, that wouldn't slow you down. This isn't about iconoclasm - it's about an inertial drag exerted by all the little fears and worries, an inertial drag of the way that you or others previously did things; the motivating force has to be more powerful than that, or you won't move.

Replies from: Grognor, pato-lubricado
comment by Grognor · 2011-09-27T07:34:53.938Z · LW(p) · GW(p)

I was going to post an issue I had with this article, personally.

What is most important to me is my intent to live for a very, very long time. Assuming I do better on average, I will end at a very high place! But how can living forever be more important to me than my own life? It obviously can't. But I think I see it now; it's more important to me than anything else.

Who cares what anyone thinks of my desire? I'll do whatever it takes, and I don't mean I'll give it a shot!

Second in importance to me is giving everyone else a chance to live a long time as well. I can't say that this is more important to me than my own life, but it coincides with the first one anyway.

comment by Pato Lubricado (pato-lubricado) · 2020-09-07T10:39:02.515Z · LW(p) · GW(p)

It's been 13 years, what's the feedback?

comment by Wendy_Collings · 2008-01-31T23:27:26.000Z · LW(p) · GW(p)

"The point is that given this information, rationality picks choice 2." - Posted by: GreedyAlgorithm

Sorry, no. Given this information, rationality says that there is not enough information to make an appropriate decision, and demands to know the context. If contextual information isn't available, rationality will say that either option 1 or 2 may be right, depending on circumstances.

Rationality never dismisses context as irrelevant just because it isn't known. If unknown factors make the right answer uncertain, then you must accept that it is uncertain.

Context can change what you're trying to achieve. Many people seem to assume that the point (re Circular Altruism problems) is to save as many lives as possible, but this might have to be balanced with other goals - e.g. setting a limit to acceptable risk (as in not risking destruction of the entire human population, whatever their number), or spreading risk instead of marking certain people for death (as in putting the last few people from a sinking ship in the lifeboat, not leaving them behind to make a crowded lifeboat safer).

Making assumptions is one of the dangerous pitfalls for rational thinkers. So is a reluctance to say "I don't know the answer" when appropriate.

comment by George_Weinberg2 · 2008-02-01T01:02:32.000Z · LW(p) · GW(p)

For some reason this post reminds me of the Buddhist parable asceticism now, nymphs later.

I don't think it's all that uncommon to begin cultivating an art for some specific purpose, proceed to cultivate it largely for its own sake, and eventually to abandon the original purpose.

comment by tcpkac · 2008-02-01T08:30:03.000Z · LW(p) · GW(p)

Under Many Worlds, aren't you condemned, whatever you do or don't do, to there being a number tending to infinity of worlds where what you want to protect is protected, and a number tending to infinity where it is not?

comment by mitchell_porter2 · 2008-02-02T03:43:56.000Z · LW(p) · GW(p)

Caledonian: Let's distinguish between the aesthetics of rationality and the pragmatics of rationality. Is my model of the world consistent, do my goals make sense - that's pragmatics. Aesthetics is by comparison nebulous and subtle, but perhaps it encompasses both admiration for the lawlike nature of reality and self-admiration for one's own relationship to it. :-)

It seems to me that you are taking issue with the idea that the pragmatics of rationality should be trumped by a higher cause. This essay says nothing about that. It says, first, that it's a psychological fact that people don't adopt rationality as a conscious value until some other, already existing value is threatened by irrationality, and second, that you won't keep developing as a rationalist without such pressure.

As for whether reason by itself can supply supreme values, I had to ask because so many people do think you can get an ought from an is. (I still don't know what you meant by "truth points to itself".)

comment by Richard_Hollerith2 · 2008-02-02T03:52:24.000Z · LW(p) · GW(p)

You are not alone, Z. M. Davis: I disagree with Eliezer over whether Robin's anthropic solution is a satisfactory solution to Pascal's Mugging. (Eliezer repeated his endorsement of Robin's anthropic solution here a few weeks ago.) Since I started reading Eliezer 6 years ago, this is the first time I can recall disagreeing with him on a question of fact. (As I have pointed out many times in the comments here, I disagree with him significantly on terminal values.) If anyone wants to reply to this, I humbly suggest doing so by clicking on my name below.

comment by Caledonian2 · 2008-02-02T04:06:10.000Z · LW(p) · GW(p)
Let's distinguish between the aesthetics of rationality and the pragmatics of rationality.

Extraordinary - I don't believe I've ever heard anyone speak of the aesthetic aspects of rational thought before.

I'm not sure I agree with the concept, but it's something to think about.

It says, first, that it's a psychological fact that people don't adopt rationality as a conscious value until some other, already existing value is threatened by irrationality,

And when that value is threatened by the rationality? What then?

and second, that you won't keep developing as a rationalist without such pressure.

I suspect relatively few people have a deep desire for knowledge and understanding. They're usually the only ones I see developing as rationalists at all. If you don't have a need for the world to make sense, you tend to develop ad hoc methods for getting what you want. The need for systematic understanding is either present, or not.

comment by Gordon_Worley · 2008-02-03T04:10:03.000Z · LW(p) · GW(p)

For those saying they have nothing to protect or still need to find something to protect, remember that you are human and, unless you have no natural family or reproductive ties, you always have the people you love to protect. It may seem counterintuitive if you've bought into Hollywood rationality, but love is a powerful motivational force. If you think that, in theory, being more rational is good, but don't see how you can effect greater rationality in your mind, consider the many benefits of your increased rationality (again, not Hollywood rationality, but rationality of the type Eliezer describes above).

In my case, I know I'm trying harder than ever to become a better person because of my wife. And when I do something that hurts her, my first thought is to figure out what is wrong with my thinking that led to this. My second is to find a better way to express my love, through increasing her happiness and enjoyment of life. And, realizing that the best thing I can do is shut up and multiply, I figure out how to change myself to be a better multiplier.

comment by Richard_Hollerith2 · 2008-02-03T07:17:14.000Z · LW(p) · GW(p)

Excellent point by Worley. Since I have assumed the role on this blog of pointing out that happiness is not the meaning of life, let me hasten to add that happiness is a very useful barometer. Whether you are happier on average now than you were 10 years ago is for example probably a more reliable barometer of whether your life is on a better track than it was 10 years ago than change in financial net worth over those 10 years (though net worth is an important barometer too). And the one situation in which happiness is least likely to steer you wrong is when you use your wife's happiness as a barometer for how good a job you are doing as a husband.

The object of the game of life is not just to become more rational but rather to become more rational, more ethical and more loving. "Being loving" is defined as helping those close to you to become more rational, ethical and loving. This is the way we maximize the ethics and the rationality of every intelligent agent in our reach, which, if it is not the purpose of life, is a sufficiently good approximation for most people. (Singularity scientists however will probably need a more sophisticated definition of the purpose of life.)

By "ethics" I mean simply the sincere desire to do good and to avoid doing evil. (I freely admit I do not have a formula or algorithm that allows a person to tell good from evil in any situation). I bring the concept of ethics into this little exposition because I want to suggest that it is unethical to increase the rationality of an unethical person. By doing so, you are increasing his capacity to do evil. That suggestion goes against the egalitarianism that is such a central part of our ethical culture: the conventional ethical wisdom is that every human is equally deserving of loving treatment. I want to suggest that that is wrong and that although the majority of us could stand to become much more loving, it is also true that we should direct our love as much as possible towards ethical people and away from unethical people.

I end with a warning. Rationality only becomes powerful when it is combined with knowledge. If you wish to be rational about physics or space travel, it is easy to find accurate knowledge to help you, but it is much more difficult to find accurate knowledge about how to become more loving: in that domain, the accurate knowledge is mixed with a much larger amount of false information. And as has been said here before, most psychologists are idiots.

comment by tcpkac · 2008-02-03T12:01:46.000Z · LW(p) · GW(p)

Hollerith, if 'most psychologists are idiots', I wonder how they discovered all the cognitive biases?

comment by Caledonian2 · 2008-02-03T15:25:53.000Z · LW(p) · GW(p)
if 'most psychologists are idiots', I wonder how they discovered all the cognitive biases

He said 'most', not 'all'. And just because someone is an idiot doesn't mean everything they do is wrong. Even Freud managed to do some good descriptive work before descending into madness and delusion.

comment by Richard_Hollerith2 · 2008-02-04T19:21:31.000Z · LW(p) · GW(p)

I mentioned psychologists in a particular context, namely, how to apply the skills of rationality to the project of nurturing and supporting your friends, lovers and family. Worley and I think rationality can be applied to that project. But I thought just leaving it at that would mislead some of the readers who have not had a lot of practical experience in life: unlike many of the other projects rationality is typically applied to, this project is different in that you cannot just travel to your nearest bookstore and by browsing the shelves expect to find accurate knowledge to help you in this project (again because the true information is mixed with a much larger amount of false and misleading information and it is impractical to decide which is which). This remains true even if your nearest bookstore is in an elite university and full of textbooks.

If Eliezer mentions a book or article in psychology in a positive light, that is strong evidence that that book or article is worth reading. In 2001 I took the advice on his web site and read Robert Wright's Moral Animal and "The Psychological Foundations of Culture", and I am extremely pleased with the outcome.

comment by mitchell_porter2 · 2008-02-05T10:57:14.000Z · LW(p) · GW(p)

Caledonian: "I don't believe I've ever heard anyone speak of the aesthetic aspects of rational thought before."

It's funny - the phrase "aesthetics of rationality" appears in the final paragraph of Eliezer's post; apparently it's what the whole thing was about. But I didn't notice it either, until I was seriously casting about for some way to show that Caledonian person why their criticism was off the mark. I think Eliezer's point may be something like this: the aesthetics of rationality are all that could truly make it an end in itself; this necessarily involves attachment to a particular notion of rationality; and this attachment will hinder genuine progress in rationality, which may require adoption of a different but superior notion of rationality.

Along the way, I think I belatedly noticed a subtext to your own first comment too - you think Eliezer, and other promethean transhumanists like him, are themselves examples of limited rationality, their goals or expectations being unrealistic and therefore irrational. I've seen you say as much here, but I hadn't figured out that this was probably on your mind as you wrote your comment.

Anyway, I should get back to thinking about specks versus torture.

comment by Caledonian2 · 2008-02-05T14:49:28.000Z · LW(p) · GW(p)

Aesthetics are rarely a topic when rationality is discussed. Mostly because they're only relevant to ancient-Greek-style thought.

On the list of things likely to cause unreasonable attachment, it's pretty far down. Love of familiarity, wanting to appear intelligent to others, wanting to appear intelligent to oneself, unwillingness to face conclusions that one finds unpalatable, general inflexibility... these are all plausible causes of failure. But aesthetics?

comment by Amaroq · 2010-03-14T13:47:50.751Z · LW(p) · GW(p)

I think you're pretty close to the core of this one. You identified that having something to protect gives you strength. And having a worthy cause to work for, for the same reason.

But what is that reason? What is it that gives you strength? What is the underlying cause of us gaining strength from certain causes?

I'm not certain I understand the topic well enough myself, but I think I have something that you might find insightful here.

Moral Idealism. That's where your power comes from. Whether you're fighting to protect a loved one, or you're fighting to promote a worthy cause, you have the power to dedicate yourself with every fiber of your being because you believe your actions are righteous!

You see it all the time. When people are completely confident in the righteousness of their cause, they will put their all into it. You see it when someone protects a loved one, you see it when someone works for a worthy cause, and you see it particularly with religions; their unwavering faith in their belief instills them with a sense of righteousness.

I think your mention of people being more afraid of the crowd disagreeing with them than dying highlights a very dangerous philosophical flaw people hold today. They don't believe that protecting their lives is a righteous cause!! Having grown up in an altruistic society, they've probably been hammered with the message that other peoples' lives are more important than their own. So they lack the moral justification to protect themselves and they have a flawed moral premise that works to enslave them to the whim of the crowd.

You're worried about people not having a good reason to be rational? Here's the answer. Your own life must be your ultimate value. It must be an end in itself, and not the means to anyone else's ends. You must judge value with your life as the standard of judgment. Don't think in terms of good and evil, think in terms of good for you and bad for you. Not only are logic and reasoning tools to promote your life, you depend on them for survival. I can't imagine any way to throw away reason and promote your life at the same time.

(If you can instill people with the power of moral idealism to promote their own lives, you might also have a higher turnout of people buying into cryonics life insurance policies. :P)

comment by realitygrill · 2010-05-23T20:49:16.632Z · LW(p) · GW(p)

Personally, I find aesthetic purity to be a very strong source of attachment for me. It's certainly caused 'unreasonable attachments', like being stuck on being "right" and ascribing a purity to it (e.g. I am right about this and you are wrong, therefore I will absolutely refuse to do this small nitpicky thing and I don't care if I jam up the whole process because it's MORALLY WRONG not to do so. I am the lone voice of dissent!). Oh, school..

I came across the same hack, or coping trick. Just remap the definition of what you're being pure about to "winning" or "rationality".

Pretty sure I'm displaying that I missed the point somehow.

comment by ksvanhorn · 2011-01-21T20:18:18.141Z · LW(p) · GW(p)

The proper choice between (1) certainly save 400 lives and (2) 90% probability of saving 500 lives with 10% probability of saving no lives, depends on your utility function, which depends on the circumstances. If your utility is proportional to the number of lives saved, then sure, go with (2).

On the other hand, suppose that some cataclysm has occurred, those 500 lives are all that remains of the human race, and extinction of the human race has such an extremely negative utility for you that all other considerations amount to rounding error in the utility function. Then, to a close approximation, you want the choice C that maximizes P(S | C), where S="human race survives".

We have

P(S | C=1) = P(S | N=400)

P(S | C=2) = 0.9 * P(S | N=500)

where N is the size of the current population. Therefore, you should choose (1) if

P(S | N=400) / P(S | N=500) > 0.9.
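ksvanhorn's decision rule reduces to a one-line check; the survival probabilities passed in below are purely hypothetical placeholders, not values anyone in the thread has proposed:

```python
# Sketch of ksvanhorn's extinction-dominated rule:
# choose the certain option iff P(S | N=400) / P(S | N=500) > 0.9.
def prefer_certainty(p_survive_400, p_survive_500):
    return p_survive_400 / p_survive_500 > 0.9

# Hypothetical numbers: if 400 survivors are nearly as viable a
# founding population as 500, certainty wins; otherwise, gamble.
print(prefer_certainty(0.95, 1.0))  # True
print(prefer_certainty(0.50, 1.0))  # False
```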

comment by juliawise · 2011-08-26T22:51:59.247Z · LW(p) · GW(p)

Holden made a similar point.

comment by Ronny Fernandez (ronny-fernandez) · 2011-09-17T16:44:48.688Z · LW(p) · GW(p)

All right, I'd like to attempt a summary to make sure that I am understanding this post; if anyone sees a mistake in my interpretation, I'd appreciate it if they let me know.

Virtually everyone wants their beliefs to be true, which amounts to saying that practically everyone wants to be epistemically rational. Rationality is a rare trait, so obviously that desire is not enough to make you epistemically rational. But that desire, mixed with the rare desire to have all of your beliefs make useful predictions about whatever they talk about, is enough, provided that you never subordinate the mere predictive power of a belief to its truth. If you allow yourself to believe something because you thought it was true, even after you notice that some other belief makes reliably better predictions about the target of inquiry, at that point you fail as a rationalist.

Is there anything I'm missing?

What if what I want isn't true beliefs or to be rational, but to have the best method for finding truths in general, and I use the predictive power of a belief as the best guide to its truth value? In that case, if I find a method that works better than mine, i.e., leads to beliefs with higher predictive power more often than my old method, I'll switch to that method. I don't pride myself on having the best method, I pride myself on doing everything I can to find the best method. And, let's say that finding the best method for finding truth in general is far more important to me than my own life. Is that enough ya think?

It seems a lot like trying to be rational for its own sake, and I know that EY says that that leads to an infinite recursion, but I don't know why the person I described above must be using circular justification.

please help if you can

comment by Ronny Fernandez (ronny-fernandez) · 2011-09-17T17:00:31.141Z · LW(p) · GW(p)

Replies from: lessdazed
comment by lessdazed · 2011-09-17T17:05:21.170Z · LW(p) · GW(p)

It's logically possible but humans tend to want other things.

Replies from: ronny-fernandez
comment by Ronny Fernandez (ronny-fernandez) · 2011-09-17T17:14:00.856Z · LW(p) · GW(p)

I authentically feel like that's what I want. I can't think of anything I enjoy more, or anything I wouldn't give up for a few decades with the best algorithm for truth finding in general. Though my revealed preferences may end up saying something else.

comment by hannahelisabeth · 2012-11-14T13:40:05.460Z · LW(p) · GW(p)
  1. Save 400 lives, with certainty.
  2. Save 500 lives, 90% probability; save no lives, 10% probability.

I think it ought to be made explicit in the first scenario that 100 lives are being lost with certainty, because it's not necessarily implied by the proposition. I know a lot of people inferred it, but the hypothetical situation never stated it was 400/500, so it could just as easily be 400/400, in which case choosing it would certainly be preferable to the second option. I think it's important you make your hypothetical situations clear and unambiguous. Besides, a 100% probability of 100 deaths explicitly stated will influence the way people perceive the question. If you leave out writing out that 100 people are dying, you're also subtly encouraging your readers to forget about those people as well, so it comes as little surprise that some would prefer option 1.

Replies from: MugaSofer, ArisKatsaris
comment by MugaSofer · 2012-11-14T13:52:28.426Z · LW(p) · GW(p)

For all we know, billions of lives could be lost, with certainty; the question is how many we can save.

Replies from: hannahelisabeth
comment by hannahelisabeth · 2012-11-14T13:57:09.010Z · LW(p) · GW(p)

Or, for all we know, there are only 400 lives to be saved in the first instance. Saving 400 out of 400 is different than saving 400 out of 7 billion. The context of the proposition makes a difference, and it's always best to be clear and unambiguous in the parameters which will necessarily guide one's decision as to which choice is the best.

Replies from: MugaSofer, TheOtherDave
comment by MugaSofer · 2012-11-14T14:01:30.326Z · LW(p) · GW(p)

If there were only 400, where do the extra 100 come from in option 2?

That said, if this genuinely confuses you there may well be others who are having similar problems and this should be noted in the example.

comment by TheOtherDave · 2012-11-14T14:04:32.752Z · LW(p) · GW(p)

Huh.
Can you clarify exactly why it matters?
That is... I recognize that on a superficial level it feels like it matters, so if you're making a point about how to manipulate human psychology, then I understand that.
OTOH, if you're making an ethical point about the value of life, I don't quite understand why the value of those 400 lives is dependent on how many people there are in... well, in what? The world? The galaxy? The observable universe? The unobservable universe? Other?

Replies from: MugaSofer, ArisKatsaris, hannahelisabeth
comment by MugaSofer · 2012-11-14T14:11:40.258Z · LW(p) · GW(p)

To clarify, that's how many people in "The world? The galaxy? The observable universe? The unobservable universe? Other?" are going to die. You can save a maximum of 500 in this manner.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-11-14T19:55:35.645Z · LW(p) · GW(p)

Um.
OK... I still seem to be missing the point.

So I have a choice between A. "Save 400 lives, allow (N-400) people to die, with certainty." and
B. "Save 500 lives (allow N-500 people to die), 90% probability; save no lives (allow N people to die), 10% probability."

Are you suggesting that my choice between A and B ought to depend on N?
If so, why?

Replies from: hannahelisabeth
comment by hannahelisabeth · 2012-11-14T20:55:33.640Z · LW(p) · GW(p)

It doesn't depend on N if N is consistent between options A and B, but it would if they were different. It would make for an odd hypothetical scenario, but I was just saying that it's not made completely explicit.

comment by ArisKatsaris · 2012-11-14T14:15:09.955Z · LW(p) · GW(p)

Well, if there are only 400 people in the universe, option 1 means you're saving them all and nobody need die.

But that's a rather silly interpretation. That option 2 exists obviously means there exist at least 500 people in the universe.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-11-14T19:48:16.641Z · LW(p) · GW(p)

I agree with all of this.

comment by hannahelisabeth · 2012-11-14T20:53:51.936Z · LW(p) · GW(p)

I'm making a point about human psychology. The value of a life obviously does not change.

Although, I suppose theoretically, if the concern is not over individual lives, but over the survival of the species as a whole, and there are only 500 people to be saved, then picking the 400 option would make sense.

comment by ArisKatsaris · 2012-11-14T14:03:24.872Z · LW(p) · GW(p)

As MugaSofer said, it doesn't need to be 400/500, it may be 400/1,000,000 vs (500/1,000,000 with 90% probability). The original question indicated "Suppose that a disease, or a monster, or a war, or something, is killing people."

Imagine that hundreds of thousands of lives are being lost.

If you leave out writing out that 100 people are dying, you're also subtly encouraging your readers to forget about those people as well, so it comes as little surprise that some would prefer option 1.

How about the following rephrasing?

There's a natural catastrophe (e.g. a tsunami) occurring that will claim >100,000 lives. You have two options:

  1. Save 400 lives, with certainty.
  2. Save 500 lives, 90% probability; save no lives, 10% probability.
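The same comparison in expected-deaths terms (an illustrative sketch; the 100,000 figure is just the tsunami example above):

```python
# Expected deaths for each option when 100,000 lives are at risk.
total_at_risk = 100_000

deaths_option_1 = total_at_risk - 400                                # certain
deaths_option_2 = 0.9 * (total_at_risk - 500) + 0.1 * total_at_risk  # expected

print(deaths_option_1)         # 99600
print(round(deaths_option_2))  # 99550
```

Framed this way, option 2's lower expected death toll is harder to overlook.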
Replies from: hannahelisabeth
comment by hannahelisabeth · 2012-11-14T20:50:46.124Z · LW(p) · GW(p)

I think that rephrasing improves it.

comment by Mestroyer · 2013-01-03T02:22:22.066Z · LW(p) · GW(p)

You will never master rationality well enough to win with any weapon, if you do not appreciate the beauty for its own sake.

I have a low prior for this statement, but I don't have any data. I wonder why Eliezer thinks this is the case.

comment by Rixie · 2013-08-27T19:08:42.579Z · LW(p) · GW(p)

Here's a question that is slightly unrelated: I'm looking for a good cognitive science science fair project, and I'm having trouble thinking of one that would be practical for a high-schooler to do, won't take more than a few months, and would be interesting enough to hold people's attention for at least a few minutes before they head off to the physics and medical research projects. No one ever does decent cognitive science projects, and I really want to show them that this branch of science can be just as rigorous and awesome as the other ones. Does anyone have any ideas?

comment by Capla · 2014-11-11T16:57:00.524Z · LW(p) · GW(p)

I want to read the X saga but I can't seem to find it. Can anyone point me in the right direction?

Replies from: Username
comment by Username · 2014-11-29T03:53:34.417Z · LW(p) · GW(p)

I'm fairly sure he was referring to X/1999 by Clamp.

comment by BenLowell · 2017-04-09T07:23:53.562Z · LW(p) · GW(p)

I've been coming back to this post for 7 years or so, and the whole time it's been obvious that I don't have something to protect, haven't found one, and haven't yet found a way to find one. It seems pretty cool though - and accurate that people who really care about things are able to go to great lengths to improve the way they think about the thing and their ability to solve it.

I can say that once I realized I cared about wanting to care about something, that helped me quite a bit and I started improving my life.

comment by Ben (ben-lang) · 2022-02-18T17:51:18.976Z · LW(p) · GW(p)

Very interesting. I can't help feeling that "trying to be a better rationalist" is somehow a paradoxical aim.

Roughly speaking, I would say that we have preferences, and there is no rational way of picking preferences. If you prefer pizza to ice cream, or pleasure to pain, or living to dying, then that is that. Rationality is a mechanism for effectively pursuing your preferences: ordering pizza, not putting your hand in a fire, etc. You can't pick rational preferences (goals); you can only pick a rational route towards those goals.

If you adopt "I want to be more rational" as a preference/goal in-itself it feels like the snake is eating its own tail. 

Maybe "meta goals" like this do arise elsewhere, e.g. "I don't currently have any interest in being strong/rich/powerful/skilled for its own sake, nor are these things worth pursuing based on my current preferences (which are more efficiently achieved by other means). However, these are things that might be generically useful for achieving preferences I may or may not have in the future, so I should acquire them as tools for later."

But if we take rationality to mean "taking the best actions with the available information to meet your goals", then, at least by this definition, pursuing the meta-goals appears to be definitionally irrational. This extends to the meta-goal of "being a better rationalist".

comment by heresieding · 2022-04-13T02:30:06.796Z · LW(p) · GW(p)

I savor the succulent choleric chaos of declaring that I value mere phlegm above yellow bile. That is almost a contradiction, but not quite; and the resulting blend has a choleric quality as well: a delicious humor.

comment by Tracey Foster (tracey-foster) · 2024-02-03T03:36:55.276Z · LW(p) · GW(p)

Lessons from experimenting prove to be more valuable than from Authority? I think that Adam and Eve would beg to differ. I know, mentioning them probably disqualifies me as fertile ground for rationalist seeding, huh? Oh well, can't win them all.

But anyway, thanks for the well done Harry Potter fanfic. Truly, I am going to reread it several times, I'm sure.

Replies from: jacobjacob
comment by jacobjacob · 2024-02-05T19:24:12.279Z · LW(p) · GW(p)

[Mod note: I edited out your email from the comment, to save you from getting spam email and similar. If you really want it there, feel free to add it back! :) ]