Comments

Comment by Manon_de_Gaillande on Another Call to End Aid to Africa · 2009-04-05T08:25:59.000Z · LW · GW

I talked to other people about such calls. They called me evil. Apparently, people don't see the proposition "Aid is good" as following from "Aid helps people" (a purely factual claim) and "Helping people is good" (which only evil people deny); it's all in the same mental bucket. So we're pretty much screwed explaining it. Moreover, even when people finally get the distinction, the claims tend to be rejected at the speed of thought - because we all know "Aid is good".

Comment by Manon_de_Gaillande on Formative Youth · 2009-02-25T00:51:46.000Z · LW · GW

I'm somewhat puzzled by how all the influences you quote are fiction. I read and watched fiction as a child, and the only obvious consequence for my personality has been 1) extremely distorted - I can recognize the influence because I remember it, but you couldn't look at that part of my personality and say "Aha, that came from Disney movies!" - 2) tossed out of the window in a recent crisis of faith, and 3) shaped more by real life than by fiction. I've been recalculating a lot of things since I was as young as 4 (most of which ended up wrong because of lack of evidence and a few fundamental mistakes), with a wave of recalculation each time I uncovered a fundamental mistake (this happened twice). Many recalculations ended up in a very different place from their starting point, which gives somewhat more credence to the "lovely excuse" when it applies.

What did I pick up from childhood? Altruism? I can't trace back the causal line; I don't remember a point at which I wasn't altruistic in full generality. I do remember stories about "altruism = good" and "ingroup/outgroup dichotomy = bad", but I already agreed with that. What I remember picking up were social norms of the form "Saying 'X is Y' is good" - but unlike other children, I picked up "X is Y": "Truth is good", "Death is bad" (didn't quite believe that one, had to recalculate later), "Love is good" (tossed out of the window when I realized "love" is vague). But I picked those up from social life, not fiction - and I was a stereotypical bookworm. I may have confused "good fiction" and "good life" due to fiction, but real-life influences look more like the culprits.

The simplest hypothesis is not "People are embarrassed". I bet they simply don't know. Most people are just terrible at introspection, and don't even think about it.

Also, yes, I'm going to get you started. Incredible disregard for what?

Comment by Manon_de_Gaillande on Wise Pretensions v.0 · 2009-02-20T17:45:09.000Z · LW · GW

I find this harder to read. The arguments are obscured. The structure sucks; claims are not isolated into neat little paragraphs so that I can stop and think "Is this claim actually true?". It's about you (why you aren't Wise) rather than about the world (how Wisdom works).

Comment by Manon_de_Gaillande on Against Maturity · 2009-02-19T11:18:17.000Z · LW · GW

I've rarely heard "You'll understand when you're older" on questions of simple fact. Usually, it's uttered when someone who claims to be altruistic points out that someone else's actions are harmful. The Old Cynic then tells the Young Idealist, "I used to be like you, but then I realized you've got to be realistic; you'll understand when you're older that you should be more selfish." But they never actually offer an object-level argument, or even seem to have changed their minds for rational reasons - it looks like the Selfishness Fairy just changed their terminal values as they grew older. That may be the case; it may also be sour-grapes bias: when they realized their altruism could never have as big an effect as it ought to, they decided altruism wasn't right after all. The best defense I can come up with is: if your moral intuitions change, especially in a way you've previously noticed as "maturing", only trust them if your justifications for the change would convince your past self at their most idealistic.

Is this "stupid teeneager" thing real, or just a stereotype that sells books? I've seen teenagers drink and drive; they don't look like they do it to look adult. I've tried some drugs and turned others down, and the only things that (I'm aware) factored were what I could learn from the experience, how pleasant it would be, and the risks. I consciously ignored peer pressure - as for looking mature, I simply didn't even consider it could be a criterion any more than the parity of my number of nose hair.

Comment by Manon_de_Gaillande on True Ending: Sacrificial Fire (7/8) · 2009-02-05T20:22:36.000Z · LW · GW

Oh, from the Normal Ending and Eliezer's comments, I'm starting to see why the Superhappies are not so right after all - what they lack, why they are alien. I think this should have been explained in more detail in the story, because I initially failed to see their offer as anything but good, let alone bad enough to kill yourself over. I want untranslatable 2!

Still, if I had been able to decide on behalf of humanity, I would have tried to make a deal - not accepting their offer outright, but negotiating to keep more of what matters to us, maybe by adopting more of their emotions, or by asking for smaller modifications from them. It just doesn't look that irreconcilable.

Also, their offer to have the Babyeaters eat nonsentient children sounds stupid - like replacing our friends and lovers with catgirls.

Comment by Manon_de_Gaillande on The Super Happy People (3/8) · 2009-02-01T11:16:40.000Z · LW · GW

Wait. Aren't they right? I don't like that they don't terminally value sympathy (though they're pretty close), but that's beside the point. Why keep the children suffering? If there is a good reason - that humans need a painful childhood to explore, learn and develop properly, for example - shouldn't the Super Happy be convinced by that? They value other things than a big orgasm - they grow and learn - they even tried to forsake some happiness for more accurate beliefs. If, despite this, they end up preferring stupid happy superbabies to painful growth, it's likely we would agree. I don't want to just tile the galaxy with happiness counters - but if collapsing into orgasmium means the Super Happy, sign me up.

Comment by Manon_de_Gaillande on War and/or Peace (2/8) · 2009-01-31T16:22:06.000Z · LW · GW

Eliezer, why do you hate death so much? I understand why you'd hate it as much as the social norm wants you to say you do, but not why you hate it so much more. People don't hate death, and don't even say they hate death, nearly as much as you do. I can't think of a simpler hypothesis than "Eliezer is a mutant".

Now, of course, throwing in the long, painful agony of children changes something.

Comment by Manon_de_Gaillande on Value is Fragile · 2009-01-30T19:49:46.000Z · LW · GW

@Jotaf: No, you misunderstood - guess I got double-transparent-deluded. I'm saying this:

  • Probability is subjectively objective
  • Probability is about something external and real (called truth)
  • Therefore you can take a belief and call it "true" or "false" without comparing it to another belief
  • If you don't match truth well enough (if your beliefs are too wrong), you die
  • So if you're still alive, you're not too stupid - you were born with a smart prior, so you're justified in having it
  • So I'm happy with probability being subjectively objective, and I don't want to change my beliefs about the lottery. If the paperclipper had stupid beliefs, it would be dead - but it doesn't, it has evil morals.

  • Morality is subjectively objective
  • Morality is about some abstract object, a computation that exists in Formalia but nowhere in the actual universe
  • Therefore, if you take a morality, you need another morality (possibly the same one) to assess it, rather than a nonmoral object
  • Even if there was some light in the sky you could test morality against, it wouldn't kill you for your morality being evil
  • So I don't feel on better moral ground than the paperclipper. It has human_evil morals, but I have paperclipper_evil morals - we are exactly equally horrified.

Comment by Manon_de_Gaillande on Value is Fragile · 2009-01-29T19:28:38.000Z · LW · GW

@Eliezer: Can you expand on the "less ashamed of provincial values" part?

@Carl Shuman: I don't know about him, but for myself, HELL YES I DO. Family - they're just randomly selected by the birth lottery. Lovers - falling in love is some weird stuff that happens to you regardless of whether you want it, reaching into your brain to change your values: like, dude, ew - I want affection and tenderness and intimacy and most of the old interpersonal fun and much more new interaction, but romantic love can go right out of the window with me. Friends - I do value friendship; I'm confused; maybe I just value having friends, and it'd rock to be close friends with every existing mind; maybe I really value preferring some people to others; but I'm sure about this: I should not, and do not want to, worry more about a friend with the flu than about a stranger with cholera.

@Robin Hanson: HUH? You'd really expect natural selection to come up with minds who enjoy art, mourn dead strangers and prefer a flawed but sentient woman to a perfect catgirl on most planets?

This talk about "'right' means right" still makes me damn uneasy. I don't have more to show for it than "still feels a little forced" - but when I visualize a humane mind (say, a human) and a paperclipper (a sentient, moral one) looking at each other in horror, knowing there is no way they could agree about whether to use atoms to feed babies or to make paperclips, something feels wrong. I think about the paperclipper in exactly the same way it thinks about me! Sure, that's also what happens when I talk to a creationist, but there we're trying to approximate external truth; and if our priors were too stupid, our genetic line would be extinct (or at least that's what I think) - but morality doesn't work like probability, it's not trying to approximate anything external. So I don't feel much happier about the moral miracle that made us than about the one that makes the paperclipper.

Comment by Manon_de_Gaillande on Failed Utopia #4-2 · 2009-01-21T19:56:36.000Z · LW · GW

Oh please. Two random men are more alike than a random man and a random woman, okay, but seriously, a difference so huge that it makes it necessary to either rewrite minds to be more alike or separate them? First, anyone who prefers to socialize with the opposite gender (ever met a tomboy?) is going to go "Ew!". Second, I'm pretty sure there are more than two genders (if you want to say genderqueer people are lying or mistaken, the burden of proof is on you). Third, neurotypicals can get along with autists just fine (when they, you know, actually try), and that makes the difference between genders look hoo-boy-tiiiiny. Fourth - hey, I like diversity! Not just knowing there are happy different minds somewhere in the universe, but actually interacting with them. I want to sample ramensubspace every day over a cup of tea. No way do I want to make people more alike.

Comment by Manon_de_Gaillande on Continuous Improvement · 2009-01-11T12:52:16.000Z · LW · GW

I don't see how removing getting-used-to is close to removing boredom. I am not a neurologist, but on a surface level, they do seem to work differently - boredom is reading the same book every day and getting tired of it; habituation is getting a new book every day and no longer thinking "Yay, new fun".

I'm reluctant to keep habituation because, at least in some cases, it is evil. When the emotion is appropriate to the event, it's wrong for it to diminish - you have a duty to rage against the dying of the light. (Of course we need it for survival; we can't be mourning all the time.) It also looks linked to status quo bias.

Maybe, like boredom, habituation is an incentive to make life better; but it's certainly not optimal.

Comment by Manon_de_Gaillande on You Only Live Twice · 2008-12-13T15:16:00.000Z · LW · GW

I'm going to stick out my neck. Eliezer wants everyone to live. Most people don't.

People care about their own and their loved ones' immediate survival. They discount heavily for long-term survival. And they don't give a flying fuck about the lives of strangers. They say "Death is bad", but the social norm is not "Death is bad"; it's "Saying 'Death is bad' is good".

If this is not true, then I don't know how to explain why they dismiss cryonics out of hand with arguments about how death is not that bad, which are clearly not their true rejection. The silliness heuristic explains believing it would fail, or that it's a scam - not rejecting the principle. Status quo and naturalistic biases explain part of the rejection, but surely not the whole thing.

And it would explain why I was bewildered, thinking "Why would you want a sucker like me to live?" even though I know Eliezer truly values life.

Comment by Manon_de_Gaillande on Mundane Magic · 2008-10-31T18:37:40.000Z · LW · GW

Actually, the Mystic Eyes of Depth Perception are pretty underwhelming. You can tell how far away things are with one eye most of the time. The difference is big enough to give a significant advantage, but nowhere near superpower level. My own depth perception is crap (better than one eye, though), and I don't often bump into walls.

Comment by Manon_de_Gaillande on Crisis of Faith · 2008-10-11T13:34:06.000Z · LW · GW

Nazir Ahmad Bhat, you are missing the point. It's not a question of identity, like which ice cream flavor you prefer. It's about truth. I do not believe there is a teapot orbiting around Jupiter, for the various reasons explained on this site (see Absence of evidence is evidence of absence and the posts on Occam's Razor). You may call this a part of my identity. But I don't need people to believe in a teapot. Actually, I want everyone to know as much as possible. Promoting false beliefs is harming people, like slashing their tires. You don't believe in a flying teapot: do you need other people to?

Comment by Manon_de_Gaillande on Inseparably Right; or, Joy in the Merely Good · 2008-08-30T18:19:54.000Z · LW · GW

Eliezer, sure, but that can't be the whole story. I don't care about some of the stuff most people care about. Other people whose utility functions differ from the social norm in similar (but not identical) ways are called "psychopaths", and most people think they should either adopt common morals or be removed from society. I agree with this.

So why should I make a special exception for myself, just because that's who I happen to be? I try to behave as if I shared common morals, but it's just a gross patch. It feels tacked on, and it is.

I expected (though I had no idea how) that you'd come up with an argument that would convince me to fully adopt such morals. But what you said would apply to any utility function. If a paperclip maximizer wondered about morality, you could tell it: "'Good' means 'maximizes paperclips'. You can think about it all day long, but you'd just end up making a mistake. Is that worth forsaking the beauty of tiling the universe with paperclips? What do you care that there exist, somewhere in mindspace, minds that drag children off train tracks?" and it would work just as well. Yet if you could, I bet you'd choose to make the paperclip maximizer adopt your morals.

Comment by Manon_de_Gaillande on The Meaning of Right · 2008-07-30T15:34:00.000Z · LW · GW

Constant: "Give a person power, and he no longer needs to compromise with others, and so for him the raison d'etre of morality vanishes and he acts as he pleases."

If you could do so easily and with complete impunity, would you organize fights to the death for your pleasure? Would you even want to? Moreover, humans are often tempted to do things they know they shouldn't, because they also have selfish desires. AIs don't, unless you build such desires into them. If they really do ultimately care about humanity's well-being, and don't take any pleasure in making people obey them, they will keep doing so even once they have that power.

Comment by Manon_de_Gaillande on The Meaning of Right · 2008-07-30T11:53:00.000Z · LW · GW

I'm confused. I'll try to rephrase what you said, so that you can tell me whether I understood.

"You can change your morality. In fact, you do it all the time, when you are persuaded by arguments that appeal to other parts of your morality. So you may try to find the morality you really should have. But - "should"? That's judged by your current morality, which you can't expect to improve by changing it (you expect a particular change would improve it, but you can't tell in what direction). Just like you can't expect to win more by changing your probability estimate to win the lottery.

Moreover, while there is such a fact as "the number on your ticket matches the winning number", there is no ultimate source of morality out there, no way to judge Morality_5542 without appealing to another morality. So not only can you not jump to another morality, you also have no reason to want to: you're not trying to guess some true morality.

Therefore, just keep whatever morality you happen to have, including your intuitions for changing it."

Did I get this straight? If I did, it sounds a lot like a relativistic "There is no truth, so don't try to convince me" - but there is indeed no truth, as in, no objective morality.

Comment by Manon_de_Gaillande on The Meaning of Right · 2008-07-29T09:05:19.000Z · LW · GW

This argument sounds too good to be true - when you apply it to your own idea of "right". It also works for, say, a psychopath unable to feel empathy who gets a tremendous kick out of killing. How is there not a problem with that?

Comment by Manon_de_Gaillande on Changing Your Metaethics · 2008-07-27T22:49:41.000Z · LW · GW

No! The problem is not reductionism, or that morality is or isn't about my brain! The problem is that what morality actually computes is "What should you feel-moral about in order to maximize your genetic fitness in the ancestral environment?". Unlike math, which is more like "What axioms should you use in order to develop a system that helps you in making a bridge?" or "What axioms should you use in order to get funny results?". I care about bridges and fun, not genetic fitness.

Actually, "Whatever turns y'all on" is a pretty damn good morality. Because it makes sense on an intuitive level (it looks like what selfishness would be if other people were you). Because it doesn't care too much where your mind comes from, as it maximizes whatever turns you on. Because it mostly adds up to normality. Possibly because it's what I used, so I'm biased. Though I don't think you quite get normality - killing is a minor offense here, because people don't get to experience it.

Comment by Manon_de_Gaillande on Could Anything Be Right? · 2008-07-18T23:27:58.000Z · LW · GW

Folks, we covered that already! "You should open the door before you walk through it." means "Your utility function ranks 'Open the door, then walk through it' above 'Walk through the door without opening it'". YOUR utility function. "You should not murder." is not just reminding you of your own preferences. It's more like "(The 'morality' term of) my utility function ranks 'you murder' below 'you don't murder'", and most "sane" moralities tend to regard "this morality is universal" as a good thing.

Comment by Manon_de_Gaillande on The Moral Void · 2008-07-01T00:18:25.000Z · LW · GW

Caledonian: 1) Why is it laughable? 2) If hemlines mattered to you as much as a moral dilemma does, would you still hold this view?

Comment by Manon_de_Gaillande on The Moral Void · 2008-06-30T11:37:17.000Z · LW · GW

I'm pretty sure you're doing it wrong here.

"What if the structure of the universe says to do something horrible? What would you have wished for the external objective morality to be instead?" Horrible? Wish? That's certainly not according to objective morality, since we've just read the tablet. It's just according to our intuitions. I have an intuition that says "Pain is bad". If the stone tablet says "Pain in good", I'm not going to rebel against it, I'm going to call my intuition wrong, like "Killing is good", "I'm always right and others are wrong" and "If I believe hard enough, it will change reality". I'd try to follow that morality and ignore my intuition - because that's what "morality" means.

I can't just choose to write my own tablet according to my intuitions, because so could a psychopath.

Also, it doesn't look like you understand what Nietzsche's abyss is. No black makeup here.

Comment by Manon_de_Gaillande on Heading Toward Morality · 2008-06-20T19:57:38.000Z · LW · GW

I'm surprised no one seems to doubt HA's basic premise. It sure seems to me that toddlers display enough intelligence (especially in choosing what they observe) to make one suspect self-awareness.

I'm really glad you will write about morality, because I was going to ask. Just a data dump from my brain, in case anyone finds this useful:

Obviously, by "We should do X" we mean "I/We will derive utility from doing X", but we don't mean only that. Mostly we apply it to things that have to do with altruism - the utility we derive from helping others.

There is no Book of Morality written somewhere in reality, like the color of the sky, about which you can do Bayesian magic as if it were a fact - though in extreme circumstances, treating it that way can be a good idea. E.g., if almost everyone holds human life as a terminal value and someone doesn't, I'll call them a psychopath and mistaken. Unlike facts, utility functions depend on agents. We will, if we are good Bayesian wannabes, agree on whether doing X will result in A, but I can't see why the hell we'd agree on whether A is terminally desirable.

That's a big problem. Our utility functions are what we care about, but they were built by a process we see as outright evil. The intuition that says "I shouldn't torture random people on the street" and the one that says "I must save my life even if I need to kill a bunch of people to survive" come from the same source, and there is no global objective morality to call one good and the other bad - just another intuition that also comes from that source.

Also, our utility functions differ. The birth lottery made me a liberal ( http://faculty.virginia.edu/haidtlab/articles/haidt.graham.2007.when-morality-opposes-justice.pdf ). It doesn't seem like I should let my values depend on such a random event, but I just can't bring myself to think of ingroup/outgroup and authority as moral foundations.

The confusing part is this: we care about the things we care about for a reason we consider evil. There is no territory of Things worth caring about out there, but we have maps of it and we just can't throw them away without becoming rocks.

I'll bang my head on the problem some more.

Comment by Manon_de_Gaillande on Ghosts in the Machine · 2008-06-19T03:38:11.000Z · LW · GW

kevin: Eliezer has written about that already. The AI could convince any human to let it out. See the AI box experiment ( http://yudkowsky.net/essays/aibox.html ). If it were connected to the Internet, it could crack the protein folding problem, figure out how to build protein nanobots (to, say, build other nanobots), order the raw materials (such as DNA strings) online, and convince some guy to mix them ( http://www.singinst.org/AIRisk.pdf ). Or it could think of something we can't even think of, just as we could use fire if we were kept in a wooden prison (same paper).

Comment by Manon_de_Gaillande on Living in Many Worlds · 2008-06-05T18:30:32.000Z · LW · GW

Your main argument is "Learning QM shouldn't change your behavior". This is false in general. If your parents own slaves and you've been taught that people in Africa live horrible lives and slavery saves them, and you later discover the truth, you will feel and act differently. Yet you shouldn't expect your life far away from Africa to be affected: it still adds up to normality.

Some arguments are convincing ("you can't do anything about it so just call it the past" and "probability"), but they may not be enough to support your conclusion on their own.

Comment by Manon_de_Gaillande on Why Quantum? · 2008-06-04T20:15:41.000Z · LW · GW

Why does the area under a curve equal the antiderivative? I've done enough calculus to suspect I somehow know the reason, but I just can't quite pinpoint it.
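For what it's worth, a minimal sketch of the standard answer (the fundamental theorem of calculus) - just the usual textbook argument, not anything from this thread:

    Let $A(x) = \int_a^x f(t)\,dt$ be the area accumulated under $f$ up to $x$.
    For small $h$, the extra area is roughly a thin strip of width $h$ and height $f(x)$:
    $$A(x+h) - A(x) \approx f(x)\,h,$$
    so
    $$A'(x) = \lim_{h \to 0} \frac{A(x+h) - A(x)}{h} = f(x).$$
    The area function is therefore an antiderivative of $f$; any other antiderivative $F$
    differs from it by a constant, which gives $\int_a^b f(t)\,dt = F(b) - F(a)$.

In words: the rate at which area accumulates at $x$ is exactly the height of the curve there.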

Comment by Manon_de_Gaillande on Timeless Physics · 2008-05-27T18:15:13.000Z · LW · GW

For some reason, this view of time fell nicely into place in my mind (not "Aha! So that's how it is?" but "Yes, that's how it is."), so if it's wrong, a lot of us are mistaken in the same way.

But that doesn't dissolve the "What happened before the Big Bang?" question. I point at our world and ask "Where does this configuration come from?", you point at the Big Bang, I ask the same question, and you say "Wrong question.". Huh?

Comment by Manon_de_Gaillande on The Dilemma: Science or Bayes? · 2008-05-16T18:38:00.000Z · LW · GW

Eliezer: "A little arrow"? Actual little arrows are pieces of wood shot with a bow. Ok, amplitudes are a property of a configuration you can map in a two-dimensional space (with no preferred basis), but what property? I'll accept "Your poor little brain can't grok it, you puny human." and "Dunno - maybe I can tell you later, like we didn't know what temperature was before Carnot.", but a real answer would be better.

Comment by Manon_de_Gaillande on Science Doesn't Trust Your Rationality · 2008-05-14T20:34:49.000Z · LW · GW

I am not smarter than that. But you might (just might) be. "Eliezer says so" is strong evidence for anything. I'm too stupid to use the full power of Bayes, and I should defer to Science, but Eliezer is one of the best Bayesian wannabes - he may be mistaken, but he isn't crazily refusing to let go of his pet theory. Still not enough to make me accept MWI, but a major change in my estimate nonetheless.

As a side note, what a true libertarian system actually looks like is Europe during the Industrial Revolution.

Comment by Manon_de_Gaillande on The Dilemma: Science or Bayes? · 2008-05-13T18:53:09.000Z · LW · GW

I don't believe you.

I don't believe most scientists would make such huge mistakes. I don't believe you have shown all the evidence. This is the only explanation of QM I've been able to understand - I would have a hard time checking. Either you are lying for some higher purpose, or you're honestly mistaken, since you're not a physicist.

Now, if you have really presented all the relevant evidence, and you have not explained QM in a way which makes some interpretation sound more reasonable than it is (what is an amplitude exactly?), then the idea of a single world is preposterous, and I really need to work out the implications.

Comment by Manon_de_Gaillande on Quantum Non-Realism · 2008-05-08T16:25:12.000Z · LW · GW

"My curiosity doesn't suddenly go away just because there's no reality, you know!" Eliezer, I want to high-five you.

Does this "Many worlds" thing imply that there exists (in some meaningful sense) other worlds alongside us where whatever quantum events didn't happen here happened? (If not, or if this is a wrong question, disregard the following.)

What are the moral implications? If some dictator says "If this photon passes through this filter (which it can do with probability 0.5), I will torture you all; if it is absorbed, I will do something vaguely nice", and the photon is absorbed, should we rejoice, or should we grieve for those people in another world who are being tortured?

Should we try quantum suicide? I think I'm willing to die (at least once, but maybe not in a lot of worlds - my poor little brain can't grasp the concept of multiple deaths) to let one world know whether MWI is true.

What about other events? A coin flip isn't really a quantum random event (and may not even be random at all if you know enough), but the coin is made out of amplitudes - are there worlds where the coin lands on the other side? We won WW2 by the skin of our teeth - are there any worlds where the Earth is ruled by Nazi Germany?

Comment by Manon_de_Gaillande on Is Humanism A Religion-Substitute? · 2008-03-26T19:57:24.000Z · LW · GW

If people do have a religion-shaped hole (I can tell at least some do), what are they supposed to do about it? Ignoring it to focus on real things will not plug the hole. Modifying your brain or creating a real godlike thing is not possible yet. So what are we to do?

Comment by Manon_de_Gaillande on Joy in Discovery · 2008-03-21T11:57:07.000Z · LW · GW

"Sure, someone else knows the answer - but back in the hunter-gatherer days, someone else in an alternate Earth, or for that matter, someone else in the future, knew what the answer was."

I think the difference is that someone else knows the answer and can tell you.

Comment by Manon_de_Gaillande on 37 Ways That Words Can Be Wrong · 2008-03-06T08:27:53.000Z · LW · GW

What's the bad thing that happens if I do 35? It's a mistake, but how will it prevent me from using words correctly? I'd still be able to imagine a triangular lightbulb.

Comment by Manon_de_Gaillande on The Second Law of Thermodynamics, and Engines of Cognition · 2008-02-29T11:13:14.000Z · LW · GW

You lost me there.

1) If Alice and Bob observe the system in your first example, and Alice decides to keep precise track of X's possible states while Bob just says "2-8", the entropy of X+Y is 2 bits for Alice and 2.8 for Bob (see the sketch after point 2). Isn't entropy a property of the system, not the observer? (This is the problem with "subjectivity": of course knowledge is physical; it's just that it depends on the observer and the observed system instead of just the system.)

2) If Alice knows all the molecules' positions and velocities, a thermometer will still display the same number; if she calculates the average speed of the molecules, she will find this same number; and if she sticks her finger in the water at a random moment, she should expect to feel the same thing Bob, who just knows the water's temperature, does. How is the water colder? Admittedly, Alice could make it colder (and extract electricity), but she doesn't have to.
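To make the arithmetic in point 1 concrete, here is a minimal Python sketch of observer-dependent entropy, assuming the numbers come from counting equally likely states; the specific counts of 4 and 7 are my own hypothetical reconstruction, not taken from the original post:

    import math

    def entropy_bits(num_possible_states):
        # Uncertainty over N equally likely states is log2(N) bits.
        return math.log2(num_possible_states)

    # Alice tracks the exact set of states the system could be in (say, 4 of them);
    # Bob only remembers the range "2-8", i.e. 7 possible states.
    print(entropy_bits(4))  # 2.0 bits
    print(entropy_bits(7))  # ~2.807 bits

Same physical system, different entropies, because the entropy here measures each observer's remaining uncertainty rather than a property of the system alone.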

Comment by Manon_de_Gaillande on The "Intuitions" Behind "Utilitarianism" · 2008-01-29T19:43:00.000Z · LW · GW

I think I've found one of the factors (besides scope insensitivity) involved in the intuitive choice: in real life, a small amount of harm inflicted n times on one person has negative side effects which don't happen when you inflict it once on each of n persons. Even though there aren't any such side effects in this thought experiment, we are so used to them that we probably take them into account (at least I did).

Comment by Manon_de_Gaillande on Zut Allais! · 2008-01-20T13:08:28.000Z · LW · GW

Maybe the reason we tend to choose bet 2 over bet 1 (before computing the actual expected winnings) is not the higher probability of winning, but the smaller sum we can lose (either the sum we expect to lose or the worst-case loss - I'm not sure which). So the bias here could be more something along the lines of status quo bias or the endowment effect than a need for certainty.
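As an illustration of what "computing the actual expected winnings" looks like, here is a minimal Python sketch with made-up bets; the payoffs and probabilities are hypothetical, chosen only so that bet 2 has both the higher win probability and the smaller possible loss:

    def expected_value(p_win, win_amount, loss_amount):
        # Probability-weighted average outcome of the bet.
        return p_win * win_amount - (1 - p_win) * loss_amount

    bet_1 = expected_value(0.5, 100, 20)  # 50% to win $100, else lose $20 -> 40.0
    bet_2 = expected_value(0.9, 40, 5)    # 90% to win $40, else lose $5  -> 35.5
    print(bet_1, bet_2)

If intuition still prefers bet 2 here, that preference can't be coming from expected winnings; it has to come from the higher win probability, the smaller loss, or both.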

I can only speak for myself, but I do not intuitively value certainty/high probability of winning, while I am biased towards avoiding losses.

Comment by Manon_de_Gaillande on Stranger Than History · 2008-01-13T15:20:14.000Z · LW · GW

Actually, the last statement (about spankings instead of jails) doesn't sound foolish at all. We abolished torture and slavery; we have replaced a lot of punishments with softer ones; we are trying to make executions painless, and more and more people are against the death penalty; we are more and more concerned about the well-being of ever larger groups (white men, then women, then other "races", then children); we pay attention to personal freedom; we think inmates are entitled to human rights; and if we care more about preventing further misdeeds than about punishing the culprit, jails may not be efficient. I doubt spanking will replace jail, but I'd bet on something along these lines.

Comment by Manon_de_Gaillande on A Failed Just-So Story · 2008-01-05T10:15:42.000Z · LW · GW

This pressure exists once religion is already in place, but doesn't explain why it appears and spreads.

However, selecting for cheats doesn't matter, since they must teach their religion to their children in order to properly simulate faith. Moreover, I suspect that most people who didn't actively choose their religion, but passively accepted it as children, don't fully believe it.