Comments

Comment by Caravelle on Trivers on Self-Deception · 2011-07-24T22:32:44.317Z · LW · GW

This.

I don't know if latent homosexuality in homophobes is the best example, but I've definitely seen it in myself. I will sometimes behave in certain ways, for motives I find perfectly virtuous or justified, and it is only by analysing my behaviour post-hoc that I realize it isn't consistent with the motives I thought I had - but it is consistent with much more selfish motives.

I think the example that most shocked me was back when I played an online RPG, and organised an action in a newly-coded environment. I and others on my team noticed an unexpected consequence of the rules that would make it easy for us to win. Awesome! We built our strategy around it, proud of our cleverness, and went forward with the action.

And down came the administrators, furious that we had cheated that way.

I was INCENSED at the accusation. How were we supposed to know this was a bug and not a feature? How dare they presume bad faith on our part? I loudly and vocally defended our actions.

It was only later, as I was re-reading our posts on the private forum where we organised the action (posts that, I realized as I re-read them, the administrators had access to, and had probably read... please kill me now), that I noticed that not only had we discussed said bug, I had specifically told everyone not to tell the administrators about it. At the time, my reasoning was that, well, they might decide to tell us not to use it, and we wouldn't want that, right?

But if I'd thought there was a chance that the administrators would disapprove of us using the bug, how could I possibly think it wasn't a bug, and that using it wasn't cheating? If I was acting in good faith, how could I possibly not want to check with the administrators and make sure?

Well, I didn't. I managed to cheat, obviously, blatantly, with no conscious awareness that I was doing so. That's not even quite true; I bet if I'd thought it through, as I did afterwards, I would have realized it. But my subconscious was damn well not going to let me think it through, now was it?

And why would my subconscious not allow me to understand I was cheating? Well, the answer is obvious: so that I could be INCENSED and defend myself vocally, passionately and with utter sincerity once I did get accused of cheating. Heck, I probably did get away with it in some people's eyes - those who didn't read the incriminating posts on the private forum, at least.

So basically, now I don't take my motives for granted. I try to consider not only why I think I want to do something, but what motives one could infer from the actual consequences of what I want to do.

It also means I worry much less about other people's motives. If conscious motives were a perfect guide to people's actions, then someone who thinks they truly love their partner while their actions result in abuse might just be an unfortunate klutz with anger issues, who should be pitied and given second chances instead of dumped. But if the subconscious can have selfish motives and cloak them in virtue for the benefit of the conscious mind, then that person can have the best intentions and still be an abuser, and one should very much DTMFA.

Comment by Caravelle on Secrets of the eliminati · 2011-07-24T21:49:49.770Z · LW · GW

That doesn't help much. If people were told they were going to be murdered in a painless way (or something not particularly painful - for example, a shot for someone who isn't afraid of needles and has no problem getting vaccinated), most would consider this a threat and would try to avoid it.

I think most people's practical attitude towards death is a bit like Syrio Forel's in Game of Thrones - "not today". We learn to accept that we'll die someday, and we might even be okay with it, but we prefer to have it happen as far in the future as we can manage.

Signing up for cryonics is an attempt to avoid dying tomorrow - but we're not that worried about dying tomorrow. Getting out of a burning building means we avoid dying today.

(whether this is a refinement of how to understand our behaviour around death, or a potential generalized utility function, I couldn't say).

Comment by Caravelle on Guessing the Teacher's Password · 2011-07-24T20:49:06.425Z · LW · GW

"Surely they can't have formed a publishing empire, and survived to the point where they could hire assistants, solely by doing what 'the proper God-appointed authorities' told them to do? "

Dunno; I wouldn't underestimate the extent to which plain instinct can make one behave in a seemingly rational manner even though one's beliefs aren't rational. How those instincts are rationalized post-hoc, if they're rationalized at all, isn't that relevant.

"Perhaps we could tell the authors to imagine in detail what they would see if (or when) God shows them how all their facts fit together, and exactly how this would allow them to answer any and all objections."

I would agree with Eliezer's rule more than with yours here. For one thing, the issue isn't so much that L&J aren't following the right rationality rules as that, I suspect, they don't want to follow them. I don't know whether they haven't realized they have to follow them to be right, or don't care that much about being right (or, to be more accurate, are sufficiently married to their current worldview that they don't even want to consider it might be wrong), but I'm pretty sure that if someone suggested they follow either your rule or Eliezer's, they'd just stare blankly.

So there's that. But if we assume we managed to get them to listen to what we say, then I think Eliezer's rule would work much better, because it's much harder to misuse. "Ask yourself how things would actually work" is prone to rationalization; I can just picture the sentence getting translated by some brain module as "picture your current model. Nice, innit?".

Or, put another way, I think that the part of the brain that actually examines one's beliefs, and the part of the brain that gives you the warm glow of self-satisfaction from being right, are not the same part of the brain. Your question will get intercepted by the warm-glow part of the brain; Eliezer's question... will not. In fact it looks specifically designed to avoid it.

In particular, if "try to find the thought that hurts you the most" would elicit "get behind me, Satan", I'm not convinced that "try and work out how your worldview would actually work" wouldn't have the same results. Satan is the Great Deceiver, after all. How easy would it be to assume, once you meet the first contradiction, that Satan is clouding your thoughts...

Comment by Caravelle on Approving reinforces low-effort behaviors · 2011-07-24T20:27:22.445Z · LW · GW

I've never worked in a soup kitchen (although I should, because I think I might enjoy it) but I've found that often, when I voluntarily engage in a social and purely beneficial activity, I enjoy myself enormously. There's a kind of camaraderie going on; it's like the pleasure of social interaction is combining with the pleasure of Helping in just the right ways.

I don't expect it would work all the time, or for everyone. And I usually feel differently when I'm forced to do something instead of volunteering. Still, it could be a factor in why some people enjoy that sort of thing.

Comment by Caravelle on Efficient Charity: Do Unto Others... · 2011-07-24T19:25:21.938Z · LW · GW

I have a question. This article suggests that for a given utility function there is one single charity that is best, and that's the one one should give money to. That looks a bit problematic to me - for example, if everyone invests in malaria nets because that's the single charity that saves the most lives, then nobody is investing in any other kind of charity, but shouldn't those things get done too?

We can get around this by considering that the efficiency function varies with time - for example, once everybody gives their money to buy nets, the marginal cost of each saved life increases, until some other charity becomes best and all charitable giving switches to that one.

But we don't have complete and up-to-the-second knowledge of how many lives each marginal dollar will save in every charity; all we have to work with is approximations. In that situation, wouldn't it be best to have a basket of charities one gives to, with more money going to those that save the most lives, but not putting all the money on a single charity?
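
To make the marginal-cost picture concrete, here is a minimal sketch in Python - the charities, cost curves and all numbers are invented for illustration, not taken from the article. A giver who always funds the currently cheapest life starts out putting everything into a single charity, and only diversifies once crowding pushes its marginal cost past the runner-up's:

```python
# Toy model of diminishing returns, with entirely made-up numbers: a charity's
# marginal cost per life saved rises linearly with money already given to it,
# marginal_cost(x) = base + crowding * x.

def greedy_allocate(budget, charities, step=1000):
    """Give each marginal chunk of money to whichever charity currently
    saves the most lives per dollar (lowest marginal cost per life)."""
    spent = {name: 0 for name in charities}
    for _ in range(budget // step):
        best = min(charities,
                   key=lambda n: charities[n][0] + charities[n][1] * spent[n])
        spent[best] += step
    return spent

charities = {
    "nets": (2000, 1e-4),      # $2000/life at first, crowds quickly
    "vaccines": (3000, 5e-5),  # $3000/life at first, crowds slowly
}
print(greedy_allocate(20_000_000, charities))
# -> roughly {'nets': 13333000, 'vaccines': 6667000}: every dollar goes to
# nets until their marginal cost passes $3000, then the giving splits.
```

Whether uncertainty about the true curves justifies a basket is exactly the question above - the sketch only shows that with known curves, the greedy strategy diversifies on its own once crowding kicks in.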

Or is this consideration completely and utterly pointless in a world where most people do NOT act like this, and most people don't give enough money to change the game, so rational actors who don't have millions of dollars to give to charity should always give to the one that saves the most lives per dollar anyway?

Comment by Caravelle on Welcome to Less Wrong! (2010-2011) · 2011-07-24T18:21:19.952Z · LW · GW

Hello all!

I'm a twenty-seven-year-old student doing a PhD in vegetation dynamics. I've been interested in science since forever, in skepticism and rationality per se for the last few years, and I was linked to LessWrong only a few months ago and was blown away. I'm frankly disconcerted by how every single internet argument I've gotten into since has involved invoking rationality and using various bits of LessWrong vocabulary; I think the last time I absorbed a worldview that fast was from reading "How the Mind Works", lo these many years ago. So I look forward to seeing how that pans out (no, I do not think I'm being a mindless sheep - I don't agree with everything Steven Pinker said either. I'm just in the honeymoon "it all makes SENSE!" phase).

I've got to say, I'm really grateful for this great resource and to the internet for giving me access to it. Next time an old geezer tells me about how awesome the 50s and 60s were, I'll bonk them over the head. Metaphorically.

What do I value? 1) being right and 2) being good, in no particular order. I'm afraid I'm much better at the first one than the second, but reading posts here has gotten me to think a bit on how to integrate both.

Comment by Caravelle on Guessing the Teacher's Password · 2011-07-24T17:53:02.974Z · LW · GW

Hi! Yep, it's the same me, thanks for the welcome!

I don't know if I'd call integrating knowledge THE root problem of Left Behind, which has many root problems, and a lack of integration strikes me as too high-level and widespread among humans to qualify as *root* per se...

But yeah, good illustration of the principle :-)

(and thanks for the welcome link, I'd somehow missed that page)

Comment by Caravelle on You Can Face Reality · 2011-07-24T16:30:37.142Z · LW · GW

I can see the objection there, however, partly because I sort of have this issue. I've never been attacked, or mugged, or generally made to feel genuinely unsafe - those few incidents that have unsettled me have affected me far less than the social pressure I've felt to feel unsafe - people telling me "are you sure you want to walk home alone?", or "don't forget to lock the door at all times!".

I fight against that social pressure. I don't WANT the limitations and stress that come with being afraid, and the lower opinion it implies I should have of the world around me. I value my lack of fear quite highly, overall.

That said, is it really to my advantage to have a false sense of security? Obviously not. I don't want to be assaulted or hurt or robbed. If the world really is a dangerous place there is no virtue in pretending it isn't.

What I should do is work to separate my knowledge from my actions. If I really want to go home alone, I can do this without fooling myself about how risk-free it is; I can choose instead to value the additional freedom I get from going over the additional safety I'd get from not going. And if I find I don't value my freedom that highly after all, then I should change my behaviour with no regrets. And if I'm afraid that thinking my neighbourhood is unsafe will lead me to be a meaner person overall, well, I don't have to let it. If being a kind person is worth doing at all, it's worth doing in a dangerous world.

(this has the additional advantage that if I do this correctly, actually getting mugged might not change my behaviour as radically as it would if I were doing all that stuff out of a false sense of security)

Of course the truth is that it isn't that simple: our brain being what it is, we cannot completely control the way we are shaped by our beliefs. As earlier commenters have pointed out, while admitting you're gay won't affect the fact that you are gay, and doesn't by itself imply you should worsen your situation by telling your homophobic friends, our brains happen to be not that good at living a sustained lie, so in practice the admission probably will force you to change your behaviour.

Still, I don't think this makes the litany useless. I think it is possible, when we analyse our beliefs, not only to figure out how true they are but also the extent to which changing them would really force us to change our behaviour. It probably won't lead to a situation where we choose to adopt a false belief - the concept strikes me as rather contradictory - but at the end of the exercise we'd know better which behaviours we really value, and we might figure out ways to hold on to them even as our beliefs change.

Comment by Caravelle on Guessing the Teacher's Password · 2011-06-22T21:49:57.467Z · LW · GW

When I was a kid I had this book called "Thinking Physics", which was basically a book of multiple-choice physics questions (such as "an elephant and a feather are falling; which one experiences more air resistance?", or "Kepler and Galileo made telescopes around the same time and Kepler's was adopted widely; why?") aimed at pointing out where our natural instincts or presuppositions go against how physics actually works, and explaining, well, how physics actually works.
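
For the record, the standard answer to the elephant/feather one (my reconstruction, not the book's exact wording) is that the elephant experiences more air resistance yet still falls faster, because what matters for motion is drag relative to weight:

$$F_{\text{drag}} = \tfrac{1}{2}\,\rho\, C_d\, A\, v^2, \qquad a = g - \frac{F_{\text{drag}}}{m}$$

At any given speed the elephant's much larger cross-section A gives it the larger drag force, but its mass m is larger by a far bigger factor, so that drag barely dents its acceleration, while the feather reaches terminal velocity (where F_drag = mg) almost immediately.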

Really, the simple idea that physics is a habit of thought that has to be worked on, because our defaults are incorrect (or, as I realized much later, are correct only in the special case of the everyday life of a social hominid), has been helpful to me ever since, and too few people have it or realize it's important.

I think it gets to what you're saying: one shouldn't learn physics (or anything for that matter) as a list of facts or methods to apply in the classroom; one should work to integrate them into one's mental model of the world. Which is not as easy as it sounds.

Comment by Caravelle on How not to move the goalposts · 2011-06-22T14:35:20.904Z · LW · GW

I've been thinking about this question lately, and while I agree with the main thrust of your article, I don't think that giving all possible objections is always feasible (it can get really long, and sometimes there are thematic issues). Which is why I think multiple people responding tends to be a good thing.

But more to the point, I don't think I agree that RA is moving the goalposts. Because really, every position has many arguments pro and con, and even if just one is demolished the position can survive on the others. I think the arguing technique that really is problematic is abandoning position A to go to position B while still taking A as true - continuing to make arguments based on A, or going back to asserting A once B doesn't work out.

I think that if someone explicitly concedes A before going on to B, and doesn't go back to A afterwards (unless they've got new arguments, of course), they aren't doing anything wrong.