On the topic of "utilities in the prisoner's dilemma coinciding with jail time", I quote one of my guest blog posts: http://phd.kt.pri.ee/2009/01/27/the-real-prisoner-dilemma/
Two hardened criminals are taken for interrogation in separate cells. They are offered the usual deal: if neither confesses, both get one year of probation. If both confess, both do 5 years in jail. If one confesses, he goes free while the other does 10 years of hard time.
Here's what actually goes through their minds: "Okay, if neither of us confesses, we have to go back to the real world. But it's so hard out there! And if I confess, he will kill me when he gets out... so that's bad... But if both of us confess, we can just go back to jail and continue our lives!"
Lateral thinking, people ;)
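For the curious, here is a quick sketch of why the joke works game-theoretically. The utility numbers are made up for illustration (my assumption, not part of the original post): once jail is preferred to the outside world and a lone confession invites revenge, the dilemma becomes a coordination game whose best equilibrium is mutual confession.

    ACTIONS = ("silent", "confess")

    def utility(me, other):
        """Hypothetical payoffs for the jail-preferring criminals."""
        if me == "silent" and other == "silent":
            return -1    # probation, then back to the harsh real world
        if me == "confess" and other == "confess":
            return 5     # five comfortable years inside together
        if me == "confess" and other == "silent":
            return -100  # free now, killed when the partner gets out
        return 2         # ten years of hard time: rough, but still jail

    # Enumerate pure-strategy equilibria: nobody gains by switching action.
    for a in ACTIONS:
        for b in ACTIONS:
            if (utility(a, b) >= max(utility(d, b) for d in ACTIONS) and
                    utility(b, a) >= max(utility(d, a) for d in ACTIONS)):
                print("equilibrium:", a, b, "->", utility(a, b), utility(b, a))

This prints both (silent, silent) and (confess, confess) as equilibria, with mutual confession clearly the better of the two.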
I'm just reading Thomas Schelling's The Strategy of Conflict, and one of his key tenets is that providing a salient, identifiable point tends to make a negotiation converge on it (classic anchoring). However, he points out that in many cases having a "line in the sand" benefits all sides by allowing intermediate deals to be struck where only extremes were possible before.
This article, however, clearly demonstrates that a line in the sand can be just as harmful as it can be helpful, as is the case with all biases. Still, I really recommend Schelling's discussion of what is good (in the evolutionary sense) about this phenomenon.
But three people should already suffice. I'm fairly convinced that this game is unstable, in the sense that it would not make sense for any of them to agree to get 1/3, as any player can always guarantee himself more by defecting with someone (even an offer of 1/6 - epsilon is REALLY hard to turn down). It seems that a given majority getting 1/2 each would be a more probable solution, but you would really need to formalize the rules before this can be proven. I'm a cryptologist, so this is sadly not really my area...
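To illustrate the instability, here is a minimal sketch assuming the classic "divide the dollar by majority vote" formalization (my choice of rules, since the comment above doesn't pin them down): for any proposed split, some two-player coalition can outvote the excluded player and strictly improve both of its members.

    def blocking_pair(split):
        """Return a two-player coalition that can outvote the excluded
        player and strictly improve both members; None if the split is stable."""
        for i, j in [(0, 1), (0, 2), (1, 2)]:
            slack = 1.0 - (split[i] + split[j])  # the excluded player's share
            if slack > 0:
                # the pair redistributes the excluded player's share among itself
                return i, j, (split[i] + slack / 2, split[j] + slack / 2)
        return None

    print(blocking_pair((1/3, 1/3, 1/3)))  # blocked: two players grab the third's share
    print(blocking_pair((0.5, 0.5, 0.0)))  # even this is blocked, by the pair (0, 2)

Since every split that gives the excluded player anything at all is blocked, no agreement is stable under these rules (the core of the game is empty).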
Sorry. I thought about things a little and realized that a few things about prospect theory definitely need to be scrapped as bad ideas, the probability weighting for instance. But other quirks (such as loss aversion, or having different utilities for losses vs. gains) might be useful to retain...
It would really be good if I knew a bit more about the different decision theories at this point. Does anyone have a good reference that gives an overview of the field?
One thing that came to mind just this morning: why is expected utility maximization the most rational thing to do? As I understand it (and I'm a CS major, not an Econ one), prospect theory and the utility-function weighting used in it are usually accepted as descriptions of how most "irrational" people make their decisions. But this might not be because those people are irrational, but rather because our utility functions actually do behave that way, in which case we should abandon EU and just try to maximize well-being with all the quirks PT introduces (such as losses weighing more heavily than gains, and so on)...
Or is this how most people here already do things? Any and all feedback on this idea would be really appreciated (especially links to relevant discussions, as I am sure I'm not the first to come up with it).
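To make the idea concrete, here is a small sketch using the value and probability-weighting functions from Tversky and Kahneman's 1992 cumulative prospect theory paper, with their published median parameter estimates. The 50/50 gamble itself is made up, and for simplicity I apply the weights per outcome in the original 1979 style and use the gain-side weighting curve for both signs.

    ALPHA, LAMBDA, GAMMA = 0.88, 2.25, 0.61  # median estimates from TK 1992

    def value(x):
        """S-shaped value function: losses loom about 2.25x larger than gains."""
        return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

    def weight(p):
        """Inverse-S probability weighting: small probabilities are overweighted."""
        return p ** GAMMA / (p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA)

    gamble = [(0.5, 100.0), (0.5, -100.0)]  # a fair coin flip for 100 units

    ev = sum(p * x for p, x in gamble)                 # 0.0: linear EU is indifferent
    pt = sum(weight(p) * value(x) for p, x in gamble)  # about -30: the flip is refused

    print(ev, pt)

So an agent whose well-being genuinely follows these curves would quite consistently refuse fair coin flips that a linear expected-value maximizer is indifferent to, which is exactly the question above: is that a bug to be fixed, or a utility function to be respected?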
Hello,
My name is Margus Niitsoo and I'm a 22-year-old Computer Science doctoral student in Tartu, Estonia. I have wide interests that span religion and psychology as well (I am a pantheist, by the way, so somewhat religious but unaffected by most of the classical theism bashing). I got here through OB, which I found while reading about AI and the thing that shall not be named.
I do not identify myself as a rationalist, for I only recently understood how emotional a person I really am, and I'd like to enjoy it before trying to get it under control again. However, I am interested in understanding human behaviour as best I can, and this blog has given me many new insights I doubt I could have gotten anywhere else.
Another thing that comes off the top of my head: one might try to get groups that are already (in theory) interested in this topic to read LW and OB. One such group I can think of is LaVeyan Satanists. In theory it is a religion of rationality (although in practice it is often rather far from that; I'm just lucky to know a specimen who embodies the theory). Then again, this might not be an association we want (especially in the US; it would be rather bad even here in Estonia, where most of the country is atheistic). But there should be other groups who hold rationality as one of their core values yet know relatively little about it. These people should be rather easy to win over, just by stressing that it is one of their own core values...
The game of "Paranoid Debating" ( http://lesswrong.com/lw/77/selecting_rationalist_groups/6lb ) would make for a great game show, and it would definitely increase the popularity of rationality. Someone should try pitching it to a TV station...
Just reminding everyone of one more sad thing: every good cause to rally people under generally needs an enemy. And if there isn't one, it usually develops or is found. People somehow just want to be against things rather than for them...
Also, atheism seems to be one of the few things most of us here have in common, so Matt Newport's post hits the nail on the head there. We have a tradition of bashing theism. Traditions go a long way towards cementing a sense of community, so they do have a positive side. But once a tradition has developed, people who break it are usually viewed as outsiders in some sense, so it makes sense for people to stick to the traditions.
Mental energy is actually a limiting factor, and I believe it causes more failures than people care to admit. We humans have a tendency to pick our battles: we have a limited amount of time and thinking resources, and so we invest large amounts of both in only a very small set of decisions. This means that most decisions get made rather automatically, which (as has been argued in previous articles) is rather normal. However, I think a rationalist should be able to determine whether the thing he messed up was something he clearly did without paying it much attention (and thus did as well as he could given his very limited resources of time), or whether he really did invest a lot of consideration in it and just messed up. In both cases, lessons of course need to be learned and priorities adjusted (the fact that you acted automatically might need correcting, so that next time you WOULD actually think in that situation), but I still believe we cannot hold ourselves to the highest of rational standards for every single decision we make.
Maybe this is what the original author meant by saying that his mental energy budget is limited. Anyway, I thought this aspect required further discussion...
One thing might be worth mentioning. For most religions, helping others is one of the shared core values. This means that everyone joining a church can expect (even on a rational level) to receive a warm welcome. As the communities themselves also feel they ought to be warm to newcomers, that is usually what happens, too.
The problem rationalists face is that the community is essentially dog-eat-dog: everyone tries their best to scrutinize others' thoughts (as this is the "rational" thing to do) and then to bash the hell out of them for every small detail missed, because "this is critical for becoming more rational", although more often than not it can probably be ascribed to that person wanting to show that he is smarter and a "better rationalist". And people fear this criticism, even when it might not materialize. They know that in order to be accepted, they need to be really rational. This sets rather high standards of self-confidence for joining.
What religious groups have over us is that for them, the competition to be a better Christian/Muslim/Jew does not interfere with community formation but rather helps it. Rationalists have it the other way around, and frankly I see no way to cure this while still remaining a "rationalist group". If anyone can solve it, or at least offer possible solutions, it might help the cause greatly.
I just noticed a possibly relevant study cited in a classic social psychology textbook I was reading (Baron & Byrne). The article they refer to is Graziano et al., "Social influence, sex differences, and judgments of beauty: Putting the interpersonal back in interpersonal attraction" (1993), and the relevant result is that when women are shown another woman's assessment of a man, their own assessment moves towards it. This would make the result discussed here just a corollary of a self-fulfilling prophecy. I do not have time to read either of the articles at the moment, but it would probably do some good if someone looked over the Graziano paper and verified whether it is relevant.